-
-This page is about Aurora 3D Templates, contains Aurora Printable Disney ... 3D Presentation 16.01.07 Multilingual Full Keygen, Aurora 3D Text & Logo Maker and more... ... Aurora 3D Text & Logo Maker 20.01.30 + Crack. Aurora 3D Templates Aurora 3D Text & Logo Maker - Free download and software reviews - CNET ... 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Elysium 720p In Dual Audio Hindi [REPACK].md b/spaces/1gistliPinn/ChatGPT4/Examples/Elysium 720p In Dual Audio Hindi [REPACK].md
deleted file mode 100644
index 352d8c598eb12f43f51a2bb9de2267a820d07860..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Elysium 720p In Dual Audio Hindi [REPACK].md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Elysium: A Dystopian Sci-Fi Thriller in Hindi
-
Elysium is a 2013 American dystopian science fiction action film written, produced, and directed by Neill Blomkamp. It was Blomkamp's second directorial effort. The film stars Matt Damon and Jodie Foster alongside Sharlto Copley, Alice Braga, Diego Luna, Wagner Moura, and William Fichtner. [^2^]
-
The film takes place on both a ravaged Earth and a luxurious artificial world called Elysium. The planet's citizens live in poverty while the rich and powerful live on Elysium, an orbiting space station just outside of Earth's atmosphere. Spider, a hacker living on Earth in Los Angeles, runs three space shuttle flights to Elysium to smuggle people in to use their Med-Bays, devices that can heal any disease or condition. Elysium Defense Secretary Delacourt shoots down two of the spacecraft in space, killing everyone on board, and has everyone on the shuttle that does reach Elysium arrested and deported. Elysium President Patel reprimands Delacourt for her actions, threatening her with dismissal if she does anything similar again. In retaliation, she offers Armadyne Corp CEO John Carlyle defense contracts for life in exchange for a program that will allow Delacourt to conduct a coup and install herself as president. Carlyle writes the program and stores it inside his brain. [^2^]
On Earth, parolee Max Da Costa is working for Armadyne Corp as a laborer when he is accidentally exposed to a lethal dose of radiation. He is given medication and told he has five days to live after being dismissed by Carlyle from Armadyne. Max and his friend Julio approach Spider and make a deal: If Max can successfully steal information from a powerful Elysium citizen, in exchange Spider will give Max a shuttle ride to Elysium to use a Med-Bay to cure his condition. Max agrees to steal the information from Carlyle's brain, unaware that it contains the coup program. [^2^]
-
When Delacourt learns that the information she needs to become president was stolen from Carlyle's brain, she sends the notorious agent Kruger to hunt down Max and recover the software at any cost. Max must fight for his life and his chance to reach Elysium before it's too late. [^2^]
-
Elysium is a film that offers deliberate social commentary, exploring political and sociological themes such as immigration, overpopulation, transhumanism, health care, worker exploitation, the justice system, technology, and social class. [^2^] It received positive reviews from critics but was considered disappointing compared to Blomkamp's first film, District 9. [^2^] It grossed $286 million worldwide and was released on DVD and Blu-ray on December 17, 2013. [^2^]
-
If you are looking for a thrilling and thought-provoking sci-fi film with amazing visuals and action scenes, you should watch Elysium in Hindi dubbed dual audio 720p quality. You can download it from this link [^1^] or watch it online on various streaming platforms.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Frontdesigner 3 0 Full !EXCLUSIVE! Version Download.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Frontdesigner 3 0 Full !EXCLUSIVE! Version Download.rar.md
deleted file mode 100644
index e7ac2154175f7ceca164340f12f78a3bb1db9a7e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Frontdesigner 3 0 Full !EXCLUSIVE! Version Download.rar.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-August 14, 2020 - IBM Lotus Domino Server 8.5.3 64-bit. Note that Lotus Domino 8.5 was a 64-bit version of the application, and the internal data structures are 1 byte.
-There are currently no 64-bit versions available in the environment, so we cannot use 64-bit types and data structures, but we can use 32-bit types and data structures such as int32, which is 4 bytes in size.
-How can I tell whether a version uses 64-bit or 32-bit frameworks, so that I can use 64-bit builds with 64-bit applications and 32-bit builds with 32-bit applications?
-Thanks. 8a78ff9644
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Gameplay How to Sync Your Progress and Earn Rewards Across Devices.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Gameplay How to Sync Your Progress and Earn Rewards Across Devices.md
deleted file mode 100644
index cf60ec8ffa4570f0df42de15e8eb78dd6135b5ce..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Gameplay How to Sync Your Progress and Earn Rewards Across Devices.md
+++ /dev/null
@@ -1,162 +0,0 @@
-
-
Android Gameplay: A Guide for Gamers and Developers
-
Android gameplay is also popular because it offers a wide range of games in different genres, catering to different tastes and preferences. Whether you are looking for adventure, arcade, puzzle, racing, strategy, or any other type of game, you can find it on Android. Here are some of the best Android games in different genres that you can try out:
Adventure Games
Adventure games are games that involve exploration, story, puzzles, and interaction with characters and environments. Adventure games often have immersive graphics, sound effects, and voice acting that create a rich and engaging experience. Some of the best adventure games on Android are:
-
-
The Room series: The Room is a series of puzzle games that challenge you to solve mysterious and intricate contraptions in a dark and atmospheric setting. The games have stunning graphics, realistic physics, and captivating stories that will keep you hooked. The latest game in the series is The Room: Old Sins, which takes you to a haunted dollhouse where you must uncover the secrets of a missing engineer and his wife.
-
Lara Croft GO: Lara Croft GO is a turn-based puzzle game that follows the iconic heroine as she explores ancient ruins and faces deadly enemies. The game has beautiful visuals, clever puzzles, and a relaxing soundtrack that make it a perfect game for casual gamers. You can also unlock new outfits and collectibles as you progress through the game.
-
Life is Strange: Life is Strange is a choice-based adventure game that tells the story of Max Caulfield, a teenage girl who discovers that she can rewind time and change the course of events. The game has a compelling plot, realistic characters, and multiple endings that depend on your choices. You can also use your camera to take photos and collect memories along the way.
-
-
Arcade Games
-
Arcade games are games that are fast-paced, simple, and addictive. Arcade games usually have simple controls, high scores, and endless levels that test your reflexes and skills. Some of the best arcade games on Android are:
-
-
Subway Surfers: Subway Surfers is an endless runner game that has you running away from the police on a subway track. You can swipe left or right to dodge obstacles, jump or roll to avoid trains, and collect coins and power-ups to boost your score. You can also customize your character and unlock new hoverboards and jetpacks.
-
Fruit Ninja: Fruit Ninja is a classic arcade game that has you slicing fruits with your finger as they fly across the screen. You can play in different modes, such as Classic, Arcade, Zen, or Online Multiplayer, and use different blades and dojos to enhance your gameplay. You can also unlock achievements and compete with your friends on the leaderboards.
-
Angry Birds 2: Angry Birds 2 is the sequel to the popular arcade game that has you launching birds at pigs using a slingshot. The game has improved graphics, new levels, new birds, new powers, and new challenges that make it more fun and exciting than ever. You can also join clans and play with other players online.
-
Puzzle Games
-
Puzzle games are games that require logic, strategy, and problem-solving skills. Puzzle games often have relaxing graphics, sound effects, and music that create a soothing and satisfying experience. Some of the best puzzle games on Android are:
-
-
Monument Valley 2: Monument Valley 2 is a beautiful puzzle game that has you guiding a mother and her child through a world of impossible architecture and optical illusions. The game has stunning visuals, enchanting music, and clever puzzles that will challenge your perception and imagination.
-
Candy Crush Saga: Candy Crush Saga is a popular puzzle game that has you matching candies of the same color to clear them from the board. The game has hundreds of levels, each with different goals and obstacles. You can also use boosters and power-ups to help you along the way. You can also play with your friends and compete for the highest score.
-
2048: 2048 is a simple but addictive puzzle game that has you sliding tiles with numbers on them to combine them and create larger numbers. The game ends when you reach the 2048 tile or when there are no more moves left. You can also try different modes and variations of the game, such as 4x4, 5x5, 6x6, or Fibonacci. A tiny sketch of the merge step appears right after this list.
-
-
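If you are curious how that tile-merging works under the hood, here is a rough illustrative Python sketch of a single leftward move on one row. This is a reconstruction of the commonly described rule, not the game's actual code, and the real game also tracks scoring and spawns new tiles after each move.

```python
def merge_row_left(row):
    """Slide non-zero tiles left and merge equal neighbours once per move."""
    tiles = [value for value in row if value != 0]   # drop the empty cells
    merged = []
    i = 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)              # two equal tiles combine into their sum
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (len(row) - len(merged))   # pad back to the board width

print(merge_row_left([2, 2, 4, 0]))  # [4, 4, 0, 0]
```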
Racing Games
-
Racing games are games that involve driving or riding vehicles at high speeds and competing with other racers or against the clock. Racing games often have realistic graphics, sound effects, and physics that create a thrilling and immersive experience. Some of the best racing games on Android are:
-
-
Asphalt 9: Legends: Asphalt 9: Legends is a stunning racing game that has you driving some of the most prestigious cars in the world on exotic locations. The game has amazing graphics, smooth controls, and a variety of modes and events. You can also customize your cars, join clubs, and play with other players online.
-
Mario Kart Tour: Mario Kart Tour is a fun racing game that features characters and tracks from the Mario franchise. The game has colorful graphics, catchy music, and easy controls. You can also use items and power-ups to boost your speed or hinder your opponents. You can also play with your friends and compete in rankings and tournaments.
-
Real Racing 3: Real Racing 3 is a realistic racing game that has you driving some of the most authentic cars on real tracks around the world. The game has impressive graphics, realistic physics, and a variety of modes and challenges. You can also upgrade your cars, join teams, and play with other players online.
-
Strategy Games
-
Strategy games are games that involve planning, decision-making, and resource management. Strategy games often have complex and challenging gameplay that require tactical thinking and long-term vision. Some of the best strategy games on Android are:
-
-
Clash of Clans: Clash of Clans is a popular strategy game that has you building and defending your own village from other players. You can also join clans, train troops, and attack other villages to loot resources and trophies. You can also participate in clan wars, events, and seasons to earn rewards and bonuses.
-
Plants vs. Zombies 2: Plants vs. Zombies 2 is a fun strategy game that has you planting and growing plants to fend off waves of zombies. The game has colorful graphics, humorous characters, and a variety of modes and levels. You can also collect and upgrade your plants, travel through time and space, and face boss battles.
-
Civilization VI: Civilization VI is a classic strategy game that has you leading a civilization from the ancient to the modern era. You can choose from different leaders, cultures, and policies to shape your civilization's history and destiny. You can also explore, expand, exploit, and exterminate other civilizations on a randomly generated map.
-
-
Other Genres
-
Of course, there are many other genres of games that you can enjoy on Android, such as role-playing, simulation, sports, trivia, word, and more. You can browse the Google Play Store to discover new and trending games in different categories. You can also check out some of the best Android games of 2023 according to TechRadar and PCMag.
-
The Best Android Gaming Tips and Tricks
-
Now that you have some ideas of what games to play on Android, you might want to know some tips and tricks to enhance your gaming experience. Here are some of the best Android gaming tips and tricks that you can use:
How to use digital wellbeing features
Android has some digital wellbeing features that can help you manage your screen time and avoid distractions while gaming. For example, you can use Focus mode to pause notifications from certain apps while you play. You can also use Bedtime mode to dim your screen and mute sounds at night. You can access these features from the Settings app or the Quick Settings panel.
-
How to use voice search and commands
-
Android has a built-in voice assistant called Google Assistant that can help you search for games, launch apps, control settings, and more with your voice. You can activate Google Assistant by saying "Hey Google" or by tapping the microphone icon on the search bar or the home screen. You can then ask Google Assistant questions or give commands related to gaming, such as "What are some good racing games?" or "Turn on Do Not Disturb mode".
-
How to uninstall unwanted apps
-
How to sync your progress across devices
-
If you have more than one Android device, you might want to sync your game progress across them so that you can continue playing where you left off. You can do this by using Google Play Games, a service that lets you save your game data, achievements, and leaderboards online. You can sign in to Google Play Games with your Google account and enable cloud save for the games that support it. You can also use Google Play Games to play with other players online and discover new games.
-
How to earn rewards with Google Play Points
-
Google Play Points is a program that rewards you for using the Google Play Store. You can earn points by downloading and playing games, making in-app purchases, subscribing to services, and more. You can then redeem your points for rewards, such as discounts, coupons, free apps, and more. You can also use your points to support causes that you care about. You can join Google Play Points for free and start earning points today.
-
The Mobile Gaming Market Statistics and Trends
-
Android gameplay is not only fun and exciting, but also lucrative and influential. The mobile gaming market is one of the fastest-growing and most profitable segments of the gaming industry. Here are some of the mobile gaming market statistics and trends that you should know:
-
The global and U.S. revenue and user numbers
-
According to Statista, the global mobile gaming market generated $86.3 billion in revenue in 2020, accounting for 49% of the total gaming market. The number of mobile gamers worldwide reached 2.7 billion in 2020, representing 34% of the global population. The U.S. mobile gaming market generated $13.9 billion in revenue in 2020, ranking second after China. The number of mobile gamers in the U.S. reached 203 million in 2020, representing 61% of the U.S. population.
-
The most downloaded and highest-grossing apps
-
According to Statista, the most-downloaded Android gaming app worldwide in September 2022 was Garena Free Fire, with 63 million downloads. The second-most downloaded app was Subway Surfers, with 40 million downloads, followed by Among Us, with 38 million downloads. The highest-grossing Android gaming app worldwide in September 2022 was Honor of Kings, with $240 million in revenue. The second-highest grossing app was PUBG Mobile, with $198 million in revenue, followed by Genshin Impact, with $156 million in revenue.
-
The most popular genres and platforms
-
The future projections and opportunities
-
According to Statista, the global mobile gaming market is expected to grow to $116.4 billion in revenue by 2024, with a compound annual growth rate of 7.7%. The number of mobile gamers worldwide is expected to reach 3.1 billion by 2024, with a compound annual growth rate of 3.6%. The U.S. mobile gaming market is expected to grow to $18.8 billion in revenue by 2024, with a compound annual growth rate of 7.8%. The number of mobile gamers in the U.S. is expected to reach 222 million by 2024, with a compound annual growth rate of 2.3%.
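As a quick sanity check, growing the 2020 figures quoted above at the stated compound annual growth rates for four years roughly reproduces the 2024 projections, with small differences coming from rounding. A minimal Python sketch of that arithmetic:

```python
def project(value_2020, cagr, years=4):
    """Grow a 2020 figure at a constant annual rate for the given number of years."""
    return value_2020 * (1 + cagr) ** years

# (label, 2020 figure, stated CAGR, quoted 2024 projection) taken from the paragraph above
checks = [
    ("Global revenue ($bn)", 86.3, 0.077, 116.4),
    ("Global gamers (bn)",    2.7, 0.036,   3.1),
    ("U.S. revenue ($bn)",   13.9, 0.078,  18.8),
    ("U.S. gamers (m)",     203.0, 0.023, 222.0),
]
for label, start, cagr, quoted in checks:
    print(f"{label}: projected {project(start, cagr):.1f} vs quoted {quoted}")
```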
-
The mobile gaming market offers many opportunities for both gamers and developers, as technology, innovation, and creativity continue to evolve and improve. Some of the trends and opportunities that are shaping the future of mobile gaming are:
-
-
Cloud gaming: Cloud gaming is a service that allows gamers to stream games from remote servers without downloading or installing them on their devices. Cloud gaming enables gamers to access high-quality games on any device, regardless of their hardware specifications or storage capacity. Cloud gaming also reduces the cost and complexity of game development and distribution for developers. Some of the cloud gaming services that are available or in development are Google Stadia, Microsoft xCloud, Amazon Luna, and Nvidia GeForce Now.
-
5G technology: 5G technology is the next generation of wireless communication that offers faster speed, lower latency, higher bandwidth, and more reliability than 4G technology. 5G technology enables gamers to enjoy smoother and more immersive gameplay, especially for online multiplayer and cloud gaming. 5G technology also allows developers to create more complex and realistic games that can leverage the full potential of mobile devices.
-
Augmented reality and virtual reality: Augmented reality (AR) and virtual reality (VR) are technologies that create interactive and immersive experiences by overlaying digital elements on the real world or creating a simulated environment. AR and VR enable gamers to experience games in a new and exciting way, as they can interact with their surroundings and feel more immersed in the game world. AR and VR also offer new possibilities for game design and storytelling for developers. Some of the AR and VR games that are available or in development are Pokemon Go, Harry Potter: Wizards Unite, Minecraft Earth, Beat Saber, Half-Life: Alyx, and The Walking Dead: Saints & Sinners.
-
-
Conclusion
-
Android gameplay is a fascinating and diverse topic that covers many aspects of gaming on Android devices. Android gameplay offers many benefits for both gamers and developers, such as entertainment, challenge, learning, creativity, market potential, platform flexibility, community support, and more. Android gameplay also offers a wide range of games in different genres, such as adventure, arcade, puzzle, racing, strategy, and more. Android gameplay also has some tips and tricks that can enhance your gaming experience, such as using digital wellbeing features, voice search and commands, uninstalling unwanted apps, syncing your progress across devices, and earning rewards with Google Play Points. Android gameplay also has some statistics and trends that show its growth and popularity in the global and U.S. markets, as well as its future projections and opportunities in terms of technology, innovation, and creativity.
-
If you are interested in android gameplay, you can try out some of the games that we have recommended in this article or explore other games on the Google Play Store. You can also use Google Play Games to save your game data online, play with other players online, and discover new games. You can also use Google Assistant to search for games, launch apps, control settings, and more with your voice.
-
If you are a developer or aspiring to be one, you can use Android Studio to create your own games for Android devices. You can also use Firebase to add features such as authentication, database, storage, analytics, and more to your games. You can also use Google Play Console to publish your games on the Google Play Store and reach millions of users worldwide.
-
We hope that this article has given you some useful information and insights about android gameplay. We also hope that you have enjoyed reading it as much as we have enjoyed writing it. Thank you for your time and attention.
-
Frequently Asked Questions
-
Here are some frequently asked questions about android gameplay that you might find helpful:
-
What are some of the advantages of android gameplay over other platforms?
-
Some of the advantages of android gameplay over other platforms are:
-
-
Android devices have a larger and more diverse user base than other devices
-
Android devices have a more open and flexible platform than other devices
-
Android devices have a more diverse and dynamic range of games than other devices
-
Android devices have more features and functions that can enhance gaming than other devices
-
-
What are some of the challenges or drawbacks of android gameplay?
-
Some of the challenges or drawbacks of android gameplay are:
-
-
Android devices have a lower performance and battery life than other devices
-
Android devices have a higher risk of malware and security issues than other devices
-
Android devices have a more fragmented and inconsistent platform than other devices
-
Android devices have a more competitive and saturated market than other devices
-
-
How can I improve my android gameplay experience?
-
Some of the ways that you can improve your android gameplay experience are:
-
-
Choose games that are compatible and optimized for your device model and specifications
-
Update your device software and apps regularly to ensure smooth and secure performance
-
Clear your device storage and cache frequently to free up space and speed up your device
-
Adjust your device settings and preferences to suit your gaming needs and preferences
-
Use accessories such as headphones, controllers, stands, chargers, and more to enhance your gaming comfort and convenience
-
-
How can I learn more about android gameplay?
-
Some of the ways that you can learn more about android gameplay are:
-
-
Read blogs, magazines, reviews, guides, and news about android gameplay online or offline
-
Watch videos, podcasts, streams, tutorials, and demos about android gameplay online or offline
-
Join forums, communities, groups, and events about android gameplay online or offline
-
Ask questions, share opinions, give feedback, and exchange tips about android gameplay online or offline
-
Play games, experiment with features, explore genres, and discover new games on Android
-
-
What are some of the best resources for android gameplay?
-
Some of the best resources for android gameplay are:
-
-
The Google Play Store: The Google Play Store is the official app store for Android devices that offers millions of games in different categories. You can browse, download, update, rate, review, and share games on the Google Play Store. You can also access Google Play Games, Google Play Points, Google Play Pass, and Google Play Protect on the Google Play Store.
-
The Android Developers website: The Android Developers website is the official website for Android developers that offers tools, documentation, guides, tutorials, courses, and more for creating games for Android devices. You can access Android Studio, Firebase, Google Play Console, Google Play Services, and more on the Android Developers website.
-
The Android Authority website: The Android Authority website is one of the leading websites for Android news, reviews, tips, tricks, and more. You can find articles, videos, podcasts, newsletters, deals, and more about android gameplay on the Android Authority website.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 APK How to Install and Play the Latest Version of the Epic Monkey Game.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 APK How to Install and Play the Latest Version of the Epic Monkey Game.md
deleted file mode 100644
index 6d8e5acdfb2be63e8847df3df1a955475dfe1cf7..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bloons TD 6 APK How to Install and Play the Latest Version of the Epic Monkey Game.md
+++ /dev/null
@@ -1,162 +0,0 @@
-
-
Bloons TD 6 APK: A Fun and Challenging Tower Defense Game
-
If you are a fan of tower defense games, you might have heard of Bloons TD 6, the latest installment in the popular Bloons TD series. But did you know that you can download and play Bloons TD 6 APK on your device for free? In this article, we will tell you everything you need to know about Bloons TD 6 APK, including what it is, how to download and install it, how to play and enjoy it, and more. Read on to find out more!
Bloons TD 6 is a 3D tower defense game developed and published by Ninja Kiwi, a New Zealand-based game studio. It was released in June 2018 for iOS, Android, Windows, and Mac platforms. It is the sixth main game in the Bloons TD series, which has been around since 2007.
-
The history and features of the Bloons TD series
-
The Bloons TD series is one of the most popular and successful tower defense franchises in the world. It has over a billion downloads across all platforms, and has received positive reviews from critics and players alike. The series is known for its colorful graphics, humorous animations, addictive gameplay, and diverse content.
-
The premise of the series is simple: you have to stop the invading balloons (called bloons) from reaching the end of the path by placing various monkey towers along the way. Each monkey tower has its own unique abilities, upgrades, and activated powers that can help you pop the bloons. The bloons come in different types, colors, and sizes, each with their own characteristics and resistances. Some bloons can even contain other bloons inside them, making them harder to pop.
-
The series has evolved over time, adding new features and improvements with each game. Some of the notable features include:
Co-op mode: You can team up with up to three other players online or locally to play any map or mode together.
-
Heroes: You can choose from a roster of 14 heroes, each with their own personality, voiceover, signature upgrades, and special abilities. Heroes level up automatically during the game, making them more powerful as you progress.
-
Monkey Knowledge: You can unlock over 100 meta-upgrades that add passive bonuses to your monkey towers or gameplay mechanics.
-
Trophy Store: You can earn trophies by completing various achievements or events, and use them to buy cosmetic items that customize your monkeys, bloons, animations, music, and more.
-
Content Browser: You can create your own challenges and odysseys using various settings and modifiers, and share them with other players online. You can also browse and play the most liked and played community content.
-
-
The gameplay and content of Bloons TD 6
-
Bloons TD 6 offers a huge amount of gameplay and content for players of all skill levels and preferences. The game has over 60 handcrafted maps, each with their own theme, layout, difficulty, and special rules. The maps range from easy to expert, and some of them have alternate versions that change the bloon spawns or tower placements.
-
The advantages and disadvantages of downloading the APK version
-
Bloons TD 6 is a premium game that costs $6.99 on the official app stores. However, you can also download and play Bloons TD 6 APK for free from various third-party sources. APK stands for Android Package Kit, and it is the file format used by Android devices to install and distribute apps. By downloading Bloons TD 6 APK, you can enjoy the game without paying anything.
-
However, there are also some drawbacks and risks of using Bloons TD 6 APK. Here are some of them:
-
-
You may not get the latest updates and features of the game, as the APK version may not be updated as frequently as the official version.
-
You may encounter compatibility issues or bugs that affect the performance or stability of the game.
-
You may not be able to access some online features or modes of the game, such as co-op, boss events, odysseys, or content browser.
-
You may violate the terms of service or privacy policy of Ninja Kiwi, and risk getting banned or suspended from the game.
-
You may expose your device to malware or viruses that can harm your data or security.
-
-
Therefore, you should weigh the pros and cons of downloading Bloons TD 6 APK before deciding to do so. You should also make sure that you download Bloons TD 6 APK from a reputable and trustworthy source, and scan it with antivirus software before installing it.
-
How to download and install Bloons TD 6 APK on your device?
-
If you have decided to download and install Bloons TD 6 APK on your device, you will need to follow some steps to do so. The steps may vary depending on whether you are using an Android device or a PC. Here are the steps for both platforms:
-
The steps to download and install Bloons TD 6 APK on Android
-
-
Go to a reliable website that offers Bloons TD 6 APK, such as . Make sure that the website is safe and secure, and that the APK file is compatible with your device.
-
Tap on the download button to start downloading Bloons TD 6 APK. You may need to allow downloads from unknown sources in your device settings.
-
Once the download is complete, locate the APK file in your device storage and tap on it to start installing it. You may need to grant some permissions for the app to run properly.
-
Wait for the installation to finish, and then launch Bloons TD 6 from your app drawer or home screen. Enjoy!
-
-
The steps to download and install Bloons TD 6 APK on PC
-
-
Go to a reliable website that offers Bloons TD 6 APK, such as . Make sure that the website is safe and secure, and that the APK file is compatible with your PC.
-
Click on the download button to start downloading Bloons TD 6 APK. You may need to save it in a folder where you can easily find it later.
-
Download and install an Android emulator on your PC, such as BlueStacks, NoxPlayer, or LDPlayer. An Android emulator is a software that allows you to run Android apps on your PC.
-
Launch the Android emulator and sign in with your Google account. You may need to create one if you don't have one already.
-
Drag and drop the Bloons TD 6 APK file into the emulator window, or use the built-in browser to locate and install it. You may need to allow installations from unknown sources in the emulator settings.
-
Wait for the installation to finish, and then launch Bloons TD 6 from the emulator app drawer or home screen. Enjoy!
-
-
The precautions and risks of using Bloons TD 6 APK
-
As mentioned earlier, using Bloons TD 6 APK comes with some potential dangers and disadvantages. Therefore, you should take some precautions and be aware of the risks before playing Bloons TD 6 APK. Here are some tips to help you:
-
-
Always backup your data before installing or updating Bloons TD 6 APK. You can use a cloud service or an external storage device to do so.
-
Always scan Bloons TD 6 APK with antivirus software before installing it. You can use a reputable antivirus app on your device or PC, such as Avast, McAfee, or Kaspersky.
-
-
Always check the reviews and ratings of Bloons TD 6 APK on the website where you download it. You can also read the comments and feedback from other users who have tried it. This can help you avoid fake or malicious APK files.
-
Always update Bloons TD 6 APK whenever a new version is available. This can help you fix any bugs or glitches, and enjoy the latest features and content of the game.
-
Always respect the rights and policies of Ninja Kiwi, the developer and publisher of Bloons TD 6. Do not use Bloons TD 6 APK to cheat, hack, or exploit the game. Do not distribute or share Bloons TD 6 APK without permission. Do not claim ownership or credit for Bloons TD 6 APK.
-
-
By following these tips, you can reduce the risks and enhance the experience of playing Bloons TD 6 APK. However, you should still be careful and responsible when using Bloons TD 6 APK, as there is no guarantee that it will work perfectly or safely on your device or PC.
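One extra low-effort precaution, alongside an antivirus scan, is to note the SHA-256 hash of an APK file you have already verified and re-check it before reinstalling or sharing it. The sketch below is only an illustration; the file name is a placeholder for whatever you actually downloaded.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory use low."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder name: point this at the APK file you actually downloaded.
print(sha256_of("bloons_td_6.apk"))
```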
-
How to play and enjoy Bloons TD 6?
-
Bloons TD 6 is a fun and challenging tower defense game that can keep you entertained for hours. Whether you are a beginner or an expert, you can find something to suit your taste and skill level in Bloons TD 6. Here are some tips on how to play and enjoy Bloons TD 6:
-
The basic tips and tricks for beginners
-
If you are new to Bloons TD 6, you may want to start with the tutorial mode, which will teach you the basics of the game, such as how to place towers, upgrade them, use powers, and pop bloons. You can also play the easy maps and modes first, to get familiar with the game mechanics and strategies.
-
Here are some basic tips and tricks for beginners:
-
-
Try different combinations of monkey towers and heroes, and see what works best for you. Each tower and hero has its own strengths and weaknesses, and can synergize well with others.
-
Upgrade your monkey towers wisely, and don't spend all your money on one tower. You can choose from three upgrade paths for each tower, each with five tiers of upgrades. The higher tiers are more expensive but more powerful.
-
Use your activated powers and abilities sparingly, and save them for when you really need them. Activated powers are consumable items that can give you an edge in the game, such as extra lives, cash, or damage. Abilities are special skills that your towers or heroes can use once they reach a certain level.
-
Pop as many bloons as possible, and don't let them escape. Each bloon that escapes will cost you one life (or more, depending on the bloon type). If you lose all your lives, you will lose the game.
-
Have fun and experiment with different settings and modifiers. You can change the game speed, difficulty, mode, map, and more to suit your preference and challenge yourself.
-
-
The best strategies and builds for advanced players
-
If you are an experienced player of Bloons TD 6, you may want to try some of the harder maps and modes, such as impoppable, chimps, or expert. These modes will test your skills and knowledge of the game, and require you to use more advanced strategies and builds.
-
Here are some of the best strategies and builds for advanced players:
-
-
Use monkey knowledge points to unlock meta-upgrades that can boost your performance in the game. Monkey knowledge points are earned by leveling up in the game, and can be spent on various categories of upgrades, such as primary, military, magic, support, powers, heroes, or balance.
-
Use monkey money to buy premium items that can enhance your gameplay experience. Monkey money is earned by completing maps or achievements in the game, and can be used to buy skins, insta-monkeys, powers, heroes, or knowledge respecs.
-
-
Use trophies to buy cosmetic items that can customize your appearance in the game. Trophies are earned by completing events or challenges in the game, and can be used to buy decals, portraits, music, sound effects, and more.
-
Use the alchemist tower to buff your other towers with powerful potions. The alchemist tower can apply various effects to nearby towers, such as increased damage, range, pierce, attack speed, or cash per pop. The alchemist tower is especially effective with fast-firing or multi-shot towers, such as the ninja, dartling gunner, or super monkey.
-
Use the perma-spike tower to create a reliable backup defense. The perma-spike tower can produce spikes that last until they are used up, and can deal massive damage to bloons. The perma-spike tower is especially useful for dealing with late-game bloons, such as DDTs, ZOMGs, or BADs.
-
Use the sun avatar tower to unleash devastating beams of plasma. The sun avatar tower is one of the most powerful towers in the game, capable of popping almost any type of bloon with ease. The sun avatar tower can also be upgraded to the sun temple or the true sun god, which are even more powerful and can affect other towers in their range.
-
Use the banana farm tower to generate extra income. The banana farm tower can produce bananas that can be collected for cash, or automatically deposited into your account. The banana farm tower can also be upgraded to increase its production rate, value, or efficiency.
-
-
The fun and creative modes and challenges for everyone
-
Bloons TD 6 is not only a challenging game, but also a fun and creative one. You can play various modes and challenges that can spice up your gameplay and test your skills in different ways. Here are some of the fun and creative modes and challenges for everyone:
-
-
Sandbox mode: You can create your own scenarios and experiments using unlimited money, lives, towers, bloons, and powers. You can also modify the bloon properties, such as speed, health, immunity, or regrowth. Sandbox mode is a great way to test your strategies, learn new things, or just have fun.
-
Races mode: You can compete with other players online to see who can complete a map or a challenge faster. You can also create your own races and share them with others. Races mode is a great way to challenge yourself, improve your skills, or show off your achievements.
-
Daily challenges: You can play a new challenge every day that has different settings and modifiers. You can also vote for the next daily challenge from a list of options. Daily challenges are a great way to discover new possibilities, earn rewards, or join the community.
-
Advanced challenges: You can play a special challenge every week that has more difficult settings and modifiers. You can also submit your own advanced challenges for others to play. Advanced challenges are a great way to push your limits, earn trophies, or showcase your creativity.
-
Achievements: You can complete various achievements that have different objectives and criteria. You can also view your progress and statistics in the game. Achievements are a great way to track your goals, earn monkey money, or unlock new items.
-
-
Conclusion
-
Bloons TD 6 is a fun and challenging tower defense game that has something for everyone. Whether you want to pop some bloons, build some towers, or create some challenges, you can do it all in Bloons TD 6. You can also download and play Bloons TD 6 APK for free from various sources online.
-
-
However, you should also be careful and responsible when using Bloons TD 6 APK, as there are some risks and drawbacks involved. You should always backup your data, scan your APK file, check the reviews and ratings, update your APK version, and respect the rights and policies of Ninja Kiwi. By doing so, you can reduce the dangers and enhance the enjoyment of playing Bloons TD 6 APK.
-
Bloons TD 6 is a game that can provide you with hours of fun and challenge. Whether you play it on your device or PC, with the official version or the APK version, you can experience the thrill and excitement of popping bloons and building towers. So what are you waiting for? Download Bloons TD 6 APK today and join the monkey madness!
-
FAQs
-
Here are some of the frequently asked questions about Bloons TD 6 APK:
-
-
Q: Is Bloons TD 6 APK safe to use?
-
A: Bloons TD 6 APK is safe to use as long as you download it from a reputable and trustworthy source, and scan it with antivirus software before installing it. However, there is no guarantee that Bloons TD 6 APK will work perfectly or safely on your device or PC, so use it at your own risk.
-
Q: Is Bloons TD 6 APK legal to use?
-
A: Bloons TD 6 APK is not legal to use, as it violates the terms of service and privacy policy of Ninja Kiwi, the developer and publisher of Bloons TD 6. By using Bloons TD 6 APK, you are infringing on the intellectual property rights of Ninja Kiwi, and risk getting banned or suspended from the game.
-
Q: Is Bloons TD 6 APK free to use?
-
A: Bloons TD 6 APK is free to use, as you do not have to pay anything to download or play it. However, you may not get the full features and content of the game, as the APK version may not be updated as frequently as the official version. You may also encounter compatibility issues or bugs that affect the performance or stability of the game.
-
Q: How to update Bloons TD 6 APK?
-
A: To update Bloons TD 6 APK, you will need to download and install the latest version of the APK file from a reliable website. You may need to uninstall the previous version of Bloons TD 6 APK before installing the new one. You may also need to backup your data before updating Bloons TD 6 APK, as you may lose your progress or settings.
-
Q: How to uninstall Bloons TD 6 APK?
-
A: To uninstall Bloons TD 6 APK, you will need to delete the APK file from your device or PC storage. You may also need to remove any residual files or folders related to Bloons TD 6 APK. You may also need to restore your device or PC settings to their original state.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Boss Domino APK A Local Indonesian Game with Various Slots.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Boss Domino APK A Local Indonesian Game with Various Slots.md
deleted file mode 100644
index e0c683e9cc3cf8f847ac2f75f37737b232ca0bc9..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Boss Domino APK A Local Indonesian Game with Various Slots.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Boss Domino APK: A Fun and Exciting Casino Game for Android
-
If you are looking for a casino game that offers a variety of local Indonesian games, such as Domino Gaple, Domino Qiu Qiu, and various slots, then you should try Boss Domino APK. Boss Domino APK is a free Android game that lets you play casino games online with your friends or other players. You can also win free coins, bonuses, jackpots, and prizes by playing Boss Domino APK. In this article, we will tell you everything you need to know about Boss Domino APK, including what it is, how to download and install it, how to play it, and why you should play it.
-
What is Boss Domino APK?
-
Boss Domino APK is an Android game that provides local Indonesian casino games, such as Domino Gaple, Domino Qiu Qiu, and various slots. Domino Gaple is a popular card game in Indonesia that is similar to poker, but uses domino tiles instead of cards. Domino Qiu Qiu is another domino game that is also known as 99 or Kiu Kiu, and involves forming the best combination of four tiles out of six tiles in your hand. Slots are games of chance that involve spinning reels and matching symbols to win prizes.
Boss Domino APK is not only a game, but also a social platform where you can chat with other players, send emoticons, and make friends. You can also join clubs, participate in tournaments, and rank on the leaderboard. Boss Domino APK is designed with attractive graphics, sound effects, and animations that make the gameplay more enjoyable and realistic. You can also customize your avatar, profile, and game settings according to your preferences.
-
How to download and install Boss Domino APK?
-
Boss Domino APK is not available on the official Google Play Store, but you can still download and install it from other sources. There are two ways to download and install Boss Domino APK: from APKCombo or from Google Play Store using a VPN.
-
Steps to download Boss Domino APK from APKCombo
-
APKCombo is a website that provides APK and XAPK files for Android apps and games. You can download Boss Domino APK from APKCombo by following these steps:
Choose the latest version of Boss Domino APK and download the XAPK file to your device.
-
Install the XAPK file using APKCombo Installer, which is an app that can extract and install XAPK files.
-
Open Boss Domino APK and enjoy playing.
-
-
Steps to download Boss Domino APK from Google Play Store
-
If you prefer to download Boss Domino APK from Google Play Store, you will need to use a VPN app that can change your location to Indonesia. This is because Boss Domino APK is only available in Indonesia on Google Play Store. You can download Boss Domino APK from Google Play Store using a VPN by following these steps:
-
-
Open the VPN app on your Android device and connect to an Indonesian server.
-
Open the Google Play Store app on your Android device and search for Boss Domino in the search bar.
-
Tap on the Boss Domino icon and then tap on Install.
-
Wait for the installation to complete and then open Boss Domino APK.
-
Disconnect the VPN app and enjoy playing.
-
-
How to play Boss Domino APK?
-
Boss Domino APK offers three types of casino games: Domino Gaple, Domino Qiu Qiu, and slots. Each game has its own rules, tips, and features that you need to know before playing. Here are some basic guidelines for playing each game:
-
Rules and tips for playing Domino Gaple
-
Domino Gaple is a card game that uses domino tiles instead of cards. The objective of the game is to get rid of all your tiles before your opponents. Here are some rules and tips for playing Domino Gaple:
-
-
Each player starts with seven tiles and one tile is placed on the table as the starter.
-
On your turn, you can either place a tile that matches one of the ends of the chain on the table, or draw a tile from the stock if you have no matching tiles.
-
The game ends when one player runs out of tiles or when the stock is empty and no one can make a move.
-
The player with the lowest total value of tiles in their hand wins the game (see the short sketch after this list).
-
You can use strategies such as blocking, bluffing, or counting tiles to gain an advantage over your opponents.
-
-
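To make the rules above a little more concrete, here is a toy Python sketch of the two checks referenced in the list: whether a tile can be placed on the chain, and who wins on the lowest pip total. It assumes a tile's value is simply the sum of its two pip counts, and it is an illustration rather than the game's actual code.

```python
def is_playable(tile, open_ends):
    """A tile (left_pips, right_pips) can be placed if either pip matches an open end of the chain."""
    return tile[0] in open_ends or tile[1] in open_ends

def lowest_hand_wins(hands):
    """When nobody can move and the stock is empty, the lowest total pip count wins."""
    return min(hands, key=lambda player: sum(a + b for a, b in hands[player]))

# Toy round: the chain's open ends show 3 and 6 pips.
print(is_playable((6, 2), (3, 6)))                       # True
print(lowest_hand_wins({"A": [(6, 6)], "B": [(1, 2)]}))  # "B"
```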
Rules and tips for playing Domino Qiu Qiu
-
Domino Qiu Qiu is another domino game that is also known as 99 or Kiu Kiu. The objective of the game is to form the best combination of four tiles out of six tiles in your hand. Here are some rules and tips for playing Domino Qiu Qiu:
-
-
Each player starts with six tiles and four rounds of betting take place.
-
On each round, you can either check, raise, call, fold, or all-in depending on your tiles and the bets of other players.
-
The game ends when one player wins all the chips or when all players except one fold.
-
The ranking of tile combinations is as follows: six gods > four balak > pure big > pure small > three doubles > two doubles > one double > high card.
-
You can use strategies such as bluffing, folding, or all-in to influence the bets of other players.
-
-
Rules and tips for playing slot games
-
Slots are games of chance that involve spinning reels; the objective is to match symbols on the reels to win prizes. Here are some rules and tips for playing slot games:
-
-
-
Each slot game has different themes, paylines, symbols, and features.
-
You can adjust your bet size and number of paylines before spinning the reels.
-
You can also use auto spin, turbo spin, or max bet options to speed up the gameplay.
-
You can win free spins, bonus games, jackpots, or multipliers depending on the slot game.
-
You can use strategies such as choosing the right slot game, managing your bankroll, or betting wisely to increase your chances of winning.
-
-
Why should you play Boss Domino APK?
-
Boss Domino APK is not just a game, but also a fun and exciting way to spend your time. There are many benefits of playing Boss Domino APK, such as:
-
Benefits of playing Boss Domino APK
-
-
You can enjoy various casino games in one app.
-
You can play with your friends or other players online.
-
You can improve your skills and strategies in domino games.
-
You can have fun and relax with colorful graphics and sound effects.
-
You can win real money or prizes by playing slot games.
-
-
Conclusion
-
Boss Domino APK is a free Android game that offers a variety of local Indonesian casino games, such as Domino Gaple, Domino Qiu Qiu, and various slots. You can download and install Boss Domino APK from APKCombo or Google Play Store using a VPN. You can also play Boss Domino APK online with your friends or other players. You can also win free coins, bonuses, jackpots, and prizes by playing Boss Domino APK. Boss Domino APK is a fun and exciting casino game that you should try if you love casino games.
-
FAQs
-
-
What is the difference between APK and XAPK files?
-
APK files are Android application packages that contain the app code and resources. XAPK files are extended APK files that contain additional data such as OBB files or split APKs (a short inspection sketch follows these FAQs).
-
How can I get more free coins and bonuses in Boss Domino APK?
-
You can get more free coins and bonuses in Boss Domino APK by logging in daily, completing tasks, inviting friends, joining clubs, participating in tournaments, or watching ads.
-
Is Boss Domino APK safe to download and play?
-
Boss Domino APK is safe to download and play as long as you download it from a trusted source such as APKCombo or Google Play Store using a VPN. You should also avoid using any modded or hacked versions of Boss Domino APK as they may contain viruses or malware.
-
Can I play Boss Domino APK offline?
-
No, you cannot play Boss Domino APK offline as it requires an internet connection to access the online features and games. You can only play Boss Domino APK online with your friends or other players.
-
Can I play Boss Domino APK on PC?
-
Yes, you can play Boss Domino APK on PC by using an Android emulator such as BlueStacks or NoxPlayer. You can download and install the emulator on your PC and then download and install Boss Domino APK from the emulator's app store.
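As a side note on the XAPK question above: in the packaging commonly used by sites like APKCombo, an XAPK is just a ZIP archive that bundles the base APK with OBB data or split APKs, so you can inspect one from a computer before installing it. A minimal Python sketch, assuming that ZIP-based layout (the file name is a placeholder):

```python
import zipfile

# Placeholder name: point this at the XAPK file you downloaded.
with zipfile.ZipFile("boss_domino.xapk") as archive:
    for name in archive.namelist():
        # Expect a base .apk plus .obb expansion files or split APKs.
        print(name)
```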
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK A Free Game with Realistic Cars and Maps.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK A Free Game with Realistic Cars and Maps.md
deleted file mode 100644
index 77ab00947bc8d5848816f23d66770d9416058ca6..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Parking Multiplayer APK A Free Game with Realistic Cars and Maps.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
Free Download Car Parking Multiplayer APK: A Realistic Driving Simulator for Android
-
Do you love driving and parking games? Do you want to experience a realistic simulation of driving various vehicles in a detailed city? Do you want to challenge yourself and other players in different parking scenarios? If you answered yes to any of these questions, then you should try Car Parking Multiplayer, a popular game for Android devices that lets you enjoy all these features and more. In this article, we will tell you what Car Parking Multiplayer is, how to download and install it on your device, and some tips and tricks for playing it.
Car Parking Multiplayer is a 3D driving simulator game developed by olzhass, a studio that specializes in creating realistic car games. The game has over 100 million downloads on Google Play Store and has received positive reviews from players and critics alike. The game offers more than just parking, as you can also explore an open-world city with various locations, such as airports, gas stations, car washes, garages, and more. You can also customize your car with different paint colors, stickers, rims, spoilers, and other accessories. Moreover, you can play online with other players in multiplayer mode, where you can chat, race, trade cars, or even prank each other.
-
Features of Car Parking Multiplayer
-
Realistic graphics and physics
-
One of the main attractions of Car Parking Multiplayer is its realistic graphics and physics. The game uses high-quality 3D models and textures to create a lifelike environment that you can explore. The game also simulates the physics of different vehicles, such as their weight, speed, acceleration, braking, steering, suspension, and damage. You can also experience different weather conditions, such as rain, snow, fog, or night time. The game also has realistic sound effects that match the engine sounds, horn sounds, tire sounds, and collision sounds of each vehicle.
-
Variety of vehicles and customization options
-
The game features over 100 vehicles that you can drive and park in various scenarios. You can choose from different categories of vehicles, such as cars, trucks, buses, motorcycles, or even helicopters. Each vehicle has its own characteristics and performance that affect how it handles on the road. You can also customize your vehicle with different options, such as paint color, stickers, rims, spoilers, neon lights, license plates, and more. You can also modify your vehicle's engine power, transmission type, brake force, steering sensitivity, and other settings.
-
Open-world exploration and multiplayer mode
-
The game allows you to explore an open-world city with various locations that you can visit. You can drive around freely or follow the GPS directions to find your destination. You can also interact with different objects in the city, such as traffic lights, gas stations, car washes, garages, or even animals. You can also play online with other players in multiplayer mode, where you can join different servers based on your region or language. You can chat with other players, race with them, trade cars with them, or even prank them by blocking their way or honking at them.
-
How to download and install Car Parking Multiplayer APK?
-
Download from official sources
-
The easiest way to download Car Parking Multiplayer APK is to use the official sources, such as Google Play Store or Uptodown.
Enable unknown sources on your device
-
If you download Car Parking Multiplayer APK from a third-party source, such as Uptodown, you will need to enable unknown sources on your device. This is because Android devices normally block the installation of apps that are not from the official sources, such as Google Play Store. To enable unknown sources, you need to go to your device's settings, then security, then toggle on the option that says "allow installation of apps from unknown sources". This will allow you to install Car Parking Multiplayer APK on your device.
-
Install the APK file and launch the game
-
Once you have downloaded Car Parking Multiplayer APK from your preferred source, you need to locate the file on your device's storage. You can use a file manager app to find the file, or you can check your download folder. Once you find the file, tap on it to start the installation process. Follow the instructions on the screen to complete the installation. After the installation is done, you can launch the game by tapping on its icon on your home screen or app drawer. Enjoy playing Car Parking Multiplayer on your Android device!
-
Tips and tricks for playing Car Parking Multiplayer
-
Learn the basics of parking and driving
-
Before you start playing Car Parking Multiplayer, you should learn the basics of parking and driving in the game. The game has a tutorial mode that teaches you how to use the controls, such as the steering wheel, pedals, gearbox, handbrake, and camera. You should also practice parking in different situations, such as parallel parking, reverse parking, diagonal parking, and more. You can also adjust the difficulty level of the parking challenges, from easy to hard. The game also has a driving school mode that teaches you how to follow traffic rules, such as speed limits, stop signs, traffic lights, and more.
-
Use the map and GPS to find your destination
-
The game has a large open-world city that you can explore freely. However, if you want to find a specific location or complete a mission, you will need to use the map and GPS features. The map shows you the layout of the city and the locations of different places, such as gas stations, car washes, garages, airports, and more. You can also see your current position and direction on the map. The GPS feature helps you navigate to your destination by showing you the best route and giving you voice directions. You can also set waypoints on the map to mark places that you want to visit.
-
Earn money and upgrade your car
-
The game allows you to earn money by completing missions, parking challenges, races, or other activities. You can use the money to buy new cars or upgrade your existing ones. You can also sell your cars or trade them with other players online. You can upgrade your car's performance by modifying its engine power, transmission type, brake force, steering sensitivity, and other settings. You can also customize your car's appearance by changing its paint color, stickers, rims, spoilers, neon lights, license plates, and more.
-
Join online servers and interact with other players
-
The game has a multiplayer mode that lets you play online with other players from around the world. You can join different servers based on your region or language. You can chat with other players, race with them, trade cars with them, or even prank them by blocking their way or honking at them. You can also create your own server and invite your friends to join. You can also join clans or groups of players who share similar interests or goals in the game.
-
Conclusion
-
Car Parking Multiplayer is a realistic driving simulator game for Android devices that offers more than just parking. You can explore an open-world city with various locations, customize your car with different options, and play online with other players in multiplayer mode. The game has realistic graphics and physics, a variety of vehicles and customization options, and an easy-to-use interface. If you want to download Car Parking Multiplayer APK for free, you can use the official Google Play Store, or a trusted third-party source such as Uptodown. If you go the third-party route, you will need to enable unknown sources on your device before installing the APK file. You can also follow our tips and tricks for playing Car Parking Multiplayer to improve your skills and enjoy the game more.
-
FAQs
-
-
Q: Is Car Parking Multiplayer safe to download?
-
A: Yes, Car Parking Multiplayer is safe to download if you use the official Google Play Store or a trusted third-party source such as Uptodown. However, you should always scan the APK file for viruses or malware before installing it on your device.
-
Q: How much storage space does Car Parking Multiplayer require?
-
A: Car Parking Multiplayer requires about 500 MB of storage space on your device. However, this may vary depending on the updates and additional content that you download in the game.
-
Q: Can I play Car Parking Multiplayer offline?
-
A: Yes, you can play Car Parking Multiplayer offline in single-player mode. However, you will need an internet connection to play online in multiplayer mode or to access some features, such as the car market or the clan system.
-
Q: What are the minimum requirements to play Car Parking Multiplayer on Android?
-
A: The minimum requirements to play Car Parking Multiplayer on Android are as follows:
-
Android version: 4.4 or higher
-
RAM: 1 GB or higher
-
CPU: 1.5 GHz or higher
-
GPU: Mali-400 MP or higher
-
-
-
Q: How can I contact the developers of Car Parking Multiplayer?
-
A: You can contact the developers of Car Parking Multiplayer by sending them an email at olzhass@yandex.com or by visiting their website at https://olzhass.com/. You can also follow them on Facebook, Instagram, YouTube, or Discord for the latest news and updates about the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/College Romance Season 1 A Hilarious Story of Three BFFs and Their Love Lives.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/College Romance Season 1 A Hilarious Story of Three BFFs and Their Love Lives.md
deleted file mode 100644
index 213789b86bd9073cde478682e9362a16e585eb4d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/College Romance Season 1 A Hilarious Story of Three BFFs and Their Love Lives.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
College Romance Season 1 Download Filmyzilla: Is It Legal and Safe?
-
College Romance is a popular web series that depicts the lives and loves of three college friends. The first season of the show was released in 2018 and received rave reviews from the audience. The show is full of humor, drama, romance, and nostalgia. It is available to watch online on Sony Liv and TVF Play.
Filmyzilla is a notorious website that leaks movies and shows online for free. The website has a huge collection of pirated content from various genres and languages. One can find the latest movies, Bollywood movies, dubbed movies, Telugu movies, Tamil movies, Hollywood movies, web series, and more on this website.
-
But is it legal and safe to download College Romance season 1 from Filmyzilla? In this article, we will answer this question and provide you with some alternatives and solutions to watch College Romance season 1 online legally and safely.
-
College Romance Season 1: A Fun and Relatable Web Series
-
The Plot and Characters of College Romance Season 1
-
College Romance season 1 follows the adventures of three best friends - Naira, Trippy, and Karan - who are looking for love, laughs, and some lifelong memories while attending college together. Naira is a smart and confident girl who helps her friends with their relationship problems. Trippy is a fun-loving guy who has a crush on a senior girl named Raveena. Karan is a confused guy who is dating a girl named Deepika but is not sure about his feelings for her.
-
The show explores the various aspects of college life such as parties, exams, friendships, breakups, hookups, and more. The show also features some hilarious situations and dialogues that will make you laugh out loud. The show has a total of five episodes, each lasting for about 20 minutes.
-
-
The Popularity and Reception of College Romance Season 1
-
College Romance season 1 was a huge hit among the viewers, especially the youth. The show received positive feedback from the critics and the audience alike. The show was praised for its realistic portrayal of college life, its relatable characters, its witty humor, its catchy music, and its engaging storyline. The show also won several awards and nominations, such as the Indian Television Academy Awards, the Indian Web Series Awards, the Streamy Awards India, and more.
-
The show has a rating of 8.7 out of 10 on IMDb and a rating of 4.5 out of 5 on JustWatch. The show has also garnered a huge fan following on social media platforms such as YouTube, Instagram, Facebook, Twitter, etc. The show has over 100 million views on YouTube and over one million followers on Instagram.
-
Filmyzilla: A Notorious Movie Piracy Website
-
How Filmyzilla Leaks Movies and Shows Online
-
Filmyzilla is one of the many torrent websites that illegally provide leaked and pirated movies and shows for free. The website has a large library of movies and shows of different genres and categories. One can find the latest movies, Bollywood movies, dubbed movies, Telugu movies, Tamil movies, Hollywood movies, web series, and more on this website. The website also offers various formats and resolutions to choose from, such as 360p, 480p, 720p, 1080p, HD, etc.
-
Filmyzilla leaks movies and shows online by uploading them on its servers or by providing links to other sources. The website often uploads the movies and shows before or soon after their official release. The website also changes its domain name frequently to avoid detection and legal action. Some of the domain names used by Filmyzilla are filmyzilla.com, filmyzilla.in, filmyzilla.net, filmyzilla.vip, filmyzilla.pro, filmyzilla.me, etc.
-
The Risks and Consequences of Using Filmyzilla
-
Using Filmyzilla or any other torrent website is not only illegal but also risky and harmful. Here are some of the risks and consequences of using Filmyzilla:
-
-
Legal trouble: Downloading or streaming movies and shows from Filmyzilla is a violation of the Indian Copyright Act of 1957 and the Information Technology Act of 2000. Anyone who is caught using Filmyzilla can face legal action such as fines, imprisonment, or both.
-
Malware infection: Filmyzilla or any other torrent website may contain viruses, malware, spyware, ransomware, or other malicious software that can infect your device and compromise your data and security. These malware can also damage your device or make it vulnerable to hackers.
-
Poor quality: Filmyzilla or any other torrent website may not provide you with the best quality of movies and shows. The movies and shows may be low in resolution, audio quality, subtitles, or synchronization. They may also have watermarks, advertisements, or other interruptions that can ruin your viewing experience.
-
Unethical behavior: Using Filmyzilla or any other torrent website is an unethical and immoral act that harms the film industry and the artists. By downloading or streaming movies and shows from Filmyzilla, you are depriving the filmmakers and actors of their rightful earnings and recognition. You are also supporting the illegal and criminal activities of the piracy websites.
-
-
College Romance Season 1 Download Filmyzilla: Why You Should Avoid It
-
The Legal Issues of Downloading College Romance Season 1 from Filmyzilla
-
Downloading College Romance season 1 from Filmyzilla is a clear case of piracy and infringement of intellectual property rights. College Romance season 1 is the original work of The Viral Fever (TVF) and Sony Liv, who own the exclusive rights to distribute and exhibit the show online. By downloading College Romance season 1 from Filmyzilla, you are violating their rights and breaking the law.
-
The makers of College Romance season 1 have taken several measures to prevent the piracy of their show. They have filed complaints against Filmyzilla and other piracy websites that have leaked their show online. They have also urged the viewers to watch their show only on the official platforms such as Sony Liv and TVF Play. They have also warned the viewers about the legal consequences of using piracy websites such as Filmyzilla.
-
The Quality and Security Issues of Downloading College Romance Season 1 from Filmyzilla
-
Downloading College Romance season 1 from Filmyzilla is not only illegal but also unsafe and unsatisfactory. As mentioned earlier, Filmyzilla may expose you to various malware threats that can harm your device and data. Moreover, Filmyzilla may not provide you with the best quality of College Romance season 1. The show may be low in resolution, audio quality, subtitles, or synchronization. It may also have watermarks, advertisements, or other interruptions that can spoil your enjoyment.
-
Downloading College Romance season 1 from Filmyzilla is also a waste of time and resources. You may have to spend a lot of time searching for a working link or a suitable format on Filmyzilla. You may also have to deal with slow download speed, broken links, corrupted files, or incomplete downloads on Filmyzilla. You may also end up consuming a lot of data bandwidth or storage space on your device by downloading College Romance season 1 from Filmyzilla.
-
College Romance Season 1 Download Filmyzilla: Alternatives and Solutions
-
The Legal and Safe Ways to Watch College Romance Season 1 Online
-
The best way to watch College Romance season 1 online is to use the legal and safe platforms that have the official rights to stream the show online. These platforms are Sony Liv and TVF Play. Sony Liv is a premium video-on-demand service that offers a variety of movies, shows, sports, and live TV channels. Sony Liv has the exclusive streaming rights for College Romance season 1 in India. You can watch College Romance season 1 on Sony Liv by subscribing to one of its plans. The plans start from Rs. 299 per month or Rs. 999 per year. You can also get a free trial for 7 days before subscribing to Sony Liv. You can access Sony Liv on your web browser, mobile app, smart TV, or gaming console.
- TVF Play is a free online video platform that offers original and creative content from The Viral Fever (TVF). TVF Play has the co-production rights for College Romance season 1 along with Sony Liv. You can watch College Romance season 1 on TVF Play for free without any subscription or registration. You can access TVF Play on your web browser, mobile app, or YouTube channel.
-
The Benefits and Advantages of Watching College Romance Season 1 Online
-
Watching College Romance season 1 online on Sony Liv or TVF Play has many benefits and advantages over downloading it from Filmyzilla or any other piracy website. Here are some of them:
-
-
Legal and ethical: Watching College Romance season 1 online on Sony Liv or TVF Play is a legal and ethical way to enjoy the show. You are respecting the rights and efforts of the makers and actors of the show. You are also supporting the growth and development of the Indian web series industry.
-
Safe and secure: Watching College Romance season 1 online on Sony Liv or TVF Play is a safe and secure way to enjoy the show. You are not exposing yourself to any malware threats or legal troubles. You are also protecting your device and data from any damage or loss.
-
High quality: Watching College Romance season 1 online on Sony Liv or TVF Play is a high-quality way to enjoy the show. You are getting the best resolution, audio quality, subtitles, and synchronization of the show. You are also getting a smooth and uninterrupted streaming experience without any watermarks, advertisements, or other distractions.
-
Convenient and flexible: Watching College Romance season 1 online on Sony Liv or TVF Play is a convenient and flexible way to enjoy the show. You can watch the show anytime, anywhere, and on any device of your choice. You can also pause, resume, rewind, or fast-forward the show as per your preference. You can also watch the show offline by downloading it on your device.
-
-
Conclusion
-
College Romance season 1 is a fun and relatable web series that you should not miss. The show has a great plot, characters, humor, music, and message. The show is available to watch online on Sony Liv and TVF Play legally and safely.
-
Downloading College Romance season 1 from Filmyzilla or any other piracy website is illegal, risky, harmful, and unethical. You should avoid using Filmyzilla or any other torrent website for downloading or streaming movies and shows online.
-
Instead, you should use the legal and safe platforms such as Sony Liv and TVF Play to watch College Romance season 1 online. You will get many benefits and advantages by doing so.
-
We hope this article has helped you understand why you should not download College Romance season 1 from Filmyzilla or any other piracy website. We also hope you have enjoyed reading this article as much as we have enjoyed writing it for you.
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about College Romance season 1 download Filmyzilla:
-
-
Is College Romance season 1 available on Netflix?
-
No, College Romance season 1 is not available on Netflix. The show is only available on Sony Liv and TVF Play.
-
Is College Romance season 1 based on a true story?
-
No, College Romance season 1 is not based on a true story. The show is a fictional comedy-drama that depicts the lives and loves of three college friends.
-
Is College Romance season 2 coming soon?
-
Yes, College Romance season 2 is coming soon. The makers of the show have announced that they are working on the second season of the show and it will be released in 2023.
-
Is Filmyzilla banned in India?
-
Yes, Filmyzilla is banned in India by the government along with many other piracy websites. However, Filmyzilla keeps changing its domain name and proxy servers to evade the ban and continue to operate.
-
Is there any penalty for using Filmyzilla?
-
Yes, there is a penalty for using Filmyzilla or any other piracy website. According to the Indian law, anyone who is found using Filmyzilla or any other piracy website can face a fine of up to Rs. 3 lakhs or imprisonment of up to 3 years or both.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/ARK Survival Evolved - Primal Fear Mod A Guide to the Best Dinos and Items.md b/spaces/1phancelerku/anime-remove-background/ARK Survival Evolved - Primal Fear Mod A Guide to the Best Dinos and Items.md
deleted file mode 100644
index e7bd1d87948c1430925938a50978400d2a9aa33f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/ARK Survival Evolved - Primal Fear Mod A Guide to the Best Dinos and Items.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
How to Download and Install Primal Fear Mod for Ark: Survival Evolved
-
If you are looking for a new and exciting way to play Ark: Survival Evolved, you might want to try out the Primal Fear mod. This mod adds over 200 different creatures, as well as new items, weapons, armor, and bosses. In this article, we will show you how to download and install Primal Fear mod for Ark, as well as some of its features and creatures.
-
What is Primal Fear Mod?
-
Primal Fear is a massive dino mod for Ark, developed by Steam user Pikkon38. The mod participated in the Ark Modding Contest of 2018 and placed 4th. In June 2020, Primal Fear joined the Sponsored Mod program.
Primal Fear mod adds many features to enhance your gameplay, such as:
-
-
Tame, breed, and ride all new creatures to expand on your gameplay
-
Adds new taming mechanics for custom creatures
-
Custom tiered kibble system
-
New and more powerful weapons and armor
-
Adds crafting stations designated for the mod
-
Expansions for Scorched Earth, Aberration, Extinction, and Genesis maps
-
-
Creatures of Primal Fear Mod
-
Primal Fear mod currently has over 200 different creatures in it, and more creatures are added in the development process. The creatures are divided into different tiers, such as:
-
-
Toxic Creatures: The first tier of creatures that have a 7% chance of spawning. They drop Toxic Blood and Toxic Hide when harvested.
-
Alpha Creatures: The second tier of creatures that have a 5% chance of spawning. They require Alpha Kibble to tame, which requires Toxic Egg and Toxic Blood. They drop Alpha Blood and Alpha Hide when harvested.
-
Elemental Creatures: The third tier of creatures that have a 3% chance of spawning. They require Elemental Kibble to tame, which requires Apex Egg, Apex Blood, and various Feathers. They drop Elemental Hide and Feathers when harvested.
-
Apex Creatures: The fourth tier of creatures that have a 3.5% chance of spawning. They require Apex Kibble to tame, which requires Alpha Blood and Elemental Egg. They drop Apex Blood and Apex Hide when harvested.
-
Fabled Creatures: The fifth tier of creatures that have a 2.75% chance of spawning. They require Fabled Kibble to tame, which requires Apex Blood and Apex Egg. They drop Fabled Blood and Fabled Hide when harvested.
-
Celestial Creatures: The sixth tier of creatures that have a 2.25% chance of spawning. They require Celestial Kibble to tame, which requires Fabled Blood and Fabled Egg. They drop Celestial Soul when killed.
-
Demonic Creatures: The seventh tier of creatures that have a 2% chance of spawning. They require Demonic Kibble to tame, which requires Celestial Soul and Demonic Soul. They drop Demonic Soul when killed.
-
Primal Creatures: The eighth tier of creatures, which serve as bosses in Primal Fear. They have a fully black body, a red outline, and a thick red aura; the old Dragon OST plays when you are near them, and they are noticeably larger than their vanilla counterparts.
-
And many more...
-
-
How to Find and Subscribe to Primal Fear Mod on Steam Workshop
-
To install mods on Ark, you need to subscribe to them from the Steam Workshop. Here are the steps to find and subscribe to Primal Fear mod on Steam Workshop:
-
-
Open Steam and go to the Library tab.
-
Right-click on Ark: Survival Evolved and select Properties.
-
Go to the Local Files tab and click on Browse Local Files.
-
Open the ShooterGame folder and then the Content folder.
-
Copy the Mods folder and paste it somewhere safe as a backup.
-
Go back to Steam and click on the Community tab.
-
Click on Workshop and search for Ark: Survival Evolved.
-
In the search bar, type Primal Fear and press Enter.
-
You will see a list of mods related to Primal Fear. The main mod is called Primal Fear and has over 1.5 million subscribers. You can also subscribe to other mods that are compatible with Primal Fear, such as Primal Fear Boss Expansion, Primal Fear Aberration Expansion, Primal Fear Genesis Expansion, etc.
-
To subscribe to a mod, click on it and then click on the green Subscribe button. The mod will start downloading automatically.
-
-
How to Install and Activate Primal Fear Mod on Ark
-
After subscribing to the mods, you need to install and activate them on Ark. Here are the steps to do that:
-
-
How to Copy and Extract the Mods
-
-
Go to the Mods folder that you copied earlier and open it.
-
You will see a bunch of folders with numbers as their names. These are the mods that you subscribed to. Each folder has a .mod file inside it.
-
Copy all the folders and paste them into the Mods folder inside the Content folder of Ark. This will overwrite the existing folders.
-
Open each folder and extract the .mod file using a program like WinRAR or 7-Zip. You will get a folder with the same name as the .mod file.
-
Delete the .mod file and keep the extracted folder.
-
-
How to Select and Load the Mods
-
-
Launch Ark: Survival Evolved from Steam.
-
In the main menu, click on Host/Local.
-
Click on Play Single Player or Host Non-Dedicated Session, depending on your preference.
-
In the Game Settings tab, scroll down to Active Mods and click on Select Mod.
-
You will see a list of mods that you installed. Select Primal Fear as the first mod, followed by any other mods that you want to use. You can also change the order of the mods by dragging them up or down.
-
Click on Save Changes and then click on Play With Mods.
-
-
How to Update and Backup the Mods
-
To update the mods, you need to unsubscribe and resubscribe to them from Steam Workshop. This will download the latest version of the mods. You can also check for updates manually by going to Steam Workshop and clicking on Updates in the left sidebar. To back up the mods, you need to copy the Mods folder from Ark's Content folder and paste it somewhere safe. You can also use a program like Ark Server Manager to manage your mods more easily.
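If you would rather script the backup than copy the folder by hand, a minimal Python sketch along these lines does the job. The paths below are assumptions based on a default Steam install, so adjust them to your own setup.

```python
import shutil
from pathlib import Path

# Assumed default Steam location -- change this to wherever Ark is installed on your PC.
ark_mods = Path(r"C:\Program Files (x86)\Steam\steamapps\common\ARK\ShooterGame\Content\Mods")
# Hypothetical backup destination.
backup_dir = Path(r"D:\Backups\ARK_Mods")

# Copy the whole Mods folder, overwriting any older backup (dirs_exist_ok requires Python 3.8+).
shutil.copytree(ark_mods, backup_dir, dirs_exist_ok=True)
print(f"Backed up {sum(1 for _ in backup_dir.rglob('*'))} items to {backup_dir}")
```

Running the script again after each mod update keeps the backup current; restoring is just copying the folder back the other way.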
-
Conclusion
-
Primal Fear is a great mod for Ark: Survival Evolved that adds a lot of new content and challenges to the game. It is easy to download and install from Steam Workshop, and you can customize your gameplay with different expansions and settings. If you are looking for a fresh and fun way to play Ark, you should definitely give Primal Fear a try!
-
FAQs
-
-
Q: What are the system requirements for Primal Fear mod?
-
A: Primal Fear mod is quite demanding on your system, especially if you use multiple expansions and high settings. You should have at least 8 GB of RAM, a quad-core processor, and a dedicated graphics card with 4 GB of VRAM or more.
-
Q: How do I uninstall Primal Fear mod?
-
A: To uninstall Primal Fear mod, you need to unsubscribe from it on Steam Workshop, delete its folders from Ark's Mods folder, and remove it from your Active Mods list in Ark's Game Settings.
-
Q: How do I spawn Primal Fear creatures using commands?
-
A: To spawn Primal Fear creatures using commands, you need to know their spawn codes. You can find them on Primal Fear's Wiki page, or use the Beacon app to generate them. Open the console by pressing Tab, then type `cheat spawndino` followed by the spawn code. For example, to spawn an Apex Reaper King:
`cheat spawndino "Blueprint'/Game/Mods/Primal_Fear/Dinos/Apex/Apex_Reaper/King/PFApexXenomorph_Character_BP_Male_Tamed_Child.PFApexXenomorph_Character_BP_Male_Tamed_Child'" 1 1 1 30`
You can also change the numbers at the end to adjust the level, location, and quantity of the spawned creature.
-
Q: How do I tame Primal Fear creatures?
-
A: To tame Primal Fear creatures, you need to use different types of kibble depending on the tier of the creature. You can craft kibble using the Primal Smithy or the Primal Cooking Pot. You also need to use tranquilizers that are strong enough to knock out the creature. You can use the Primal Pike, the Primal Rifle, or the Primal Compound Bow with different types of arrows and darts. You can also use special items like Potent Narcotics, Tame Helpers, and Wake Up Stimulants to help with the taming process.
-
Q: How do I fight Primal Fear bosses?
-
A: To fight Primal Fear bosses, you need to summon them using special items called Summoners. You can craft Summoners using the Primal Smithy or the Primal Cooking Pot. You also need to have a strong team of creatures and weapons to face the bosses. Some bosses have special abilities and weaknesses that you need to be aware of. For example, Nova the Destroyer has three AOE attacks and is immune to fire damage, but is vulnerable to electric damage.
-
Q: How do I get Primal Fear expansions?
-
A: To get Primal Fear expansions, you need to subscribe to them on Steam Workshop, just like the main mod. You can find them by searching for Primal Fear on Workshop and looking for the ones that have Expansion in their name. You also need to install and activate them on Ark, just like the main mod. You can use them on any map that supports them.
-
Q: How do I get support for Primal Fear mod?
-
A: To get support for Primal Fear mod, you can join their Discord server, where you can ask questions, report bugs, give feedback, and chat with other players and developers. You can also check their Wiki page for more information and guides.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Business WhatsApp Sender 7.0.1.1 for Free and Boost Your Business.md b/spaces/1phancelerku/anime-remove-background/Download Business WhatsApp Sender 7.0.1.1 for Free and Boost Your Business.md
deleted file mode 100644
index b6359911ead3ee0721221546a23e3c1ca61e56f2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Business WhatsApp Sender 7.0.1.1 for Free and Boost Your Business.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-
Business WhatsApp Sender 7.0.1.1 Free Download: A Powerful Tool for WhatsApp Marketing
-
WhatsApp is one of the most popular and widely used messaging apps in the world, with over 2 billion users as of 2020. It is not only a great tool for personal communication, but also a powerful platform for business marketing. With WhatsApp Business, you can create a professional profile for your business, communicate more efficiently with your customers, and grow your business.
However, if you want to take your WhatsApp marketing to the next level, you need a tool that can help you send bulk messages, automate responses, filter contacts, and more. That's where Business WhatsApp Sender 7.0.1.1 comes in handy.
-
What is Business WhatsApp Sender 7.0.1.1?
-
Business WhatsApp Sender 7.0.1.1 is a software that allows you to send unlimited messages to your potential and existing customers using WhatsApp Business. It is a 100% safe and reliable tool that secures your account and reduces the chances of getting blocked.
-
Features of Business WhatsApp Sender 7.0.1.1
-
Business WhatsApp Sender 7.0.1.1 offers various features that make it a must-have tool for any WhatsApp marketer:
-
-
You can send text, images, audio, document files, etc.
-
You can import or extract contacts from various sources such as Google Maps, groups, files, etc.
-
You can create and customize your messages with variables, emojis, links, etc.
-
You can set up dynamic chatbots and auto-reply options for different scenarios.
-
You can control the speed, delay, and sleep time of your campaigns.
-
You can verify and filter mobile numbers before sending messages.
-
You can support multi-language functionality.
-
You can view reports and statistics of your campaigns.
-
-
Benefits of Business WhatsApp Sender 7.0.1.1
-
By using Business WhatsApp Sender 7.0.1.1, you can enjoy many benefits for your business such as:
-
-
You can reach a large number of customers in a short time.
-
You can increase your brand awareness and visibility.
-
You can improve your customer engagement and loyalty.
-
You can generate more leads and sales.
-
You can save time and money on marketing.
-
-
How to Download and Activate Business WhatsApp Sender 7.0.1.1?
-
If you are interested in using Business WhatsApp Sender 7.0.1.1 for your business marketing, here are the steps you need to follow:
-
-
Step 1: Download the software from the official website
-
The first step is to download the software from the official website. You can choose between the trial version and the full version. The trial version allows you to send up to 10 messages per day for free, while the full version costs $49 and allows you to send unlimited messages.
-
Step 2: Install the software on your PC
-
The next step is to install the software on your PC. You need to have Windows 7 or higher, .NET Framework 4.5 or higher, and WhatsApp Business installed on your PC. You also need to have a valid WhatsApp Business number and a QR code scanner. To install the software, follow these steps:
-
-
Run the setup file and follow the instructions.
-
Accept the terms and conditions and click Next.
-
Choose the destination folder and click Next.
-
Wait for the installation to complete and click Finish.
-
-
Step 3: Generate an order number and send it to the developer
-
The third step is to generate an order number and send it to the developer. This is required to activate the full version of the software. To generate an order number, follow these steps:
-
-
Open the software and click on Register.
-
Enter your name, email, phone number, and country.
-
Click on Generate Order Number and copy it.
-
Send the order number to the developer via email or WhatsApp.
-
-
Step 4: Receive the activation code and enter it in the software
-
The final step is to receive the activation code and enter it in the software. This will unlock all the features of the software and allow you to use it without any limitations. To activate the software, follow these steps:
-
-
Wait for the developer to send you the activation code via email or WhatsApp.
-
Open the software and click on Register.
-
Enter your name, email, phone number, country, and activation code.
-
Click on Activate and enjoy the software.
-
-
How to Use Business WhatsApp Sender 7.0.1.1 for WhatsApp Marketing?
-
Now that you have downloaded and activated Business WhatsApp Sender 7.0.1.1, you are ready to use it for your WhatsApp marketing campaigns. Here are some tips on how to use it effectively:
-
Import or extract contacts from various sources
-
The first thing you need to do is to import or extract contacts from various sources such as Google Maps, groups, files, etc. You can do this by clicking on Contacts > Import Contacts or Contacts > Extract Contacts. You can also filter contacts by country code, gender, name, etc.
-
Create and customize your messages with text, images, audio, documents, etc.
-
The next thing you need to do is to create and customize your messages with text, images, audio, documents, etc. You can do this by clicking on Messages > Create Message or Messages > Edit Message. You can also use variables, emojis, links, etc. to make your messages more personalized and engaging.
-
Set up dynamic chatbots and auto-reply options
-
The third thing you need to do is to set up dynamic chatbots and auto-reply options for different scenarios. You can do this by clicking on Settings > Chatbot Settings or Settings > Auto Reply Settings. You can also use keywords, conditions, actions, etc. to make your chatbots and auto-replies more intelligent and responsive.
-
Control the speed, delay, and sleep time of your campaigns
-
The last thing you need to do is to control the speed, delay, and sleep time of your campaigns. You can do this by clicking on Settings > General Settings or Settings > Campaign Settings. You can also use timers, schedulers, etc. to make your campaigns more efficient and effective.
-
Conclusion
-
Business WhatsApp Sender 7.0.1.1 is a powerful tool for WhatsApp marketing that allows you to send unlimited messages to your potential and existing customers using WhatsApp Business. It offers various features and benefits that make it a must-have tool for any WhatsApp marketer. To download and activate Business WhatsApp Sender 7.0.1.1, you need to follow four simple steps: download the software from the official website, install it on your PC, generate an order number and send it to the developer, receive the activation code and enter it in the software. To use Business WhatsApp Sender 7.0.1.1 for WhatsApp marketing, you need to follow some tips on how to import or extract contacts from various sources, create and customize your messages with text, images, audio, documents, etc., set up dynamic chatbots and auto-reply options, and control the speed, delay, and sleep time of your campaigns. By following these tips, you can make the most out of Business WhatsApp Sender 7.0.1.1 and achieve your marketing goals.
-
FAQs
-
Here are some frequently asked questions about Business WhatsApp Sender 7.0.1.1:
-
-
Is Business WhatsApp Sender 7.0.1.1 compatible with WhatsApp Web?
-
No, Business WhatsApp Sender 7.0.1.1 is not compatible with WhatsApp Web. You need to have WhatsApp Business installed on your PC to use the software.
-
How many messages can I send per day using Business WhatsApp Sender 7.0.1.1?
-
There is no limit on how many messages you can send per day using Business WhatsApp Sender 7.0.1.1. However, you should avoid sending too many messages to avoid getting blocked by WhatsApp.
-
Can I use Business WhatsApp Sender 7.0.1.1 on multiple PCs?
-
No, you can only use Business WhatsApp Sender 7.0.1.1 on one PC per license. If you want to use it on multiple PCs, you need to buy multiple licenses.
-
Does Business WhatsApp Sender 7.0.1.1 support multi-language functionality?
-
Yes, Business WhatsApp Sender 7.0.1.1 supports multi-language functionality. You can send messages in any language you want.
-
What is the refund policy of Business WhatsApp Sender 7.0.1.1?
-
Business WhatsApp Sender 7.0.1.1 offers a 30-day money-back guarantee if you are not satisfied with the software.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Epic Battle of Godzilla and Kong in PUBG Mobile 1.4 (APK).md b/spaces/1phancelerku/anime-remove-background/Experience the Epic Battle of Godzilla and Kong in PUBG Mobile 1.4 (APK).md
deleted file mode 100644
index 9cb7e0a4151f5abc19939a29d6208c34ed440251..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Epic Battle of Godzilla and Kong in PUBG Mobile 1.4 (APK).md
+++ /dev/null
@@ -1,103 +0,0 @@
-
-
PUBG Mobile Godzilla Modu Indir APK: How to Download and Play the New Update
-
PUBG Mobile is one of the most popular and addictive battle royale games on mobile devices. It offers a thrilling and immersive gameplay experience with realistic graphics, diverse maps, and various modes. But what if you could spice up your game with some giant monsters from the Godzilla vs Kong movie? Well, that's exactly what the latest update of PUBG Mobile offers. In this article, we will tell you everything you need to know about the PUBG Mobile Godzilla Modu: how to download it and what new features and changes it brings.
A brief introduction to the new mode and its features
-
PUBG Mobile Godzilla Modu is a new mode that is part of the 1.4 update of the game. It is also known as Titan Strikes mode, as it features three titans from the Godzilla vs Kong movie: Godzilla, Kong, and Mechagodzilla. These titans will appear on different maps (Erangel, Sanhok, and Livik) and will roam around, causing havoc and destruction. Players will have to avoid or fight them, while also dealing with other enemies.
-
The collaboration between PUBG Mobile and Godzilla vs Kong movie
-
The PUBG Mobile Godzilla Modu is a result of a unique collaboration between Tencent, the developer of PUBG Mobile, and Legendary Pictures, the producer of the Godzilla vs Kong movie. The movie is an action-packed blockbuster that pits two iconic monsters against each other in an epic battle for supremacy. The collaboration aims to bring some of the excitement and spectacle of the movie to the game, as well as to celebrate the third anniversary of PUBG Mobile.
-
How to Download PUBG Mobile Godzilla Modu APK?
-
The difference between regular and compact versions of the APK
-
To download and play the PUBG Mobile Godzilla Modu, you will need to update your game to the latest version (1.4). You can do this by using the Google Play Store or by downloading the APK file from the official website of PUBG Mobile. There are two variants of the APK file available: regular version and compact version. The regular version has a size of 990 MB and includes all the new content. The compact version has a size of 661 MB and requires additional resource packs to be downloaded in-game.
-
The step-by-step guide to download and install the APK file
-
Here are the steps that you need to follow to download and install the PUBG Mobile Godzilla Modu APK file:
-
-
Download either the regular version or the compact version of the APK file from the official PUBG Mobile website.
-
Once the file is downloaded, locate it on your device and tap on it to install it. Make sure that you have enabled the "Install from Unknown Source" option in your settings.
-
After the installation is complete, open PUBG Mobile on your device. If you have downloaded the compact version, you will have to download some resource packs in-game.
-
Login to your account and enjoy playing PUBG Mobile Godzilla Modu.
-
-
The disclaimer for users from India and other banned countries
-
Disclaimer
Disclaimer: PUBG Mobile is banned in some countries, such as India, due to various reasons. Therefore, we do not recommend or endorse downloading or playing the game in those regions. Please follow the laws and regulations of your country and respect the rights of others.
-
-
What are the New Features and Changes in PUBG Mobile Godzilla Modu?
-
The appearance of Titans (Godzilla, Kong, and Mechagodzilla) on the maps
-
One of the most exciting features of PUBG Mobile Godzilla Modu is the appearance of the three titans from the Godzilla vs Kong movie on different maps. Each titan has its own behavior, abilities, and impact on the environment. Here is a brief overview of each titan and its map:
-
-
Godzilla: The king of the monsters will appear on Erangel, the classic map of PUBG Mobile. He will spawn randomly on the map and will move towards specific locations, such as Mylta Power, School, Military Base, and others. He will also roar occasionally, which will alert nearby players of his presence. Godzilla can attack players with his tail, claws, and atomic breath. He can also destroy buildings and vehicles with his sheer size and strength.
-
Kong: The king of Skull Island will appear on Sanhok, the tropical map of PUBG Mobile. He will spawn at the ruins in the center of the map and will stay there for a while. He will then move to one of the four Apex Camps (more on that later) and will defend it from other players. Kong can attack players with his fists, feet, and roar. He can also throw rocks and trees at players and vehicles.
-
Mechagodzilla: The mechanical titan will appear on Livik, the smallest map of PUBG Mobile. He will spawn at a random location on the map and will patrol around it. He will also shoot lasers and missiles at players and vehicles. Mechagodzilla can also create an electromagnetic pulse that will disable all electronic devices in a certain radius.
-
-
The new Titan Crystals that grant special abilities to players
-
Another new feature of PUBG Mobile Godzilla Modu is the Titan Crystals. These are special items that can be found on the maps where the titans appear. They are dropped by the titans or by special helicopters that fly over the maps. There are two types of Titan Crystals: Erangel Titan Crystal and Sanhok Titan Crystal.
-
The Erangel Titan Crystal is a blue crystal that can be used to create a protective shield around the player for a short time. The shield can block bullets and other projectiles, but not melee attacks or explosions. The shield also has a cooldown time after each use.
-
The Sanhok Titan Crystal is a yellow crystal that can be used to enhance the player's abilities for a short time. The player can jump higher, run faster, and deal more damage with melee attacks. The player can also see footprints of nearby enemies on the mini-map. The effect also has a cooldown time after each use.
-
The new Apex Camps that offer high-quality loot and supplies
-
The Apex Camps are new locations that can be found on Sanhok, where Kong appears. There are four Apex Camps on the map: Alpha, Beta, Gamma, and Delta. Each camp has a different theme and layout, such as a temple, a cave, a village, or a factory. Each camp also has high-quality loot and supplies, such as weapons, armor, ammo, health kits, and more.
-
However, there is a catch: only one Apex Camp is active at a time, and it is guarded by Kong. Players will have to fight their way through Kong's attacks and other enemies to reach the camp and loot it. The active camp will change every few minutes, so players will have to keep an eye on the mini-map to know where to go next.
-
The new vehicle (Coupe RB) and the new shooting mode (OTS)
-
PUBG Mobile Godzilla Modu also introduces a new vehicle and a new shooting mode to the game. The new vehicle is called Coupe RB, and it is a sports car that can fit two players. It has high speed and acceleration, but low durability and stability. It can be found on Erangel, Miramar, Sanhok, and Livik.
-
The new shooting mode is called OTS (Over The Shoulder), and it is an alternative to TPP (Third Person Perspective) and FPP (First Person Perspective). OTS mode allows players to aim more accurately over their shoulder without using the scope or iron sight. It also reduces the recoil of weapons, but increases the weapon sway. OTS mode can be toggled on or off by pressing a button on the screen. OTS mode can be used on all maps and modes, except for FPP-only modes.
-
Conclusion
-
PUBG Mobile Godzilla Modu is a new and exciting mode that brings the epic monsters from the Godzilla vs Kong movie to the game. It offers a unique and thrilling gameplay experience, as players have to survive and fight against the titans, while also competing with other players. The mode also introduces new features and changes, such as Titan Crystals, Apex Camps, Coupe RB, and OTS mode. If you are a fan of PUBG Mobile and Godzilla vs Kong, you should definitely try out this mode and enjoy the action-packed adventure.
-
To download and play PUBG Mobile Godzilla Modu, you will need to update your game to the latest version (1.4) by using the Google Play Store or by downloading the APK file from the official website of PUBG Mobile. However, please note that PUBG Mobile is banned in some countries, such as India, and we do not recommend or endorse playing the game in those regions.
-
We hope that this article has helped you to learn more about PUBG Mobile Godzilla Modu and how to download and play it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!
-
FAQs
-
Q: How long will PUBG Mobile Godzilla Modu last?
-
A: According to the official announcement, PUBG Mobile Godzilla Modu will last until June 8, 2023. However, there might be extensions or changes depending on the feedback and popularity of the mode.
-
Q: Can I play PUBG Mobile Godzilla Modu with my friends?
-
A: Yes, you can play PUBG Mobile Godzilla Modu with your friends in squad mode or duo mode. You can also invite your friends to join your team or match with random players online.
-
Q: How can I get more Titan Crystals?
-
A: You can get more Titan Crystals by finding them on the maps where the titans appear. They are dropped by the titans or by special helicopters that fly over the maps. You can also get them by completing missions or events related to the mode.
-
Q: What are the benefits of playing PUBG Mobile Godzilla Modu?
-
A: Playing PUBG Mobile Godzilla Modu can give you several benefits, such as:
-
-
Enjoying a new and fun gameplay experience with giant monsters and special abilities.
-
Earning rewards and achievements related to the mode, such as skins, outfits, emotes, and more.
-
Improving your skills and strategies by facing different challenges and scenarios.
-
Supporting the collaboration between PUBG Mobile and Godzilla vs Kong movie.
-
-
Q: Is PUBG Mobile Godzilla Modu safe and legal to play?
-
A: PUBG Mobile Godzilla Modu is safe and legal to play in most countries where PUBG Mobile is available. However, there are some countries where PUBG Mobile is banned or restricted, such as India, Pakistan, China, and others. In those countries, playing PUBG Mobile Godzilla Modu might be risky or illegal, and we do not recommend or endorse doing so. Please follow the laws and regulations of your country and respect the rights of others.
-
-
\ No newline at end of file
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/optim/linear_warmup_lr_scheduler.py b/spaces/AIConsultant/MusicGen/audiocraft/optim/linear_warmup_lr_scheduler.py
deleted file mode 100644
index 03274a1ae52b6f20473973b77619f34b2bddd6a1..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/optim/linear_warmup_lr_scheduler.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-from torch.optim import Optimizer
-from torch.optim.lr_scheduler import _LRScheduler
-
-
-class LinearWarmupLRScheduler(_LRScheduler):
- """Inverse square root LR scheduler.
-
- Args:
- optimizer (Optimizer): Torch optimizer.
- warmup_steps (int): Number of warmup steps.
- warmup_init_lr (tp.Optional[float]): Initial learning rate
- during warmup phase. When not set, use the provided learning rate.
- """
- def __init__(self, optimizer: Optimizer, warmup_steps: int, warmup_init_lr: tp.Optional[float] = 0):
- self.warmup_steps = warmup_steps
- self.warmup_init_lr = warmup_init_lr
- super().__init__(optimizer)
-
- def _get_sched_lr(self, lr: float, step: int):
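- # Linearly ramp from warmup_init_lr to the base LR over warmup_steps; after warmup, return the base LR unchanged.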
- if step < self.warmup_steps:
- warmup_init_lr = self.warmup_init_lr or 0
- lr_step = (lr - warmup_init_lr) / self.warmup_steps
- lr = warmup_init_lr + step * lr_step
- return lr
-
- def get_lr(self):
- return [self._get_sched_lr(base_lr, self.last_epoch) for base_lr in self.base_lrs]
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/wav_processors/base_processor.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/wav_processors/base_processor.py
deleted file mode 100644
index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/wav_processors/base_processor.py
+++ /dev/null
@@ -1,25 +0,0 @@
-REGISTERED_WAV_PROCESSORS = {}
-
-
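- # Decorator factory: register the decorated class under `name` and return it unchanged.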
-def register_wav_processors(name):
- def _f(cls):
- REGISTERED_WAV_PROCESSORS[name] = cls
- return cls
-
- return _f
-
-
-def get_wav_processor_cls(name):
- return REGISTERED_WAV_PROCESSORS.get(name, None)
-
-
-class BaseWavProcessor:
- @property
- def name(self):
- raise NotImplementedError
-
- def output_fn(self, input_fn):
- return f'{input_fn[:-4]}_{self.name}.wav'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- raise NotImplementedError
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/resample.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/resample.py
deleted file mode 100644
index 750e6c3402cc5ac939c4b9d075246562e0e1d1a7..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/vocoder/bigvgan/alias_free_torch/resample.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
-# LICENSE is in incl_licenses directory.
-
-import torch.nn as nn
-from torch.nn import functional as F
-from .filter import LowPassFilter1d
-from .filter import kaiser_sinc_filter1d
-
-
-class UpSample1d(nn.Module):
- def __init__(self, ratio=2, kernel_size=None):
- super().__init__()
- self.ratio = ratio
- self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size
- self.stride = ratio
- self.pad = self.kernel_size // ratio - 1
- self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2
- self.pad_right = self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2
- filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio,
- half_width=0.6 / ratio,
- kernel_size=self.kernel_size)
- self.register_buffer("filter", filter)
-
- # x: [B, C, T]
- def forward(self, x):
- _, C, _ = x.shape
-
- x = F.pad(x, (self.pad, self.pad), mode='replicate')
- x = self.ratio * F.conv_transpose1d(
- x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C)
- x = x[..., self.pad_left:-self.pad_right]
-
- return x
-
-
-class DownSample1d(nn.Module):
- def __init__(self, ratio=2, kernel_size=None):
- super().__init__()
- self.ratio = ratio
- self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size
- self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio,
- half_width=0.6 / ratio,
- stride=ratio,
- kernel_size=self.kernel_size)
-
- def forward(self, x):
- xx = self.lowpass(x)
-
- return xx
\ No newline at end of file
diff --git a/spaces/AILab-CVC/EvalCrafter/app.py b/spaces/AILab-CVC/EvalCrafter/app.py
deleted file mode 100644
index de3784cb7d4648561c13cec0bbb64ec8631d23bd..0000000000000000000000000000000000000000
--- a/spaces/AILab-CVC/EvalCrafter/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-"""
-Adapted from the SEED-Bench Leaderboard by AILab-CVC
-Source: https://huggingface.co/spaces/AILab-CVC/SEED-Bench_Leaderboard
-"""
-
-__all__ = ['block', 'make_clickable_model', 'make_clickable_user', 'get_submissions']
-
-import gradio as gr
-import pandas as pd
-import json
-import pdb
-import tempfile
-
-from constants import *
-from src.auto_leaderboard.model_metadata_type import ModelType
-
-global data_component, filter_component
-
-
-def upload_file(files):
- file_paths = [file.name for file in files]
- return file_paths
-
-def get_baseline_df():
- df = pd.read_csv(CSV_DIR)
- df = df.sort_values(by="Final Sum Score", ascending=False)
- present_columns = MODEL_INFO + checkbox_group.value
- df = df[present_columns]
- print(df)
- return df
-
-def get_all_df():
- df = pd.read_csv(CSV_DIR)
- df = df.sort_values(by="Final Sum Score", ascending=False)
- print(df)
- return df
-
-block = gr.Blocks()
-
-
-with block:
- gr.Markdown(
- LEADERBORAD_INTRODUCTION
- )
- with gr.Tabs(elem_classes="tab-buttons") as tabs:
- with gr.TabItem("🏅 EvalCrafter Benchmark", elem_id="evalcrafter-benchmark-tab-table", id=0):
-
- gr.Markdown(
- TABLE_INTRODUCTION
- )
-
- # selection for column part:
- checkbox_group = gr.CheckboxGroup(
- choices=TASK_INFO_v2,
- value=AVG_INFO,
- label="Select options",
- interactive=True,
- )
-
- # Create the dataframe component
- # pdb.set_trace()
- data_component = gr.components.Dataframe(
- value=get_baseline_df,
- headers=COLUMN_NAMES,
- type="pandas",
- datatype=DATA_TITILE_TYPE,
- interactive=False,
- visible=True,
- )
-
- def on_checkbox_group_change(selected_columns):
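- # Rebuild the leaderboard table showing only the selected metric columns, preserving their original order.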
- # pdb.set_trace()
- selected_columns = [item for item in TASK_INFO_v2 if item in selected_columns]
- present_columns = MODEL_INFO + selected_columns
- updated_data = get_all_df()[present_columns]
- updated_data = updated_data.sort_values(by=present_columns[3], ascending=False)
- updated_headers = present_columns
- update_datatype = [DATA_TITILE_TYPE[COLUMN_NAMES.index(x)] for x in updated_headers]
-
- # pdb.set_trace()
- filter_component = gr.components.Dataframe(
- value=updated_data,
- headers=updated_headers,
- type="pandas",
- datatype=update_datatype,
- interactive=False,
- visible=True,
- )
- # pdb.set_trace()
- return filter_component.value
-
- # Bind the checkbox group to its change handler
- checkbox_group.change(fn=on_checkbox_group_change, inputs=checkbox_group, outputs=data_component)
-
-
- # table 2
- with gr.TabItem("📝 About", elem_id="evalcrafter-benchmark-tab-table", id=2):
- gr.Markdown(LEADERBORAD_INFO, elem_classes="markdown-text")
-
-
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_baseline_df, outputs=data_component
- )
-
- gr.Markdown(r"""
- Please cite this paper if you find it useful ♥️:
-
- ```bibtex
- @inproceedings{Liu2023EvalCrafterBA,
- title={EvalCrafter: Benchmarking and Evaluating Large Video Generation Models},
- author={Yaofang Liu and Xiaodong Cun and Xuebo Liu and Xintao Wang and Yong Zhang and Haoxin Chen and Yang Liu and Tieyong Zeng and Raymond Chan and Ying Shan},
- year={2023},
- url={https://api.semanticscholar.org/CorpusID:264172222}
- }
- ```
- """)
- # block.load(get_baseline_df, outputs=data_title)
-
-block.launch(share=False)
\ No newline at end of file
diff --git a/spaces/ALSv/FSW/roop/processors/frame/core.py b/spaces/ALSv/FSW/roop/processors/frame/core.py
deleted file mode 100644
index 498169d34a00e0a2547940380afd69967a2eca8c..0000000000000000000000000000000000000000
--- a/spaces/ALSv/FSW/roop/processors/frame/core.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import os
-import sys
-import importlib
-import psutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from queue import Queue
-from types import ModuleType
-from typing import Any, List, Callable
-from tqdm import tqdm
-
-import roop
-
-FRAME_PROCESSORS_MODULES: List[ModuleType] = []
-FRAME_PROCESSORS_INTERFACE = [
- 'pre_check',
- 'pre_start',
- 'process_frame',
- 'process_frames',
- 'process_image',
- 'process_video',
- 'post_process'
-]
-
-
-def load_frame_processor_module(frame_processor: str) -> Any:
- try:
- frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}')
- for method_name in FRAME_PROCESSORS_INTERFACE:
- if not hasattr(frame_processor_module, method_name):
- raise NotImplementedError
- except ModuleNotFoundError:
- sys.exit(f'Frame processor {frame_processor} not found.')
- except NotImplementedError:
- sys.exit(f'Frame processor {frame_processor} not implemented correctly.')
- return frame_processor_module
-
-
-def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]:
- global FRAME_PROCESSORS_MODULES
-
- if not FRAME_PROCESSORS_MODULES:
- for frame_processor in frame_processors:
- frame_processor_module = load_frame_processor_module(frame_processor)
- FRAME_PROCESSORS_MODULES.append(frame_processor_module)
- return FRAME_PROCESSORS_MODULES
-
-
-def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None:
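- # Split the frame paths into per-future chunks and process them concurrently on a thread pool.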
- with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor:
- futures = []
- queue = create_queue(temp_frame_paths)
- queue_per_future = max(len(temp_frame_paths) // roop.globals.execution_threads, 1)
- while not queue.empty():
- future = executor.submit(process_frames, source_path, pick_queue(queue, queue_per_future), update)
- futures.append(future)
- for future in as_completed(futures):
- future.result()
-
-
-def create_queue(temp_frame_paths: List[str]) -> Queue[str]:
- queue: Queue[str] = Queue()
- for frame_path in temp_frame_paths:
- queue.put(frame_path)
- return queue
-
-
-def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]:
- queues = []
- for _ in range(queue_per_future):
- if not queue.empty():
- queues.append(queue.get())
- return queues
-
-
-def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None:
- progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'
- total = len(frame_paths)
- with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress:
- multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress))
-
-
-def update_progress(progress: Any = None) -> None:
- process = psutil.Process(os.getpid())
- memory_usage = process.memory_info().rss / 1024 / 1024 / 1024
- progress.set_postfix({
- 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB',
- 'execution_providers': roop.globals.execution_providers,
- 'execution_threads': roop.globals.execution_threads
- })
- progress.refresh()
- progress.update(1)
diff --git a/spaces/AONYLMR/White-box-Cartoonization/wbc/guided_filter.py b/spaces/AONYLMR/White-box-Cartoonization/wbc/guided_filter.py
deleted file mode 100644
index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000
--- a/spaces/AONYLMR/White-box-Cartoonization/wbc/guided_filter.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import tensorflow as tf
-import numpy as np
-
-
-
-
-def tf_box_filter(x, r):
- k_size = int(2*r+1)
- ch = x.get_shape().as_list()[-1]
- weight = 1/(k_size**2)
- box_kernel = weight*np.ones((k_size, k_size, ch, 1))
- box_kernel = np.array(box_kernel).astype(np.float32)
- output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME')
- return output
-
-
-
-def guided_filter(x, y, r, eps=1e-2):
-
- x_shape = tf.shape(x)
- #y_shape = tf.shape(y)
-
- N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r)
-
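- # Box-filtered local statistics (means, covariance, variance) of guide x and target y.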
- mean_x = tf_box_filter(x, r) / N
- mean_y = tf_box_filter(y, r) / N
- cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x
-
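- # Per-pixel coefficients of the local linear model: output = A * x + b within each window.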
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf_box_filter(A, r) / N
- mean_b = tf_box_filter(b, r) / N
-
- output = mean_A * x + mean_b
-
- return output
-
-
-
-def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8):
-
- #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4
-
- lr_x_shape = tf.shape(lr_x)
- #lr_y_shape = tf.shape(lr_y)
- hr_x_shape = tf.shape(hr_x)
-
- N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r)
-
- mean_x = tf_box_filter(lr_x, r) / N
- mean_y = tf_box_filter(lr_y, r) / N
- cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y
- var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x
-
- A = cov_xy / (var_x + eps)
- b = mean_y - A * mean_x
-
- mean_A = tf.image.resize_images(A, hr_x_shape[1: 3])
- mean_b = tf.image.resize_images(b, hr_x_shape[1: 3])
-
- output = mean_A * hr_x + mean_b
-
- return output
-
-
-if __name__ == '__main__':
- import cv2
- from tqdm import tqdm
-
- input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
- #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3])
- output = guided_filter(input_photo, input_photo, 5, eps=1)
- image = cv2.imread('output_figure1/cartoon2.jpg')
- image = image/127.5 - 1
- image = np.expand_dims(image, axis=0)
-
- config = tf.ConfigProto()
- config.gpu_options.allow_growth = True
- sess = tf.Session(config=config)
- sess.run(tf.global_variables_initializer())
-
- out = sess.run(output, feed_dict={input_photo: image})
- out = (np.squeeze(out)+1)*127.5
- out = np.clip(out, 0, 255).astype(np.uint8)
- cv2.imwrite('output_figure1/cartoon2_filter.jpg', out)
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Acytoo.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Acytoo.py
deleted file mode 100644
index d36ca6da22ddfa43690abdd0db27e6f971320f93..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Acytoo.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from __future__ import annotations
-
-from aiohttp import ClientSession
-
-from ..typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-
-
-class Acytoo(AsyncGeneratorProvider):
- url = 'https://chat.acytoo.com'
- working = True
- supports_gpt_35_turbo = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- proxy: str = None,
- **kwargs
- ) -> AsyncGenerator:
-
- async with ClientSession(
- headers=_create_header()
- ) as session:
- async with session.post(
- cls.url + '/api/completions',
- proxy=proxy,
- json=_create_payload(messages, **kwargs)
- ) as response:
- response.raise_for_status()
- async for stream in response.content.iter_any():
- if stream:
- yield stream.decode()
-
-
-def _create_header():
- return {
- 'accept': '*/*',
- 'content-type': 'application/json',
- }
-
-
-def _create_payload(messages: list[dict[str, str]], temperature: float = 0.5, **kwargs):
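- # The payload always requests gpt-3.5-turbo; the provider's `model` argument is not forwarded here.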
- return {
- 'key' : '',
- 'model' : 'gpt-3.5-turbo',
- 'messages' : messages,
- 'temperature' : temperature,
- 'password' : ''
- }
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PointerUpDownCell.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PointerUpDownCell.js
deleted file mode 100644
index 00cb1480b93aa4c6ae6316d94579cb4818fe9b94..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/PointerUpDownCell.js
+++ /dev/null
@@ -1,13 +0,0 @@
-import EmitCellEvent from './EmitCellEvent.js';
-
-var PointerUpDownCell = function (table, tableConfig) {
- table
- .on('pointerdown', function (pointer, localX, localY, event) {
- EmitCellEvent(this.eventEmitter, 'cell.down', table, pointer.worldX, pointer.worldY, pointer, event);
- }, this)
- .on('pointerup', function (pointer, localX, localY, event) {
- EmitCellEvent(this.eventEmitter, 'cell.up', table, pointer.worldX, pointer.worldY, pointer, event);
- }, this)
-}
-
-export default PointerUpDownCell;
\ No newline at end of file
diff --git a/spaces/Alpaca233/SadTalker/src/audio2pose_models/res_unet.py b/spaces/Alpaca233/SadTalker/src/audio2pose_models/res_unet.py
deleted file mode 100644
index f2611e1d1a9bf233507427b34928fca60e094224..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/audio2pose_models/res_unet.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import torch
-import torch.nn as nn
-from src.audio2pose_models.networks import ResidualConv, Upsample
-
-
-class ResUnet(nn.Module):
- def __init__(self, channel=1, filters=[32, 64, 128, 256]):
- super(ResUnet, self).__init__()
-
- self.input_layer = nn.Sequential(
- nn.Conv2d(channel, filters[0], kernel_size=3, padding=1),
- nn.BatchNorm2d(filters[0]),
- nn.ReLU(),
- nn.Conv2d(filters[0], filters[0], kernel_size=3, padding=1),
- )
- self.input_skip = nn.Sequential(
- nn.Conv2d(channel, filters[0], kernel_size=3, padding=1)
- )
-
- self.residual_conv_1 = ResidualConv(filters[0], filters[1], stride=(2,1), padding=1)
- self.residual_conv_2 = ResidualConv(filters[1], filters[2], stride=(2,1), padding=1)
-
- self.bridge = ResidualConv(filters[2], filters[3], stride=(2,1), padding=1)
-
- self.upsample_1 = Upsample(filters[3], filters[3], kernel=(2,1), stride=(2,1))
- self.up_residual_conv1 = ResidualConv(filters[3] + filters[2], filters[2], stride=1, padding=1)
-
- self.upsample_2 = Upsample(filters[2], filters[2], kernel=(2,1), stride=(2,1))
- self.up_residual_conv2 = ResidualConv(filters[2] + filters[1], filters[1], stride=1, padding=1)
-
- self.upsample_3 = Upsample(filters[1], filters[1], kernel=(2,1), stride=(2,1))
- self.up_residual_conv3 = ResidualConv(filters[1] + filters[0], filters[0], stride=1, padding=1)
-
- self.output_layer = nn.Sequential(
- nn.Conv2d(filters[0], 1, 1, 1),
- nn.Sigmoid(),
- )
-
- def forward(self, x):
- # Encode
- x1 = self.input_layer(x) + self.input_skip(x)
- x2 = self.residual_conv_1(x1)
- x3 = self.residual_conv_2(x2)
- # Bridge
- x4 = self.bridge(x3)
-
- # Decode
- x4 = self.upsample_1(x4)
- x5 = torch.cat([x4, x3], dim=1)
-
- x6 = self.up_residual_conv1(x5)
-
- x6 = self.upsample_2(x6)
- x7 = torch.cat([x6, x2], dim=1)
-
- x8 = self.up_residual_conv2(x7)
-
- x8 = self.upsample_3(x8)
- x9 = torch.cat([x8, x1], dim=1)
-
- x10 = self.up_residual_conv3(x9)
-
- output = self.output_layer(x10)
-
- return output
\ No newline at end of file
diff --git a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/Alycer/VITS-Umamusume-voice-synthesizer/hubert_model.py
deleted file mode 100644
index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000
--- a/spaces/Alycer/VITS-Umamusume-voice-synthesizer/hubert_model.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import copy
-from typing import Optional, Tuple
-import random
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
-
-class Hubert(nn.Module):
- def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
- super().__init__()
- self._mask = mask
- self.feature_extractor = FeatureExtractor()
- self.feature_projection = FeatureProjection()
- self.positional_embedding = PositionalConvEmbedding()
- self.norm = nn.LayerNorm(768)
- self.dropout = nn.Dropout(0.1)
- self.encoder = TransformerEncoder(
- nn.TransformerEncoderLayer(
- 768, 12, 3072, activation="gelu", batch_first=True
- ),
- 12,
- )
- self.proj = nn.Linear(768, 256)
-
- self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
- self.label_embedding = nn.Embedding(num_label_embeddings, 256)
-
- def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
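- # During training, randomly mask contiguous spans of frames (SpecAugment-style) and replace them with the learned mask embedding.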
- mask = None
- if self.training and self._mask:
- mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
- x[mask] = self.masked_spec_embed.to(x.dtype)
- return x, mask
-
- def encode(
- self, x: torch.Tensor, layer: Optional[int] = None
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- x = self.feature_extractor(x)
- x = self.feature_projection(x.transpose(1, 2))
- x, mask = self.mask(x)
- x = x + self.positional_embedding(x)
- x = self.dropout(self.norm(x))
- x = self.encoder(x, output_layer=layer)
- return x, mask
-
- def logits(self, x: torch.Tensor) -> torch.Tensor:
- logits = torch.cosine_similarity(
- x.unsqueeze(2),
- self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
- dim=-1,
- )
- return logits / 0.1
-
- def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
- x, mask = self.encode(x)
- x = self.proj(x)
- logits = self.logits(x)
- return logits, mask
-
-
-class HubertSoft(Hubert):
- def __init__(self):
- super().__init__()
-
- @torch.inference_mode()
- def units(self, wav: torch.Tensor) -> torch.Tensor:
- wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
- x, _ = self.encode(wav)
- return self.proj(x)
-
-
-class FeatureExtractor(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
- self.norm0 = nn.GroupNorm(512, 512)
- self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
- self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
- self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = F.gelu(self.norm0(self.conv0(x)))
- x = F.gelu(self.conv1(x))
- x = F.gelu(self.conv2(x))
- x = F.gelu(self.conv3(x))
- x = F.gelu(self.conv4(x))
- x = F.gelu(self.conv5(x))
- x = F.gelu(self.conv6(x))
- return x
-
-
-class FeatureProjection(nn.Module):
- def __init__(self):
- super().__init__()
- self.norm = nn.LayerNorm(512)
- self.projection = nn.Linear(512, 768)
- self.dropout = nn.Dropout(0.1)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.norm(x)
- x = self.projection(x)
- x = self.dropout(x)
- return x
-
-
-class PositionalConvEmbedding(nn.Module):
- def __init__(self):
- super().__init__()
- self.conv = nn.Conv1d(
- 768,
- 768,
- kernel_size=128,
- padding=128 // 2,
- groups=16,
- )
- self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.conv(x.transpose(1, 2))
- x = F.gelu(x[:, :, :-1])
- return x.transpose(1, 2)
-
-
-class TransformerEncoder(nn.Module):
- def __init__(
- self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
- ) -> None:
- super(TransformerEncoder, self).__init__()
- self.layers = nn.ModuleList(
- [copy.deepcopy(encoder_layer) for _ in range(num_layers)]
- )
- self.num_layers = num_layers
-
- def forward(
- self,
- src: torch.Tensor,
- mask: torch.Tensor = None,
- src_key_padding_mask: torch.Tensor = None,
- output_layer: Optional[int] = None,
- ) -> torch.Tensor:
- output = src
- for layer in self.layers[:output_layer]:
- output = layer(
- output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
- )
- return output
-
-
-def _compute_mask(
- shape: Tuple[int, int],
- mask_prob: float,
- mask_length: int,
- device: torch.device,
- min_masks: int = 0,
-) -> torch.Tensor:
- batch_size, sequence_length = shape
-
- if mask_length < 1:
- raise ValueError("`mask_length` has to be bigger than 0.")
-
- if mask_length > sequence_length:
- raise ValueError(
- f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
- )
-
- # compute number of masked spans in batch
- num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
- num_masked_spans = max(num_masked_spans, min_masks)
-
- # make sure num masked indices <= sequence_length
- if num_masked_spans * mask_length > sequence_length:
- num_masked_spans = sequence_length // mask_length
-
- # SpecAugment mask to fill
- mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
-
- # uniform distribution to sample from, make sure that offset samples are < sequence_length
- uniform_dist = torch.ones(
- (batch_size, sequence_length - (mask_length - 1)), device=device
- )
-
- # get random indices to mask
- mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
-
- # expand masked indices to masked spans
- mask_indices = (
- mask_indices.unsqueeze(dim=-1)
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- offsets = (
- torch.arange(mask_length, device=device)[None, None, :]
- .expand((batch_size, num_masked_spans, mask_length))
- .reshape(batch_size, num_masked_spans * mask_length)
- )
- mask_idxs = mask_indices + offsets
-
- # scatter indices to mask
- mask = mask.scatter(1, mask_idxs, True)
-
- return mask
-
-
-def hubert_soft(
- path: str
-) -> HubertSoft:
- r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
- Args:
- path (str): path of a pretrained model
- """
- hubert = HubertSoft()
- checkpoint = torch.load(path)
- consume_prefix_in_state_dict_if_present(checkpoint, "module.")
- hubert.load_state_dict(checkpoint)
- hubert.eval()
- return hubert
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/singlestep_dpm_solver.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/singlestep_dpm_solver.md
deleted file mode 100644
index 7142e0ded5a7833fd61bcbc1ae7018e0472c6fde..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/singlestep_dpm_solver.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
-# Singlestep DPM-Solver
-
-## Overview
-
-Original paper can be found [here](https://arxiv.org/abs/2206.00927) and the [improved version](https://arxiv.org/abs/2211.01095). The original implementation can be found [here](https://github.com/LuChengTHU/dpm-solver).
-
-## DPMSolverSinglestepScheduler
-[[autodoc]] DPMSolverSinglestepScheduler
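-
-## Usage example
-
-A minimal usage sketch (the model id and prompt below are illustrative, not part of the original doc): load a pipeline, then swap in the singlestep DPM-Solver by reusing the pipeline's existing scheduler configuration.
-
-```python
-from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler
-
-# Load any Stable Diffusion pipeline (model id is an example)
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
-# Replace the default scheduler, keeping the pipeline's scheduler config
-pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)
-
-# DPM-Solver targets good quality with relatively few sampling steps
-image = pipe("a photo of an astronaut riding a horse", num_inference_steps=25).images[0]
-```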
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
deleted file mode 100644
index cad82cb71940a28e78e70419ed80ebb2f55cb144..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
+++ /dev/null
@@ -1,494 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import warnings
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-import torch.nn.functional as F
-from transformers import CLIPTextModel, CLIPTokenizer
-
-from ...image_processor import VaeImageProcessor
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...schedulers import EulerDiscreteScheduler
-from ...utils import logging, randn_tensor
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.preprocess
-def preprocess(image):
- warnings.warn(
- "The preprocess method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor.preprocess instead",
- FutureWarning,
- )
- if isinstance(image, torch.Tensor):
- return image
- elif isinstance(image, PIL.Image.Image):
- image = [image]
-
- if isinstance(image[0], PIL.Image.Image):
- w, h = image[0].size
- w, h = (x - x % 64 for x in (w, h)) # resize to integer multiple of 64
-
- image = [np.array(i.resize((w, h)))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- image = np.array(image).astype(np.float32) / 255.0
- image = image.transpose(0, 3, 1, 2)
- image = 2.0 * image - 1.0
- image = torch.from_numpy(image)
- elif isinstance(image[0], torch.Tensor):
- image = torch.cat(image, dim=0)
- return image
-
-
-class StableDiffusionLatentUpscalePipeline(DiffusionPipeline):
- r"""
- Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder ([`~transformers.CLIPTextModel`]):
- Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- tokenizer ([`~transformers.CLIPTokenizer`]):
- A `CLIPTokenizer` to tokenize text.
- unet ([`UNet2DConditionModel`]):
- A `UNet2DConditionModel` to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A [`EulerDiscreteScheduler`] to be used in combination with `unet` to denoise the encoded image latents.
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: EulerDiscreteScheduler,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor, resample="bicubic")
-
- def _encode_prompt(self, prompt, device, do_classifier_free_guidance, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `list(int)`):
- prompt to be encoded
- device: (`torch.device`):
- torch device
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- """
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_length=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
-
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- text_encoder_out = self.text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
- text_embeddings = text_encoder_out.hidden_states[-1]
- text_pooler_out = text_encoder_out.pooler_output
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_length=True,
- return_tensors="pt",
- )
-
- uncond_encoder_out = self.text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
-
- uncond_embeddings = uncond_encoder_out.hidden_states[-1]
- uncond_pooler_out = uncond_encoder_out.pooler_output
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
- text_pooler_out = torch.cat([uncond_pooler_out, text_pooler_out])
-
- return text_embeddings, text_pooler_out
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- def check_inputs(self, prompt, image, callback_steps):
- if not isinstance(prompt, str) and not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if (
- not isinstance(image, torch.Tensor)
- and not isinstance(image, PIL.Image.Image)
- and not isinstance(image, list)
- ):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or `list` but is {type(image)}"
- )
-
- # verify batch size of prompt and image are same if image is a list or tensor
- if isinstance(image, list) or isinstance(image, torch.Tensor):
- if isinstance(prompt, str):
- batch_size = 1
- else:
- batch_size = len(prompt)
- if isinstance(image, list):
- image_batch_size = len(image)
- else:
- image_batch_size = image.shape[0] if image.ndim == 4 else 1
- if batch_size != image_batch_size:
- raise ValueError(
- f"`prompt` has batch size {batch_size} and `image` has batch size {image_batch_size}."
- " Please make sure that passed `prompt` matches the batch size of `image`."
- )
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height, width)
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- np.ndarray,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- List[np.ndarray],
- ] = None,
- num_inference_steps: int = 75,
- guidance_scale: float = 9.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- ):
- r"""
- The call function to the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide image upscaling.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
- `Image` or tensor representing an image batch to be upscaled. If it's a tensor, it can be either a
- latent output from a Stable Diffusion model or an image tensor in the range `[-1, 1]`. It is considered
- a `latent` if `image.shape[1]` is `4`; otherwise, it is considered to be an image representation and
- encoded using this pipeline's `vae` encoder.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide what to not include in image generation. If not defined, you need to
- pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
- to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor is generated by sampling using the supplied random `generator`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that calls every `callback_steps` steps during inference. The function is called with the
- following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
-
- Examples:
- ```py
- >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
- >>> import torch
-
-
- >>> pipeline = StableDiffusionPipeline.from_pretrained(
- ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
- ... )
- >>> pipeline.to("cuda")
-
- >>> model_id = "stabilityai/sd-x2-latent-upscaler"
- >>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
- >>> upscaler.to("cuda")
-
- >>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
- >>> generator = torch.manual_seed(33)
-
- >>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images
-
- >>> with torch.no_grad():
- ... image = pipeline.decode_latents(low_res_latents)
- >>> image = pipeline.numpy_to_pil(image)[0]
-
- >>> image.save("../images/a1.png")
-
- >>> upscaled_image = upscaler(
- ... prompt=prompt,
- ... image=low_res_latents,
- ... num_inference_steps=20,
- ... guidance_scale=0,
- ... generator=generator,
- ... ).images[0]
-
- >>> upscaled_image.save("../images/a2.png")
- ```
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
- otherwise a `tuple` is returned where the first element is a list with the generated images.
- """
-
- # 1. Check inputs
- self.check_inputs(prompt, image, callback_steps)
-
- # 2. Define call parameters
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if guidance_scale == 0:
- prompt = [""] * batch_size
-
- # 3. Encode input prompt
- text_embeddings, text_pooler_out = self._encode_prompt(
- prompt, device, do_classifier_free_guidance, negative_prompt
- )
-
- # 4. Preprocess image
- image = self.image_processor.preprocess(image)
- image = image.to(dtype=text_embeddings.dtype, device=device)
- if image.shape[1] == 3:
- # encode image if not in latent-space yet
- image = self.vae.encode(image).latent_dist.sample() * self.vae.config.scaling_factor
-
- # 5. set timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps = self.scheduler.timesteps
-
- batch_multiplier = 2 if do_classifier_free_guidance else 1
- image = image[None, :] if image.ndim == 3 else image
- image = torch.cat([image] * batch_multiplier)
-
- # 5. Add noise to image (set to be 0):
- # (see below notes from the author):
- # "the This step theoretically can make the model work better on out-of-distribution inputs, but mostly just seems to make it match the input less, so it's turned off by default."
- noise_level = torch.tensor([0.0], dtype=torch.float32, device=device)
- noise_level = torch.cat([noise_level] * image.shape[0])
- inv_noise_level = (noise_level**2 + 1) ** (-0.5)
-
- image_cond = F.interpolate(image, scale_factor=2, mode="nearest") * inv_noise_level[:, None, None, None]
- image_cond = image_cond.to(text_embeddings.dtype)
-
- noise_level_embed = torch.cat(
- [
- torch.ones(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device),
- torch.zeros(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device),
- ],
- dim=1,
- )
-
- timestep_condition = torch.cat([noise_level_embed, text_pooler_out], dim=1)
-
- # 6. Prepare latent variables
- height, width = image.shape[2:]
- num_channels_latents = self.vae.config.latent_channels
- latents = self.prepare_latents(
- batch_size,
- num_channels_latents,
- height * 2, # 2x upscale
- width * 2,
- text_embeddings.dtype,
- device,
- generator,
- latents,
- )
-
- # 7. Check that sizes of image and latents match
- num_channels_image = image.shape[1]
- if num_channels_latents + num_channels_image != self.unet.config.in_channels:
- raise ValueError(
- f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects"
- f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +"
- f" `num_channels_image`: {num_channels_image} "
- f" = {num_channels_latents+num_channels_image}. Please verify the config of"
- " `pipeline.unet` or your `image` input."
- )
-
- # 9. Denoising loop
- num_warmup_steps = 0
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- sigma = self.scheduler.sigmas[i]
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- scaled_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- scaled_model_input = torch.cat([scaled_model_input, image_cond], dim=1)
- # preconditioning parameter based on Karras et al. (2022) (table 1)
- timestep = torch.log(sigma) * 0.25
-
- noise_pred = self.unet(
- scaled_model_input,
- timestep,
- encoder_hidden_states=text_embeddings,
- timestep_cond=timestep_condition,
- ).sample
-
- # in original repo, the output contains a variance channel that's not used
- noise_pred = noise_pred[:, :-1]
-
- # apply preconditioning, based on table 1 in Karras et al. (2022)
- inv_sigma = 1 / (sigma**2 + 1)
- noise_pred = inv_sigma * latent_model_input + self.scheduler.scale_model_input(sigma, t) * noise_pred
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents).prev_sample
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- if not output_type == "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- else:
- image = latents
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py
deleted file mode 100644
index 8db8ec7810068aab4517fe2066e3fab10a52f6f7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-from diffusers import FlaxDPMSolverMultistepScheduler, FlaxStableDiffusionPipeline
-from diffusers.utils import is_flax_available, slow
-from diffusers.utils.testing_utils import require_flax
-
-
-if is_flax_available():
- import jax
- import jax.numpy as jnp
- from flax.jax_utils import replicate
- from flax.training.common_utils import shard
-
-
-@slow
-@require_flax
-class FlaxStableDiffusion2PipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
-
- def test_stable_diffusion_flax(self):
- sd_pipe, params = FlaxStableDiffusionPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2",
- revision="bf16",
- dtype=jnp.bfloat16,
- )
-
- prompt = "A painting of a squirrel eating a burger"
- num_samples = jax.device_count()
- prompt = num_samples * [prompt]
- prompt_ids = sd_pipe.prepare_inputs(prompt)
-
- params = replicate(params)
- prompt_ids = shard(prompt_ids)
-
- prng_seed = jax.random.PRNGKey(0)
- prng_seed = jax.random.split(prng_seed, jax.device_count())
-
- images = sd_pipe(prompt_ids, params, prng_seed, num_inference_steps=25, jit=True)[0]
- assert images.shape == (jax.device_count(), 1, 768, 768, 3)
-
- images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:])
- image_slice = images[0, 253:256, 253:256, -1]
-
- output_slice = jnp.asarray(jax.device_get(image_slice.flatten()))
- expected_slice = jnp.array([0.4238, 0.4414, 0.4395, 0.4453, 0.4629, 0.4590, 0.4531, 0.45508, 0.4512])
- print(f"output_slice: {output_slice}")
- assert jnp.abs(output_slice - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_dpm_flax(self):
- model_id = "stabilityai/stable-diffusion-2"
- scheduler, scheduler_params = FlaxDPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
- sd_pipe, params = FlaxStableDiffusionPipeline.from_pretrained(
- model_id,
- scheduler=scheduler,
- revision="bf16",
- dtype=jnp.bfloat16,
- )
- params["scheduler"] = scheduler_params
-
- prompt = "A painting of a squirrel eating a burger"
- num_samples = jax.device_count()
- prompt = num_samples * [prompt]
- prompt_ids = sd_pipe.prepare_inputs(prompt)
-
- params = replicate(params)
- prompt_ids = shard(prompt_ids)
-
- prng_seed = jax.random.PRNGKey(0)
- prng_seed = jax.random.split(prng_seed, jax.device_count())
-
- images = sd_pipe(prompt_ids, params, prng_seed, num_inference_steps=25, jit=True)[0]
- assert images.shape == (jax.device_count(), 1, 768, 768, 3)
-
- images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:])
- image_slice = images[0, 253:256, 253:256, -1]
-
- output_slice = jnp.asarray(jax.device_get(image_slice.flatten()))
- expected_slice = jnp.array([0.4336, 0.42969, 0.4453, 0.4199, 0.4297, 0.4531, 0.4434, 0.4434, 0.4297])
- print(f"output_slice: {output_slice}")
- assert jnp.abs(output_slice - expected_slice).max() < 1e-2
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/res2net/mask_rcnn_r2_101_fpn_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/res2net/mask_rcnn_r2_101_fpn_2x_coco.py
deleted file mode 100644
index a620188807218a9c80ad89ac6002dda3ea4b830c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/res2net/mask_rcnn_r2_101_fpn_2x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://res2net101_v1d_26w_4s',
- backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26))
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/double_roi_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/double_roi_head.py
deleted file mode 100644
index a1aa6c8244a889fbbed312a89574c3e11be294f0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/double_roi_head.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from ..builder import HEADS
-from .standard_roi_head import StandardRoIHead
-
-
-@HEADS.register_module()
-class DoubleHeadRoIHead(StandardRoIHead):
- """RoI head for Double Head RCNN.
-
- https://arxiv.org/abs/1904.06493
- """
-
- def __init__(self, reg_roi_scale_factor, **kwargs):
- super(DoubleHeadRoIHead, self).__init__(**kwargs)
- self.reg_roi_scale_factor = reg_roi_scale_factor
-
- def _bbox_forward(self, x, rois):
- """Box head forward function used in both training and testing time."""
- bbox_cls_feats = self.bbox_roi_extractor(
- x[:self.bbox_roi_extractor.num_inputs], rois)
- bbox_reg_feats = self.bbox_roi_extractor(
- x[:self.bbox_roi_extractor.num_inputs],
- rois,
- roi_scale_factor=self.reg_roi_scale_factor)
- if self.with_shared_head:
- bbox_cls_feats = self.shared_head(bbox_cls_feats)
- bbox_reg_feats = self.shared_head(bbox_reg_feats)
- cls_score, bbox_pred = self.bbox_head(bbox_cls_feats, bbox_reg_feats)
-
- bbox_results = dict(
- cls_score=cls_score,
- bbox_pred=bbox_pred,
- bbox_feats=bbox_cls_feats)
- return bbox_results
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/AutoGPTQ_loader.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/AutoGPTQ_loader.py
deleted file mode 100644
index 987f5ba7971b0d14bd94c9c9523c6a8ba2fecfe9..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/AutoGPTQ_loader.py
+++ /dev/null
@@ -1,72 +0,0 @@
-from pathlib import Path
-
-from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
-
-import modules.shared as shared
-from modules.logging_colors import logger
-from modules.models import get_max_memory_dict
-
-
-def load_quantized(model_name):
- path_to_model = Path(f'{shared.args.model_dir}/{model_name}')
- pt_path = None
-
- # Find the model checkpoint
- if shared.args.checkpoint:
- pt_path = Path(shared.args.checkpoint)
- else:
- for ext in ['.safetensors', '.pt', '.bin']:
- found = list(path_to_model.glob(f"*{ext}"))
- if len(found) > 0:
- if len(found) > 1:
- logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.')
-
- pt_path = found[-1]
- break
-
- if pt_path is None:
- logger.error("The model could not be loaded because its checkpoint file in .bin/.pt/.safetensors format could not be located.")
- return
-
- use_safetensors = pt_path.suffix == '.safetensors'
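- # If the model folder has no quantize_config.json, build one from the CLI flags (defaults: 4-bit, no grouping).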
- if not (path_to_model / "quantize_config.json").exists():
- quantize_config = BaseQuantizeConfig(
- bits=bits if (bits := shared.args.wbits) > 0 else 4,
- group_size=gs if (gs := shared.args.groupsize) > 0 else -1,
- desc_act=shared.args.desc_act
- )
- else:
- quantize_config = None
-
- # Define the params for AutoGPTQForCausalLM.from_quantized
- params = {
- 'model_basename': pt_path.stem,
- 'device': "cuda:0" if not shared.args.cpu else "cpu",
- 'use_triton': shared.args.triton,
- 'inject_fused_attention': not shared.args.no_inject_fused_attention,
- 'inject_fused_mlp': not shared.args.no_inject_fused_mlp,
- 'use_safetensors': use_safetensors,
- 'trust_remote_code': shared.args.trust_remote_code,
- 'max_memory': get_max_memory_dict(),
- 'quantize_config': quantize_config,
- 'use_cuda_fp16': not shared.args.no_use_cuda_fp16,
- 'disable_exllama': shared.args.disable_exllama,
- }
-
- logger.info(f"The AutoGPTQ params are: {params}")
- model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)
-
- # These lines fix the multimodal extension when used with AutoGPTQ
- if hasattr(model, 'model'):
- if not hasattr(model, 'dtype'):
- if hasattr(model.model, 'dtype'):
- model.dtype = model.model.dtype
-
- if hasattr(model.model, 'model') and hasattr(model.model.model, 'embed_tokens'):
- if not hasattr(model, 'embed_tokens'):
- model.embed_tokens = model.model.model.embed_tokens
-
- if not hasattr(model.model, 'embed_tokens'):
- model.model.embed_tokens = model.model.model.embed_tokens
-
- return model
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/parse.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/parse.py
deleted file mode 100644
index f60f0d611b8d75692221d0edd7dc993b0a6445c9..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/parse.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-
-from io import StringIO
-
-from .file_client import FileClient
-
-
-def list_from_file(filename,
- prefix='',
- offset=0,
- max_num=0,
- encoding='utf-8',
- file_client_args=None):
- """Load a text file and parse the content as a list of strings.
-
- Note:
- In v1.3.16 and later, ``list_from_file`` supports loading a text file
-        which can be stored in different backends and parsing the content as
-        a list of strings.
-
- Args:
- filename (str): Filename.
- prefix (str): The prefix to be inserted to the beginning of each item.
- offset (int): The offset of lines.
- max_num (int): The maximum number of lines to be read,
- zeros and negatives mean no limitation.
- encoding (str): Encoding used to open the file. Default utf-8.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> list_from_file('/path/of/your/file') # disk
- ['hello', 'world']
- >>> list_from_file('s3://path/of/your/file') # ceph or petrel
- ['hello', 'world']
-
- Returns:
- list[str]: A list of strings.
- """
- cnt = 0
- item_list = []
- file_client = FileClient.infer_client(file_client_args, filename)
- with StringIO(file_client.get_text(filename, encoding)) as f:
- for _ in range(offset):
- f.readline()
- for line in f:
- if 0 < max_num <= cnt:
- break
- item_list.append(prefix + line.rstrip('\n\r'))
- cnt += 1
- return item_list
-
-
-def dict_from_file(filename,
- key_type=str,
- encoding='utf-8',
- file_client_args=None):
- """Load a text file and parse the content as a dict.
-
- Each line of the text file will be two or more columns split by
- whitespaces or tabs. The first column will be parsed as dict keys, and
- the following columns will be parsed as dict values.
-
- Note:
- In v1.3.16 and later, ``dict_from_file`` supports loading a text file
-        which can be stored in different backends and parsing the content as
- a dict.
-
- Args:
-        filename (str): Filename.
-        key_type (type): Type of the dict keys. str is used by default and
- type conversion will be performed if specified.
- encoding (str): Encoding used to open the file. Default utf-8.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> dict_from_file('/path/of/your/file') # disk
- {'key1': 'value1', 'key2': 'value2'}
- >>> dict_from_file('s3://path/of/your/file') # ceph or petrel
- {'key1': 'value1', 'key2': 'value2'}
-
- Returns:
- dict: The parsed contents.
- """
- mapping = {}
- file_client = FileClient.infer_client(file_client_args, filename)
- with StringIO(file_client.get_text(filename, encoding)) as f:
- for line in f:
- items = line.rstrip('\n').split()
- assert len(items) >= 2
- key = key_type(items[0])
- val = items[1:] if len(items) > 2 else items[1]
- mapping[key] = val
- return mapping
diff --git a/spaces/Anustup/NS_AI_LABS/src/vad.py b/spaces/Anustup/NS_AI_LABS/src/vad.py
deleted file mode 100644
index 4d11c28eb32953f3829ff4f8a4e4030e22a22140..0000000000000000000000000000000000000000
--- a/spaces/Anustup/NS_AI_LABS/src/vad.py
+++ /dev/null
@@ -1,477 +0,0 @@
-from abc import ABC, abstractmethod
-from collections import Counter, deque
-
-from typing import Any, Deque, Iterator, List, Dict
-
-from pprint import pprint
-
-from src.segments import merge_timestamps
-
-# Workaround for https://github.com/tensorflow/tensorflow/issues/48797
-try:
- import tensorflow as tf
-except ModuleNotFoundError:
- # Error handling
- pass
-
-import torch
-
-import ffmpeg
-import numpy as np
-
-from src.utils import format_timestamp
-from enum import Enum
-
-class NonSpeechStrategy(Enum):
- """
-    Ignore non-speech segments.
- """
- SKIP = 1
- """
- Just treat non-speech segments as speech.
- """
- CREATE_SEGMENT = 2
- """
- Expand speech segments into subsequent non-speech segments.
- """
- EXPAND_SEGMENT = 3
-
-# Defaults for Silero
-SPEECH_TRESHOLD = 0.3
-
-# Minimum size of segments to process
-MIN_SEGMENT_DURATION = 1
-
-# The maximum time for texts from old segments to be used in the next segment
-MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled)
-PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this
-
-VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio
-
-class TranscriptionConfig(ABC):
- def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None):
- self.non_speech_strategy = non_speech_strategy
- self.segment_padding_left = segment_padding_left
- self.segment_padding_right = segment_padding_right
- self.max_silent_period = max_silent_period
- self.max_merge_size = max_merge_size
- self.max_prompt_window = max_prompt_window
-
-class PeriodicTranscriptionConfig(TranscriptionConfig):
- def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP,
- segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None,
- max_merge_size: float = None, max_prompt_window: float = None):
- super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window)
- self.periodic_duration = periodic_duration
-
-class AbstractTranscription(ABC):
- def __init__(self, sampling_rate: int = 16000):
- self.sampling_rate = sampling_rate
-
-    def get_audio_segment(self, audio: str, start_time: str = None, duration: str = None):
-        return load_audio(audio, self.sampling_rate, start_time, duration)
-
- @abstractmethod
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig):
- """
- Get the start and end timestamps of the sections that should be transcribed by this VAD method.
-
- Parameters
- ----------
- audio: str
- The audio file.
- config: TranscriptionConfig
- The transcription configuration.
-
- Returns
- -------
- A list of start and end timestamps, in fractional seconds.
- """
- return
-
- def transcribe(self, audio: str, whisperCallable, config: TranscriptionConfig):
- """
-        Transcribe the given audio file.
-
- Parameters
- ----------
- audio: str
- The audio file.
-
- whisperCallable: Callable[[Union[str, np.ndarray, torch.Tensor], int, str, str], dict[str, Union[dict, Any]]]
-            The callback that is used to invoke Whisper on an audio file/buffer. The first parameter is the audio file/buffer, the second is the segment index,
-            the third is an optional text prompt, and the last is the currently detected language. The return value is the result of the Whisper call.
-
- Returns
- -------
-        The merged transcription result, as a dict with 'text', 'segments' and 'language' entries.
- """
-
- # get speech timestamps from full audio file
- seconds_timestamps = self.get_transcribe_timestamps(audio, config)
-
- #for seconds_timestamp in seconds_timestamps:
- # print("VAD timestamp ", format_timestamp(seconds_timestamp['start']), " to ", format_timestamp(seconds_timestamp['end']))
-
- merged = merge_timestamps(seconds_timestamps, config.max_silent_period, config.max_merge_size, config.segment_padding_left, config.segment_padding_right)
-
- # A deque of transcribed segments that is passed to the next segment as a prompt
- prompt_window = deque()
-
- print("Timestamps:")
- pprint(merged)
-
- if config.non_speech_strategy != NonSpeechStrategy.SKIP:
- max_audio_duration = get_audio_duration(audio)
-
- # Expand segments to include the gaps between them
- if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between each segment if we exceed the merge size
- merged = self.fill_gaps(merged, total_duration=max_audio_duration, max_expand_size=config.max_merge_size)
- elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
- # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
- merged = self.expand_gaps(merged, total_duration=max_audio_duration)
- else:
- raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
- print("Transcribing non-speech:")
- pprint(merged)
-
- result = {
- 'text': "",
- 'segments': [],
- 'language': ""
- }
- languageCounter = Counter()
- detected_language = None
-
- segment_index = -1
-
- # For each time segment, run whisper
- for segment in merged:
- segment_index += 1
- segment_start = segment['start']
- segment_end = segment['end']
- segment_expand_amount = segment.get('expand_amount', 0)
- segment_gap = segment.get('gap', False)
-
- segment_duration = segment_end - segment_start
-
- if segment_duration < MIN_SEGMENT_DURATION:
-                continue
-
- # Audio to run on Whisper
- segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
- # Previous segments to use as a prompt
- segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
- # Detected language
- detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
- print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
- segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
- segment_result = whisperCallable(segment_audio, segment_index, segment_prompt, detected_language)
-
- adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration)
-
- # Propagate expand amount to the segments
- if (segment_expand_amount > 0):
- segment_without_expansion = segment_duration - segment_expand_amount
-
- for adjusted_segment in adjusted_segments:
- adjusted_segment_end = adjusted_segment['end']
-
- # Add expand amount if the segment got expanded
- if (adjusted_segment_end > segment_without_expansion):
- adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion
-
- # Append to output
- result['text'] += segment_result['text']
- result['segments'].extend(adjusted_segments)
-
- # Increment detected language
- if not segment_gap:
- languageCounter[segment_result['language']] += 1
-
- # Update prompt window
- self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config)
-
- if detected_language is not None:
- result['language'] = detected_language
-
- return result
-
- def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig):
- if (config.max_prompt_window is not None and config.max_prompt_window > 0):
- # Add segments to the current prompt window (unless it is a speech gap)
- if not segment_gap:
- for segment in adjusted_segments:
- if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB:
- prompt_window.append(segment)
-
- while (len(prompt_window) > 0):
- first_end_time = prompt_window[0].get('end', 0)
- # Time expanded in the segments should be discounted from the prompt window
- first_expand_time = prompt_window[0].get('expand_amount', 0)
-
- if (first_end_time - first_expand_time < segment_end - config.max_prompt_window):
- prompt_window.popleft()
- else:
- break
-
- def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float):
- result = []
- last_end_time = 0
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- if (last_end_time != segment_start):
- delta = segment_start - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } )
-
- last_end_time = segment_end
- result.append(segment)
-
- # Also include total duration if specified
- if (total_duration is not None and last_end_time < total_duration):
-            delta = total_duration - last_end_time
-
- if (min_gap_length is None or delta >= min_gap_length):
- result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } )
-
- return result
-
- # Expand the end time of each segment to the start of the next segment
- def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- # Expand if the gap actually exists
- if (delta >= 0):
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
-
- result.append(current_segment)
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- if (last_segment['end'] < total_duration):
- last_segment = last_segment.copy()
- last_segment['end'] = total_duration
- result[-1] = last_segment
-
- return result
-
- def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None):
- result = []
-
- if len(segments) == 0:
- return result
-
- # Add gap at the beginning if needed
- if (segments[0]['start'] > 0):
- result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } )
-
- for i in range(len(segments) - 1):
- expanded = False
- current_segment = segments[i]
- next_segment = segments[i + 1]
-
- delta = next_segment['start'] - current_segment['end']
-
- if (max_expand_size is not None and delta <= max_expand_size):
- # Just expand the current segment
- current_segment = current_segment.copy()
- current_segment['expand_amount'] = delta
- current_segment['end'] = next_segment['start']
- expanded = True
-
- result.append(current_segment)
-
- # Add a gap to the next segment if needed
- if (delta >= 0 and not expanded):
- result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } )
-
- # Add last segment
- last_segment = segments[-1]
- result.append(last_segment)
-
- # Also include total duration if specified
- if (total_duration is not None):
- last_segment = result[-1]
-
- delta = total_duration - last_segment['end']
-
- if (delta > 0):
- if (max_expand_size is not None and delta <= max_expand_size):
- # Expand the last segment
- last_segment = last_segment.copy()
- last_segment['expand_amount'] = delta
- last_segment['end'] = total_duration
- result[-1] = last_segment
- else:
- result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } )
-
- return result
-
- def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None):
- result = []
-
- for segment in segments:
- segment_start = float(segment['start'])
- segment_end = float(segment['end'])
-
- # Filter segments?
- if (max_source_time is not None):
- if (segment_start > max_source_time):
- continue
- segment_end = min(max_source_time, segment_end)
-
- new_segment = segment.copy()
-
- # Add to start and end
- new_segment['start'] = segment_start + adjust_seconds
- new_segment['end'] = segment_end + adjust_seconds
- result.append(new_segment)
- return result
-
- def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
- result = []
-
- for entry in timestamps:
- start = entry['start']
- end = entry['end']
-
- result.append({
- 'start': start * factor,
- 'end': end * factor
- })
- return result
-
-class VadSileroTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- self.model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
- (self.get_speech_timestamps, _, _, _, _) = utils
-
-
- def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig):
- audio_duration = get_audio_duration(audio)
- result = []
-
-        # Divide processing of the audio into chunks
- chunk_start = 0.0
-
- while (chunk_start < audio_duration):
- chunk_duration = min(audio_duration - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
- print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
- wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
- sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
- seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
- adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
- #pprint(adjusted)
-
- result.extend(adjusted)
- chunk_start += chunk_duration
-
- return result
-
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
- def __init__(self, sampling_rate: int = 16000):
- super().__init__(sampling_rate=sampling_rate)
-
- def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig):
- # Get duration in seconds
- audio_duration = get_audio_duration(audio)
- result = []
-
- # Generate a timestamp every N seconds
- start_timestamp = 0
-
- while (start_timestamp < audio_duration):
- end_timestamp = min(start_timestamp + config.periodic_duration, audio_duration)
- segment_duration = end_timestamp - start_timestamp
-
- # Minimum duration is 1 second
- if (segment_duration >= 1):
- result.append( { 'start': start_timestamp, 'end': end_timestamp } )
-
- start_timestamp = end_timestamp
-
- return result
-
-def get_audio_duration(file: str):
- return float(ffmpeg.probe(file)["format"]["duration"])
-
-def load_audio(file: str, sample_rate: int = 16000,
- start_time: str = None, duration: str = None):
- """
- Open an audio file and read as mono waveform, resampling as necessary
-
- Parameters
- ----------
- file: str
- The audio file to open
-
-    sample_rate: int
- The sample rate to resample the audio if necessary
-
- start_time: str
- The start time, using the standard FFMPEG time duration syntax, or None to disable.
-
- duration: str
- The duration, using the standard FFMPEG time duration syntax, or None to disable.
-
- Returns
- -------
- A NumPy array containing the audio waveform, in float32 dtype.
- """
- try:
- inputArgs = {'threads': 0}
-
- if (start_time is not None):
- inputArgs['ss'] = start_time
- if (duration is not None):
- inputArgs['t'] = duration
-
- # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
- # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
- out, _ = (
- ffmpeg.input(file, **inputArgs)
- .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate)
- .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True)
- )
- except ffmpeg.Error as e:
- raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}")
-
- return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
\ No newline at end of file
diff --git a/spaces/Ariharasudhan/YoloV5/utils/loss.py b/spaces/Ariharasudhan/YoloV5/utils/loss.py
deleted file mode 100644
index 9b9c3d9f80181d1ad5b54d2700f32ba042368c31..0000000000000000000000000000000000000000
--- a/spaces/Ariharasudhan/YoloV5/utils/loss.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Loss functions
-"""
-
-import torch
-import torch.nn as nn
-
-from utils.metrics import bbox_iou
-from utils.torch_utils import de_parallel
-
-
-def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441
- # return positive, negative label smoothing BCE targets
- return 1.0 - 0.5 * eps, 0.5 * eps
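-
-# Illustrative worked example (simple arithmetic, not a claim about other defaults):
-# with eps=0.1, smooth_BCE returns positive = 1.0 - 0.5 * 0.1 = 0.95 and
-# negative = 0.5 * 0.1 = 0.05; ComputeLoss below stores these as self.cp and
-# self.cn and uses them as the smoothed classification targets.
-# cp, cn = smooth_BCE(0.1)  # -> (0.95, 0.05)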
-
-
-class BCEBlurWithLogitsLoss(nn.Module):
-    # BCEWithLogitsLoss() with reduced missing label effects.
- def __init__(self, alpha=0.05):
- super().__init__()
- self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss()
- self.alpha = alpha
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- pred = torch.sigmoid(pred) # prob from logits
- dx = pred - true # reduce only missing label effects
- # dx = (pred - true).abs() # reduce missing label and false label effects
- alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
- loss *= alpha_factor
- return loss.mean()
-
-
-class FocalLoss(nn.Module):
- # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super().__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
- # p_t = torch.exp(-loss)
- # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability
-
- # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
- pred_prob = torch.sigmoid(pred) # prob from logits
- p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = (1.0 - p_t) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
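-
-    # Note (illustrative): assuming loss_fcn is nn.BCEWithLogitsLoss as required above,
-    # alpha_factor and modulating_factor implement the focal loss
-    # FL(p_t) = -alpha_t * (1 - p_t) ** gamma * log(p_t), where p_t is the predicted
-    # probability of the true class, so well-classified examples (p_t close to 1)
-    # are strongly down-weighted.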
-
-
-class QFocalLoss(nn.Module):
- # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
- super().__init__()
- self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss()
- self.gamma = gamma
- self.alpha = alpha
- self.reduction = loss_fcn.reduction
- self.loss_fcn.reduction = 'none' # required to apply FL to each element
-
- def forward(self, pred, true):
- loss = self.loss_fcn(pred, true)
-
- pred_prob = torch.sigmoid(pred) # prob from logits
- alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
- modulating_factor = torch.abs(true - pred_prob) ** self.gamma
- loss *= alpha_factor * modulating_factor
-
- if self.reduction == 'mean':
- return loss.mean()
- elif self.reduction == 'sum':
- return loss.sum()
- else: # 'none'
- return loss
-
-
-class ComputeLoss:
- sort_obj_iou = False
-
- # Compute losses
- def __init__(self, model, autobalance=False):
- device = next(model.parameters()).device # get model device
- h = model.hyp # hyperparameters
-
- # Define criteria
- BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device))
- BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device))
-
- # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3
- self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets
-
- # Focal loss
- g = h['fl_gamma'] # focal loss gamma
- if g > 0:
- BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g)
-
- m = de_parallel(model).model[-1] # Detect() module
- self.balance = {3: [4.0, 1.0, 0.4]}.get(m.nl, [4.0, 1.0, 0.25, 0.06, 0.02]) # P3-P7
- self.ssi = list(m.stride).index(16) if autobalance else 0 # stride 16 index
- self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, 1.0, h, autobalance
- self.na = m.na # number of anchors
- self.nc = m.nc # number of classes
- self.nl = m.nl # number of layers
- self.anchors = m.anchors
- self.device = device
-
- def __call__(self, p, targets): # predictions, targets
- lcls = torch.zeros(1, device=self.device) # class loss
- lbox = torch.zeros(1, device=self.device) # box loss
- lobj = torch.zeros(1, device=self.device) # object loss
- tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets
-
- # Losses
- for i, pi in enumerate(p): # layer index, layer predictions
- b, a, gj, gi = indices[i] # image, anchor, gridy, gridx
- tobj = torch.zeros(pi.shape[:4], dtype=pi.dtype, device=self.device) # target obj
-
- n = b.shape[0] # number of targets
- if n:
- # pxy, pwh, _, pcls = pi[b, a, gj, gi].tensor_split((2, 4, 5), dim=1) # faster, requires torch 1.8.0
- pxy, pwh, _, pcls = pi[b, a, gj, gi].split((2, 2, 1, self.nc), 1) # target-subset of predictions
-
- # Regression
- pxy = pxy.sigmoid() * 2 - 0.5
- pwh = (pwh.sigmoid() * 2) ** 2 * anchors[i]
- pbox = torch.cat((pxy, pwh), 1) # predicted box
- iou = bbox_iou(pbox, tbox[i], CIoU=True).squeeze() # iou(prediction, target)
- lbox += (1.0 - iou).mean() # iou loss
-
- # Objectness
- iou = iou.detach().clamp(0).type(tobj.dtype)
- if self.sort_obj_iou:
- j = iou.argsort()
- b, a, gj, gi, iou = b[j], a[j], gj[j], gi[j], iou[j]
- if self.gr < 1:
- iou = (1.0 - self.gr) + self.gr * iou
- tobj[b, a, gj, gi] = iou # iou ratio
-
- # Classification
- if self.nc > 1: # cls loss (only if multiple classes)
- t = torch.full_like(pcls, self.cn, device=self.device) # targets
- t[range(n), tcls[i]] = self.cp
- lcls += self.BCEcls(pcls, t) # BCE
-
- # Append targets to text file
- # with open('targets.txt', 'a') as file:
- # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)]
-
- obji = self.BCEobj(pi[..., 4], tobj)
- lobj += obji * self.balance[i] # obj loss
- if self.autobalance:
- self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item()
-
- if self.autobalance:
- self.balance = [x / self.balance[self.ssi] for x in self.balance]
- lbox *= self.hyp['box']
- lobj *= self.hyp['obj']
- lcls *= self.hyp['cls']
- bs = tobj.shape[0] # batch size
-
- return (lbox + lobj + lcls) * bs, torch.cat((lbox, lobj, lcls)).detach()
-
- def build_targets(self, p, targets):
- # Build targets for compute_loss(), input targets(image,class,x,y,w,h)
- na, nt = self.na, targets.shape[0] # number of anchors, targets
- tcls, tbox, indices, anch = [], [], [], []
- gain = torch.ones(7, device=self.device) # normalized to gridspace gain
- ai = torch.arange(na, device=self.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt)
- targets = torch.cat((targets.repeat(na, 1, 1), ai[..., None]), 2) # append anchor indices
-
- g = 0.5 # bias
- off = torch.tensor(
- [
- [0, 0],
- [1, 0],
- [0, 1],
- [-1, 0],
- [0, -1], # j,k,l,m
- # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm
- ],
- device=self.device).float() * g # offsets
-
- for i in range(self.nl):
- anchors, shape = self.anchors[i], p[i].shape
- gain[2:6] = torch.tensor(shape)[[3, 2, 3, 2]] # xyxy gain
-
- # Match targets to anchors
- t = targets * gain # shape(3,n,7)
- if nt:
- # Matches
- r = t[..., 4:6] / anchors[:, None] # wh ratio
- j = torch.max(r, 1 / r).max(2)[0] < self.hyp['anchor_t'] # compare
- # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2))
- t = t[j] # filter
-
- # Offsets
- gxy = t[:, 2:4] # grid xy
- gxi = gain[[2, 3]] - gxy # inverse
- j, k = ((gxy % 1 < g) & (gxy > 1)).T
- l, m = ((gxi % 1 < g) & (gxi > 1)).T
- j = torch.stack((torch.ones_like(j), j, k, l, m))
- t = t.repeat((5, 1, 1))[j]
- offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j]
- else:
- t = targets[0]
- offsets = 0
-
- # Define
- bc, gxy, gwh, a = t.chunk(4, 1) # (image, class), grid xy, grid wh, anchors
- a, (b, c) = a.long().view(-1), bc.long().T # anchors, image, class
- gij = (gxy - offsets).long()
- gi, gj = gij.T # grid indices
-
- # Append
- indices.append((b, a, gj.clamp_(0, shape[2] - 1), gi.clamp_(0, shape[3] - 1))) # image, anchor, grid
- tbox.append(torch.cat((gxy - gij, gwh), 1)) # box
- anch.append(anchors[a]) # anchors
- tcls.append(c) # class
-
- return tcls, tbox, indices, anch
diff --git a/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/conv.py b/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/Arnx/MusicGenXvAKN/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
-        # We already checked that norm is in CONV_NORMALIZATIONS, so any other choice
- # doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
-    module is causal, or raise an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once you removed padding, we are missing one time step !
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
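-
-# Illustrative example: for an input of length 5 with kernel_size=4, stride=2 and
-# padding_total=0, n_frames = (5 - 4 + 0) / 2 + 1 = 1.5, the ideal length is
-# (ceil(1.5) - 1) * 2 + 4 = 6, so one extra zero is appended on the right and
-# pad_for_conv1d(x, 4, 2) returns a tensor of length 6.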
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If this is the case, we insert extra 0 padding to the right before the reflection happen.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
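-
-# Illustrative sketch: reflect padding in F.pad requires the padding to be smaller
-# than the input length. For a length-2 input with paddings=(3, 3), pad1d first
-# right-pads with 3 - 2 + 1 = 2 zeros, applies the reflect padding, and finally
-# trims those 2 extra samples from the end of the padded result.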
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
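-
-    # Illustrative sketch: with kernel_size=4, dilation=2 and stride=1, the effective
-    # kernel size is (4 - 1) * 2 + 1 = 7 and padding_total = 7 - 1 = 6; in causal mode
-    # all 6 samples are padded on the left, otherwise they are split as 3 on the left
-    # and 3 (plus any extra_padding) on the right.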
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
diff --git a/spaces/Ash123/stable-diffusion-nano/share_btn.py b/spaces/Ash123/stable-diffusion-nano/share_btn.py
deleted file mode 100644
index 4c9aa8a91b1d0f86746fb118c19b03df86d424a3..0000000000000000000000000000000000000000
--- a/spaces/Ash123/stable-diffusion-nano/share_btn.py
+++ /dev/null
@@ -1,60 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- const gradioEl = document.querySelector('body > gradio-app');
- const imgEls = gradioEl.querySelectorAll('#gallery img');
- const promptTxt = gradioEl.querySelector('#prompt-text-input input').value;
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!imgEls.length){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const files = await Promise.all(
- [...imgEls].map(async (imgEl) => {
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const fileName = `diffuse-the-rest-${{imgId}}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- })
- );
- const urls = await Promise.all(files.map((f) => uploadFile(f)));
- const htmlImgs = urls.map(url => ``);
- const descriptionMd = `
-${htmlImgs.join(`\n`)}
-
`;
- const params = new URLSearchParams({
- title: promptTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/appdirs.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/appdirs.py
deleted file mode 100644
index 16933bf8afedcbe3e9d4fcc04e5f7246228c56fc..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/appdirs.py
+++ /dev/null
@@ -1,52 +0,0 @@
-"""
-This code wraps the vendored appdirs module so that the return values are
-compatible with the current pip code base.
-
-The intention is to rewrite current usages gradually, keeping the tests pass,
-and eventually drop this after all usages are changed.
-"""
-
-import os
-import sys
-from typing import List
-
-from pip._vendor import platformdirs as _appdirs
-
-
-def user_cache_dir(appname: str) -> str:
- return _appdirs.user_cache_dir(appname, appauthor=False)
-
-
-def _macos_user_config_dir(appname: str, roaming: bool = True) -> str:
- # Use ~/Application Support/pip, if the directory exists.
- path = _appdirs.user_data_dir(appname, appauthor=False, roaming=roaming)
- if os.path.isdir(path):
- return path
-
- # Use a Linux-like ~/.config/pip, by default.
- linux_like_path = "~/.config/"
- if appname:
- linux_like_path = os.path.join(linux_like_path, appname)
-
- return os.path.expanduser(linux_like_path)
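-
-# Illustrative sketch: on macOS, user_config_dir("pip") prefers
-# "~/Library/Application Support/pip" when that directory already exists, and
-# otherwise falls back to the Linux-like "~/.config/pip" path built above.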
-
-
-def user_config_dir(appname: str, roaming: bool = True) -> str:
- if sys.platform == "darwin":
- return _macos_user_config_dir(appname, roaming)
-
- return _appdirs.user_config_dir(appname, appauthor=False, roaming=roaming)
-
-
-# for the discussion regarding site_config_dir locations
-# see
-def site_config_dirs(appname: str) -> List[str]:
- if sys.platform == "darwin":
- return [_appdirs.site_data_dir(appname, appauthor=False, multipath=True)]
-
- dirval = _appdirs.site_config_dir(appname, appauthor=False, multipath=True)
- if sys.platform == "win32":
- return [dirval]
-
- # Unix-y system. Look in /etc as well.
- return dirval.split(os.pathsep) + ["/etc"]
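-
-# Illustrative sketch: on a typical Linux system, site_config_dirs("pip") splits the
-# multipath value on os.pathsep and appends "/etc", e.g. ["/etc/xdg/pip", "/etc"]
-# (the exact first entries depend on XDG_CONFIG_DIRS).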
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/tomli/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/tomli/__init__.py
deleted file mode 100644
index 4c6ec97ec6961bcf184b6e0b2437b9924db0b9de..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/tomli/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-__all__ = ("loads", "load", "TOMLDecodeError")
-__version__ = "2.0.1" # DO NOT EDIT THIS LINE MANUALLY. LET bump2version UTILITY DO IT
-
-from ._parser import TOMLDecodeError, load, loads
-
-# Pretend this exception was created here.
-TOMLDecodeError.__module__ = __name__
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
deleted file mode 100644
index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 4 # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_backbone.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_backbone.py
deleted file mode 100644
index 3bb100f9bd5b4939e4646821c5a60d51c8ea65fd..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_backbone.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import unittest
-import torch
-
-import detectron2.export.torchscript # apply patch # noqa
-from detectron2 import model_zoo
-from detectron2.config import get_cfg
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.backbone import build_resnet_backbone
-from detectron2.modeling.backbone.fpn import build_resnet_fpn_backbone
-
-
-class TestBackBone(unittest.TestCase):
- def test_resnet_scriptability(self):
- cfg = get_cfg()
- resnet = build_resnet_backbone(cfg, ShapeSpec(channels=3))
-
- scripted_resnet = torch.jit.script(resnet)
-
- inp = torch.rand(2, 3, 100, 100)
- out1 = resnet(inp)["res4"]
- out2 = scripted_resnet(inp)["res4"]
- self.assertTrue(torch.allclose(out1, out2))
-
- def test_fpn_scriptability(self):
- cfg = model_zoo.get_config("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml")
- bb = build_resnet_fpn_backbone(cfg, ShapeSpec(channels=3))
- bb_s = torch.jit.script(bb)
-
- inp = torch.rand(2, 3, 128, 128)
- out1 = bb(inp)["p5"]
- out2 = bb_s(inp)["p5"]
- self.assertTrue(torch.allclose(out1, out2))
diff --git a/spaces/Bart92/RVC_HF/demucs/__main__.py b/spaces/Bart92/RVC_HF/demucs/__main__.py
deleted file mode 100644
index 5148f20623bdaa827777558844796ded1876d7d0..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/demucs/__main__.py
+++ /dev/null
@@ -1,317 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import json
-import math
-import os
-import sys
-import time
-from dataclasses import dataclass, field
-
-import torch as th
-from torch import distributed, nn
-from torch.nn.parallel.distributed import DistributedDataParallel
-
-from .augment import FlipChannels, FlipSign, Remix, Scale, Shift
-from .compressed import get_compressed_datasets
-from .model import Demucs
-from .parser import get_name, get_parser
-from .raw import Rawset
-from .repitch import RepitchedWrapper
-from .pretrained import load_pretrained, SOURCES
-from .tasnet import ConvTasNet
-from .test import evaluate
-from .train import train_model, validate_model
-from .utils import (human_seconds, load_model, save_model, get_state,
- save_state, sizeof_fmt, get_quantizer)
-from .wav import get_wav_datasets, get_musdb_wav_datasets
-
-
-@dataclass
-class SavedState:
- metrics: list = field(default_factory=list)
- last_state: dict = None
- best_state: dict = None
- optimizer: dict = None
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
- name = get_name(parser, args)
- print(f"Experiment {name}")
-
- if args.musdb is None and args.rank == 0:
- print(
- "You must provide the path to the MusDB dataset with the --musdb flag. "
- "To download the MusDB dataset, see https://sigsep.github.io/datasets/musdb.html.",
- file=sys.stderr)
- sys.exit(1)
-
- eval_folder = args.evals / name
- eval_folder.mkdir(exist_ok=True, parents=True)
- args.logs.mkdir(exist_ok=True)
- metrics_path = args.logs / f"{name}.json"
- eval_folder.mkdir(exist_ok=True, parents=True)
- args.checkpoints.mkdir(exist_ok=True, parents=True)
- args.models.mkdir(exist_ok=True, parents=True)
-
- if args.device is None:
- device = "cpu"
- if th.cuda.is_available():
- device = "cuda"
- else:
- device = args.device
-
- th.manual_seed(args.seed)
-    # Prevents too many threads from being started when running `museval`, as that can be quite
- # inefficient on NUMA architectures.
- os.environ["OMP_NUM_THREADS"] = "1"
- os.environ["MKL_NUM_THREADS"] = "1"
-
- if args.world_size > 1:
- if device != "cuda" and args.rank == 0:
- print("Error: distributed training is only available with cuda device", file=sys.stderr)
- sys.exit(1)
- th.cuda.set_device(args.rank % th.cuda.device_count())
- distributed.init_process_group(backend="nccl",
- init_method="tcp://" + args.master,
- rank=args.rank,
- world_size=args.world_size)
-
- checkpoint = args.checkpoints / f"{name}.th"
- checkpoint_tmp = args.checkpoints / f"{name}.th.tmp"
- if args.restart and checkpoint.exists() and args.rank == 0:
- checkpoint.unlink()
-
- if args.test or args.test_pretrained:
- args.epochs = 1
- args.repeat = 0
- if args.test:
- model = load_model(args.models / args.test)
- else:
- model = load_pretrained(args.test_pretrained)
- elif args.tasnet:
- model = ConvTasNet(audio_channels=args.audio_channels,
- samplerate=args.samplerate, X=args.X,
- segment_length=4 * args.samples,
- sources=SOURCES)
- else:
- model = Demucs(
- audio_channels=args.audio_channels,
- channels=args.channels,
- context=args.context,
- depth=args.depth,
- glu=args.glu,
- growth=args.growth,
- kernel_size=args.kernel_size,
- lstm_layers=args.lstm_layers,
- rescale=args.rescale,
- rewrite=args.rewrite,
- stride=args.conv_stride,
- resample=args.resample,
- normalize=args.normalize,
- samplerate=args.samplerate,
- segment_length=4 * args.samples,
- sources=SOURCES,
- )
- model.to(device)
- if args.init:
- model.load_state_dict(load_pretrained(args.init).state_dict())
-
- if args.show:
- print(model)
- size = sizeof_fmt(4 * sum(p.numel() for p in model.parameters()))
- print(f"Model size {size}")
- return
-
- try:
- saved = th.load(checkpoint, map_location='cpu')
- except IOError:
- saved = SavedState()
-
- optimizer = th.optim.Adam(model.parameters(), lr=args.lr)
-
- quantizer = None
- quantizer = get_quantizer(model, args, optimizer)
-
- if saved.last_state is not None:
- model.load_state_dict(saved.last_state, strict=False)
- if saved.optimizer is not None:
- optimizer.load_state_dict(saved.optimizer)
-
- model_name = f"{name}.th"
- if args.save_model:
- if args.rank == 0:
- model.to("cpu")
- model.load_state_dict(saved.best_state)
- save_model(model, quantizer, args, args.models / model_name)
- return
- elif args.save_state:
- model_name = f"{args.save_state}.th"
- if args.rank == 0:
- model.to("cpu")
- model.load_state_dict(saved.best_state)
- state = get_state(model, quantizer)
- save_state(state, args.models / model_name)
- return
-
- if args.rank == 0:
- done = args.logs / f"{name}.done"
- if done.exists():
- done.unlink()
-
- augment = [Shift(args.data_stride)]
- if args.augment:
- augment += [FlipSign(), FlipChannels(), Scale(),
- Remix(group_size=args.remix_group_size)]
- augment = nn.Sequential(*augment).to(device)
- print("Agumentation pipeline:", augment)
-
- if args.mse:
- criterion = nn.MSELoss()
- else:
- criterion = nn.L1Loss()
-
- # Setting number of samples so that all convolution windows are full.
-    # Prevents a hard-to-debug mistake where the prediction is shifted compared
- # to the input mixture.
- samples = model.valid_length(args.samples)
- print(f"Number of training samples adjusted to {samples}")
- samples = samples + args.data_stride
- if args.repitch:
- # We need a bit more audio samples, to account for potential
- # tempo change.
- samples = math.ceil(samples / (1 - 0.01 * args.max_tempo))
-
- args.metadata.mkdir(exist_ok=True, parents=True)
- if args.raw:
- train_set = Rawset(args.raw / "train",
- samples=samples,
- channels=args.audio_channels,
- streams=range(1, len(model.sources) + 1),
- stride=args.data_stride)
-
- valid_set = Rawset(args.raw / "valid", channels=args.audio_channels)
- elif args.wav:
- train_set, valid_set = get_wav_datasets(args, samples, model.sources)
- elif args.is_wav:
- train_set, valid_set = get_musdb_wav_datasets(args, samples, model.sources)
- else:
- train_set, valid_set = get_compressed_datasets(args, samples)
-
- if args.repitch:
- train_set = RepitchedWrapper(
- train_set,
- proba=args.repitch,
- max_tempo=args.max_tempo)
-
- best_loss = float("inf")
- for epoch, metrics in enumerate(saved.metrics):
- print(f"Epoch {epoch:03d}: "
- f"train={metrics['train']:.8f} "
- f"valid={metrics['valid']:.8f} "
- f"best={metrics['best']:.4f} "
- f"ms={metrics.get('true_model_size', 0):.2f}MB "
- f"cms={metrics.get('compressed_model_size', 0):.2f}MB "
- f"duration={human_seconds(metrics['duration'])}")
- best_loss = metrics['best']
-
- if args.world_size > 1:
- dmodel = DistributedDataParallel(model,
- device_ids=[th.cuda.current_device()],
- output_device=th.cuda.current_device())
- else:
- dmodel = model
-
- for epoch in range(len(saved.metrics), args.epochs):
- begin = time.time()
- model.train()
- train_loss, model_size = train_model(
- epoch, train_set, dmodel, criterion, optimizer, augment,
- quantizer=quantizer,
- batch_size=args.batch_size,
- device=device,
- repeat=args.repeat,
- seed=args.seed,
- diffq=args.diffq,
- workers=args.workers,
- world_size=args.world_size)
- model.eval()
- valid_loss = validate_model(
- epoch, valid_set, model, criterion,
- device=device,
- rank=args.rank,
- split=args.split_valid,
- overlap=args.overlap,
- world_size=args.world_size)
-
- ms = 0
- cms = 0
- if quantizer and args.rank == 0:
- ms = quantizer.true_model_size()
- cms = quantizer.compressed_model_size(num_workers=min(40, args.world_size * 10))
-
- duration = time.time() - begin
- if valid_loss < best_loss and ms <= args.ms_target:
- best_loss = valid_loss
- saved.best_state = {
- key: value.to("cpu").clone()
- for key, value in model.state_dict().items()
- }
-
- saved.metrics.append({
- "train": train_loss,
- "valid": valid_loss,
- "best": best_loss,
- "duration": duration,
- "model_size": model_size,
- "true_model_size": ms,
- "compressed_model_size": cms,
- })
- if args.rank == 0:
- json.dump(saved.metrics, open(metrics_path, "w"))
-
- saved.last_state = model.state_dict()
- saved.optimizer = optimizer.state_dict()
- if args.rank == 0 and not args.test:
- th.save(saved, checkpoint_tmp)
- checkpoint_tmp.rename(checkpoint)
-
- print(f"Epoch {epoch:03d}: "
- f"train={train_loss:.8f} valid={valid_loss:.8f} best={best_loss:.4f} ms={ms:.2f}MB "
- f"cms={cms:.2f}MB "
- f"duration={human_seconds(duration)}")
-
- if args.world_size > 1:
- distributed.barrier()
-
- del dmodel
- model.load_state_dict(saved.best_state)
- if args.eval_cpu:
- device = "cpu"
- model.to(device)
- model.eval()
- evaluate(model, args.musdb, eval_folder,
- is_wav=args.is_wav,
- rank=args.rank,
- world_size=args.world_size,
- device=device,
- save=args.save,
- split=args.split_valid,
- shifts=args.shifts,
- overlap=args.overlap,
- workers=args.eval_workers)
- model.to("cpu")
- if args.rank == 0:
- if not (args.test or args.test_pretrained):
- save_model(model, quantizer, args, args.models / model_name)
- print("done")
- done.write_text("done")
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Benson/text-generation/Examples/Blockman Go Skyblock Hack Apk.md b/spaces/Benson/text-generation/Examples/Blockman Go Skyblock Hack Apk.md
deleted file mode 100644
index ffd30e7f784e1f50f3f218bc95434a3ff33fad75..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Blockman Go Skyblock Hack Apk.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Blockman Go Skyblock Hack Apk: How to Get Unlimited Money and More
-
Do you love playing Blockman Go Skyblock, but wish you had more resources and options to enjoy the game? If so, you might be interested in Blockman Go Skyblock Hack Apk, a modified version of the game that gives you unlimited money, a mod menu, and no ads. In this article, we will tell you what Blockman Go Skyblock is, why you might want the hack apk, how to download and install it, and how to use it. Let's get started!
Blockman Go Skyblock is a popular mobile game that lets players build and manage their own virtual cities. The game is based on Minecraft's Skyblock mode, where you start with a small island in the sky and have to expand it using limited resources. You can also interact with other players, join minigames, trade items, and chat with friends.
-
The game has many features that make it fun and addictive, such as:
-
-
A variety of blocks and items to craft and use
-
Customizable skins and outfits for your character
-
Different modes and maps to explore and play
-
A friendly and active community of players
-
Regular updates and events
-
-
Why Do You Need Blockman Go Skyblock Hack Apk?
-
Although Blockman Go Skyblock is a great game, it also has some drawbacks that can limit your enjoyment. For example, you need to spend real money or watch ads to get more coins, which are used to buy blocks, items, skins, and other things. You also have to deal with annoying ads that pop up from time to time. And some of the mod features are locked or restricted unless you pay for them.
-
-
-
That is why you might want to try Blockman Go Skyblock Hack Apk, a modified version of the game that gives you access to unlimited money, a mod menu, and no ads. With this hack apk, you can:
Dinero ilimitado
-
-
Menú de mods
-
Puedes acceder a un menú mod que te permite personalizar y controlar varios aspectos del juego. Por ejemplo, puedes activar o desactivar la gravedad, el modo volar, el modo velocidad, el modo dios, etc. También puedes cambiar el clima, el tiempo, la dificultad, etc. También puedes generar objetos, turbas, animales, etc. El menú mod te da más libertad y diversión en el juego.
-
No hay anuncios
-
Puedes disfrutar del juego sin interrupciones ni distracciones de los anuncios. No tienes que ver ningún anuncio para obtener más monedas o desbloquear funciones. Puedes jugar el juego sin problemas y en paz.
-
How to download and install Blockman Go Skyblock Hack Apk?
-
If you are interested in downloading and installing Blockman Go Skyblock Hack Apk on your device, these are the steps you need to follow:
-
Requirements
-
-
An Android device with at least 4 GB of RAM and 100 MB of free storage space
-
A stable Internet connection
-
A file manager app
-
Installation of apps from unknown sources allowed in your device settings
-
-
Download link
-
You can download the Blockman Go Skyblock Hack Apk file from this link: Blockman Go Skyblock Hack Apk Download. This link is safe and will direct you to a reputable site where you can get the latest version of the modded apk file.
-
Installation process
-
Once you have downloaded the apk file, follow these steps to install it on your device:
-
-
Locate the apk file in your file manager app and tap on it.
-
A pop-up window will appear asking you to confirm the installation. Tap "Install".
-
Wait for the installation to finish. It may take a few seconds or minutes depending on your device.
-
Once the installation is done, tap "Open" to launch the game.
-
Enjoy Blockman Go Skyblock Hack Apk!
-
-
How to use Blockman Go Skyblock Hack Apk?
-
-
Mod menu options
-
To access the mod menu, tap the gear-shaped icon in the top right corner of the screen. A list of options will appear, such as:
-
-
Gravity: You can turn gravity on or off in the game. If you turn it off, you can fly freely.
-
Fly mode: You can enable or disable fly mode. If you enable it, you can fly by double-tapping the jump button.
-
Speed mode: You can increase or decrease your character's speed. You can choose between normal, fast, or super fast.
-
God mode: You can turn god mode on or off. If you enable it, you will be invincible and immune to any damage.
-
Weather: You can change the weather in the game. You can choose between sunny, rainy, snowy, or stormy.
-
Time: You can change the time of day in the game. You can choose between day, night, dawn, or dusk.
-
Difficulty: You can change the difficulty level. You can choose between easy, normal, hard, or extreme.
-
Spawn: You can spawn any item, mob, animal, or NPC in the game. You can choose from a wide range of options and customize their quantity and location.
-
-
You can also close the mod menu by tapping the icon again.
-
Tips and tricks
-
Here are some tips and tricks to get the most out of Blockman Go Skyblock Hack Apk:
-
-
Use the unlimited money to buy anything you want in the game. You can also use it to upgrade your island faster and more easily.
-
Use the mod menu options to customize and control your gameplay. You can also use them to have more fun and challenge yourself.
-
Take advantage of the ad-free experience to enjoy the game without interruptions or distractions. Avoiding ads also saves data and battery life.
-
-
Be careful not to abuse or misuse the mod features in the game. Other players may report you, or you may end up banned.
-
-
Conclusion
-
In conclusion, Blockman Go Skyblock Hack Apk is a way to enjoy Blockman Go Skyblock with more features and options. You get unlimited money, a mod menu, and no ads with this modified version of the game. You can also download and install it on your device easily by following our guide, and you can use it simply and safely by following our tips and tricks. What are you waiting for? Download Blockman Go Skyblock Hack Apk now and have fun!
-
Frequently asked questions
-
Here are some of the most common questions people ask about Blockman Go Skyblock Hack Apk:
-
-
Is Blockman Go Skyblock Hack Apk safe?
-
Yes, Blockman Go Skyblock Hack Apk is safe to use as long as you download it from a trusted source and follow the installation instructions. The modded apk file does not contain any virus, malware, or spyware that could damage your device or data. However, you should always be careful when downloading and installing any apk file from the Internet, and scan it with an antivirus app before opening it.
-
Is Blockman Go Skyblock Hack Apk free?
-
Yes, Blockman Go Skyblock Hack Apk is free to download and use. You do not have to pay any money or watch any ads to get the modified version of the game. You can enjoy all the features and benefits of the hack apk without spending a cent.
-
Does Blockman Go Skyblock Hack Apk work on iOS devices?
-
No, Blockman Go Skyblock Hack Apk only works on Android devices. It is not compatible with iOS devices such as iPhones or iPads. If you want to play Blockman Go Skyblock on your iOS device, you will have to download the original version of the game from the App Store.
-
Can I play Blockman Go Skyblock Hack Apk offline?
-
-
Can I update Blockman Go Skyblock Hack Apk?
-
Yes, you can update Blockman Go Skyblock Hack Apk whenever a new version is available. However, you will have to download and install the new apk file manually, as the game will not update automatically. You can check for updates by visiting the download link or following our blog.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2023 Apk.md b/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2023 Apk.md
deleted file mode 100644
index eeef76388d0950585c8d26bfefc2c7fd641ed4f8..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Conseguir Sobre l Descarga Gratuita 2023 Apk.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
Getting Over It Free Download 2023 APK: How to Play the Most Frustrating Game on Your Android Device
-
If you are looking for a game that tests your patience, skill, and sanity, look no further than Getting Over It with Bennett Foddy. This game is known for being one of the hardest and most rage-inducing games ever created. And now, you can play it on your Android device for free with the help of an APK file. In this article, we will tell you everything you need to know about Getting Over It, why you might want to play it on your mobile device, and how to download and install it for free in 2023.
-
What is Getting Over It with Bennett Foddy?
-
Getting Over It with Bennett Foddy is a game released in 2017 by indie developer Bennett Foddy. The game is a homage to a 2002 game called Sexy Hiking, which was also known for its extreme difficulty and absurdity.
The gameplay of Getting Over It is simple but challenging. You control a man named Diogenes, who is stuck in a cauldron and holds a hammer. Your goal is to use the hammer to climb over various obstacles, such as rocks, trees, pipes, and furniture. The game has no checkpoints, so if you fall, you have to start over from the beginning. The game also has no end, so you can keep climbing for as long as you want.
-
The game is designed to be frustrating and unforgiving. The physics is realistic but unpredictable, so you never know how the hammer will react to the environment. The controls are also hard to master, since you have to use the mouse or the touchscreen to swing the hammer in different directions. The game also features voice-over commentary by Bennett Foddy himself, who will mock you, encourage you, or philosophize about your progress (or lack of it).
-
The history and reception of Getting Over It
-
-
As of June 2023, the game has sold more than 5 million copies across several platforms, including Windows, Mac, iOS, and Android. The game has also won several awards, such as the Nuovo Award at the Independent Games Festival in 2018 and the Best Design Award at the Game Developers Choice Awards in 2019.
-
Why would you want to play Getting Over It on your Android device?
-
Getting Over It is a game that can be enjoyed (or suffered) on any platform, but playing it on your Android device has some advantages and disadvantages.
-
The benefits of playing Getting Over It on mobile
-
-
You can play it anywhere, anytime. You do not need a PC or a console to experience the thrill (or agony) of Getting Over It.
You can challenge yourself with a different control scheme. Using the touchscreen to swing the hammer can be more intuitive or more frustrating, depending on your preference.
You can save some money. The game costs $7.99 on Steam, but you can download it for free on your Android device with an APK file.
-
-
The challenges of playing Getting Over It on mobile
-
-
You need a compatible device. Not all Android devices can run the game smoothly, so you should check the specs and compatibility before downloading and installing the game.
You need enough storage space. The game takes up about 200 MB of space on your device, so make sure you have enough room for it.
You need a stable Internet connection. The game requires an Internet connection to verify the license and to access some features, such as leaderboards and chat.
You may run into some bugs or glitches. The game is not officially supported by the developer on Android devices, so you may experience some issues or errors while playing.
-
-
How to download and install Getting Over It APK for free in 2023
-
-
The requirements for downloading and installing Getting Over It APK
-
Before you proceed, make sure you meet the following requirements:
-
-
An Android device running Android 5.0 or higher with at least 1 GB of RAM and 200 MB of free storage space.
A reliable Internet connection to download the APK file and access the game's features.
A file manager app that can open and install APK files.
A willingness to take on the risk of downloading and installing an unofficial version of the game, which may contain malware or viruses.
-
-
The steps to download and install Getting Over It APK
-
Once you have met the requirements, follow these steps to download and install Getting Over It APK:
-
-
Step 1: Find a reliable source for Getting Over It APK
-
The first step is to find a trustworthy website that offers Getting Over It APK for free. There are many websites that claim to provide the game, but not all of them are safe or legitimate. Some of them may contain fake or outdated files, or worse, malware or viruses that can damage your device or steal your data.
-
To avoid these risks, you should do some research and check a website's reviews and ratings before downloading anything from it. You should also look for signs of credibility, such as a secure URL (https), a clear privacy policy, and contact information.
-
One example of a reliable source for Getting Over It APK is [APKPure], a popular website that offers various APK files for Android users. You can visit their website and search for Getting Over It with Bennett Foddy in their search bar.
-
Step 2: Download the APK file to your device
-
The next step is to download the APK file to your device. Once you have found a reliable source, click the download button and wait for the file to download. The file size is about 200 MB, so it may take some time depending on your Internet speed.
-
-
Step 3: Enable unknown sources in your device settings
-
The third step is to enable unknown sources in your device settings. This is necessary because Android devices normally block the installation of apps from sources other than the Google Play Store. To allow the installation of Getting Over It APK, you need to change this setting.
-
To do this, go to your device settings and look for the security or privacy option. Then find the option that says "Unknown sources" or "Install unknown apps" and toggle it on. You may see a warning message that says "Your phone and personal data are more vulnerable to attack by apps from unknown sources. You agree that you are solely responsible for any damage to your phone or loss of data that may result from using these apps." Tap OK or Allow to proceed.
Step 4: Install the APK file and launch the game
-
The final step is to install the APK file and launch the game. To do this, tap on the APK file and follow the on-screen instructions. You may see a message that says "Do you want to install this application? It does not require any special access." Tap Install and wait for the installation to finish.
-
After the installation is complete, you should see a message that says "App installed". Tap Open to launch the game, or go to your app drawer and look for the Getting Over It icon. You may also see a shortcut on your home screen.
-
Congratulations, you have successfully downloaded and installed Getting Over It APK for free in 2023. Now you can enjoy (or endure) the most frustrating game on your Android device.
-
Conclusion
-
-
In this article, we have explained what Getting Over It is, why you might want to play it on your mobile device, and how to download and install it for free in 2023. We have also pointed to a reliable source for Getting Over It APK, as well as the requirements and steps to download and install it.
-
We hope you found this article useful and informative. If you have any questions or comments, feel free to leave a comment below. And if you are brave enough to try Getting Over It on your Android device, we wish you good luck and lots of fun (or not).
-
Frequently asked questions
-
Here are some frequently asked questions about Getting Over It APK:
-
-
Q: Is Getting Over It APK safe to download and install?
- A: Getting Over It APK is safe to download and install if you get it from a reliable source, such as [APKPure]. However, you should always be careful when downloading and installing any APK file from unknown sources, as they may contain malware or viruses that can damage your device or steal your data. You should also scan the file with an antivirus app before installing it.
-
Q: Is it legal to download and install Getting Over It APK?
- A: Getting Over It APK is not legal to download and install, since it is an unofficial version of the game that violates the developer's rights and the terms of service. By downloading and installing Getting Over It APK, you risk legal action from the developer or the authorities. You are also depriving the developer of legitimate revenue from game sales. For that reason, we do not recommend or endorse downloading and installing Getting Over It APK.
-
Q: Is Getting Over It APK compatible with all Android devices?
-
-
Q: How can I update Getting Over It APK?
- A: Getting Over It APK does not have an automatic update feature, so you will have to update it manually whenever a new version is available. To do so, you will have to repeat the steps to download and install Getting Over It APK from a reliable source. You may also have to uninstall the previous version of the game before installing the new one.
-
Q: How can I uninstall Getting Over It APK?
- A: If you want to uninstall Getting Over It APK from your device, you can do so by following these steps: Go to your device settings and look for the apps or applications option. Find and tap on Getting Over It with Bennett Foddy in the list of apps. Tap Uninstall and confirm your choice. Wait for the uninstallation to finish, and then delete the APK file from your device.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/resources.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/resources.py
deleted file mode 100644
index fef52aa103ea369c96567b9af2a5a0ba14db5cb9..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/resources.py
+++ /dev/null
@@ -1,358 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2013-2017 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-from __future__ import unicode_literals
-
-import bisect
-import io
-import logging
-import os
-import pkgutil
-import sys
-import types
-import zipimport
-
-from . import DistlibException
-from .util import cached_property, get_cache_base, Cache
-
-logger = logging.getLogger(__name__)
-
-
-cache = None # created when needed
-
-
-class ResourceCache(Cache):
- def __init__(self, base=None):
- if base is None:
- # Use native string to avoid issues on 2.x: see Python #20140.
- base = os.path.join(get_cache_base(), str('resource-cache'))
- super(ResourceCache, self).__init__(base)
-
- def is_stale(self, resource, path):
- """
- Is the cache stale for the given resource?
-
- :param resource: The :class:`Resource` being cached.
- :param path: The path of the resource in the cache.
- :return: True if the cache is stale.
- """
- # Cache invalidation is a hard problem :-)
- return True
-
- def get(self, resource):
- """
-        Get a resource into the cache.
-
- :param resource: A :class:`Resource` instance.
- :return: The pathname of the resource in the cache.
- """
- prefix, path = resource.finder.get_cache_info(resource)
- if prefix is None:
- result = path
- else:
- result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
- dirname = os.path.dirname(result)
- if not os.path.isdir(dirname):
- os.makedirs(dirname)
- if not os.path.exists(result):
- stale = True
- else:
- stale = self.is_stale(resource, path)
- if stale:
- # write the bytes of the resource to the cache location
- with open(result, 'wb') as f:
- f.write(resource.bytes)
- return result
-
-
-class ResourceBase(object):
- def __init__(self, finder, name):
- self.finder = finder
- self.name = name
-
-
-class Resource(ResourceBase):
- """
- A class representing an in-package resource, such as a data file. This is
- not normally instantiated by user code, but rather by a
- :class:`ResourceFinder` which manages the resource.
- """
- is_container = False # Backwards compatibility
-
- def as_stream(self):
- """
- Get the resource as a stream.
-
- This is not a property to make it obvious that it returns a new stream
- each time.
- """
- return self.finder.get_stream(self)
-
- @cached_property
- def file_path(self):
- global cache
- if cache is None:
- cache = ResourceCache()
- return cache.get(self)
-
- @cached_property
- def bytes(self):
- return self.finder.get_bytes(self)
-
- @cached_property
- def size(self):
- return self.finder.get_size(self)
-
-
-class ResourceContainer(ResourceBase):
- is_container = True # Backwards compatibility
-
- @cached_property
- def resources(self):
- return self.finder.get_resources(self)
-
-
-class ResourceFinder(object):
- """
- Resource finder for file system resources.
- """
-
- if sys.platform.startswith('java'):
- skipped_extensions = ('.pyc', '.pyo', '.class')
- else:
- skipped_extensions = ('.pyc', '.pyo')
-
- def __init__(self, module):
- self.module = module
- self.loader = getattr(module, '__loader__', None)
- self.base = os.path.dirname(getattr(module, '__file__', ''))
-
- def _adjust_path(self, path):
- return os.path.realpath(path)
-
- def _make_path(self, resource_name):
- # Issue #50: need to preserve type of path on Python 2.x
- # like os.path._get_sep
- if isinstance(resource_name, bytes): # should only happen on 2.x
- sep = b'/'
- else:
- sep = '/'
- parts = resource_name.split(sep)
- parts.insert(0, self.base)
- result = os.path.join(*parts)
- return self._adjust_path(result)
-
- def _find(self, path):
- return os.path.exists(path)
-
- def get_cache_info(self, resource):
- return None, resource.path
-
- def find(self, resource_name):
- path = self._make_path(resource_name)
- if not self._find(path):
- result = None
- else:
- if self._is_directory(path):
- result = ResourceContainer(self, resource_name)
- else:
- result = Resource(self, resource_name)
- result.path = path
- return result
-
- def get_stream(self, resource):
- return open(resource.path, 'rb')
-
- def get_bytes(self, resource):
- with open(resource.path, 'rb') as f:
- return f.read()
-
- def get_size(self, resource):
- return os.path.getsize(resource.path)
-
- def get_resources(self, resource):
- def allowed(f):
- return (f != '__pycache__' and not
- f.endswith(self.skipped_extensions))
- return set([f for f in os.listdir(resource.path) if allowed(f)])
-
- def is_container(self, resource):
- return self._is_directory(resource.path)
-
- _is_directory = staticmethod(os.path.isdir)
-
- def iterator(self, resource_name):
- resource = self.find(resource_name)
- if resource is not None:
- todo = [resource]
- while todo:
- resource = todo.pop(0)
- yield resource
- if resource.is_container:
- rname = resource.name
- for name in resource.resources:
- if not rname:
- new_name = name
- else:
- new_name = '/'.join([rname, name])
- child = self.find(new_name)
- if child.is_container:
- todo.append(child)
- else:
- yield child
-
-
-class ZipResourceFinder(ResourceFinder):
- """
- Resource finder for resources in .zip files.
- """
- def __init__(self, module):
- super(ZipResourceFinder, self).__init__(module)
- archive = self.loader.archive
- self.prefix_len = 1 + len(archive)
- # PyPy doesn't have a _files attr on zipimporter, and you can't set one
- if hasattr(self.loader, '_files'):
- self._files = self.loader._files
- else:
- self._files = zipimport._zip_directory_cache[archive]
- self.index = sorted(self._files)
-
- def _adjust_path(self, path):
- return path
-
- def _find(self, path):
- path = path[self.prefix_len:]
- if path in self._files:
- result = True
- else:
- if path and path[-1] != os.sep:
- path = path + os.sep
- i = bisect.bisect(self.index, path)
- try:
- result = self.index[i].startswith(path)
- except IndexError:
- result = False
- if not result:
- logger.debug('_find failed: %r %r', path, self.loader.prefix)
- else:
- logger.debug('_find worked: %r %r', path, self.loader.prefix)
- return result
-
- def get_cache_info(self, resource):
- prefix = self.loader.archive
- path = resource.path[1 + len(prefix):]
- return prefix, path
-
- def get_bytes(self, resource):
- return self.loader.get_data(resource.path)
-
- def get_stream(self, resource):
- return io.BytesIO(self.get_bytes(resource))
-
- def get_size(self, resource):
- path = resource.path[self.prefix_len:]
- return self._files[path][3]
-
- def get_resources(self, resource):
- path = resource.path[self.prefix_len:]
- if path and path[-1] != os.sep:
- path += os.sep
- plen = len(path)
- result = set()
- i = bisect.bisect(self.index, path)
- while i < len(self.index):
- if not self.index[i].startswith(path):
- break
- s = self.index[i][plen:]
- result.add(s.split(os.sep, 1)[0]) # only immediate children
- i += 1
- return result
-
- def _is_directory(self, path):
- path = path[self.prefix_len:]
- if path and path[-1] != os.sep:
- path += os.sep
- i = bisect.bisect(self.index, path)
- try:
- result = self.index[i].startswith(path)
- except IndexError:
- result = False
- return result
-
-
-_finder_registry = {
- type(None): ResourceFinder,
- zipimport.zipimporter: ZipResourceFinder
-}
-
-try:
- # In Python 3.6, _frozen_importlib -> _frozen_importlib_external
- try:
- import _frozen_importlib_external as _fi
- except ImportError:
- import _frozen_importlib as _fi
- _finder_registry[_fi.SourceFileLoader] = ResourceFinder
- _finder_registry[_fi.FileFinder] = ResourceFinder
- # See issue #146
- _finder_registry[_fi.SourcelessFileLoader] = ResourceFinder
- del _fi
-except (ImportError, AttributeError):
- pass
-
-
-def register_finder(loader, finder_maker):
- _finder_registry[type(loader)] = finder_maker
-
-
-_finder_cache = {}
-
-
-def finder(package):
- """
- Return a resource finder for a package.
- :param package: The name of the package.
- :return: A :class:`ResourceFinder` instance for the package.
- """
- if package in _finder_cache:
- result = _finder_cache[package]
- else:
- if package not in sys.modules:
- __import__(package)
- module = sys.modules[package]
- path = getattr(module, '__path__', None)
- if path is None:
- raise DistlibException('You cannot get a finder for a module, '
- 'only for a package')
- loader = getattr(module, '__loader__', None)
- finder_maker = _finder_registry.get(type(loader))
- if finder_maker is None:
- raise DistlibException('Unable to locate finder for %r' % package)
- result = finder_maker(module)
- _finder_cache[package] = result
- return result
-
-
-_dummy_module = types.ModuleType(str('__dummy__'))
-
-
-def finder_for_path(path):
- """
- Return a resource finder for a path, which should represent a container.
-
- :param path: The path.
- :return: A :class:`ResourceFinder` instance for the path.
- """
- result = None
- # calls any path hooks, gets importer into cache
- pkgutil.get_importer(path)
- loader = sys.path_importer_cache.get(path)
- finder = _finder_registry.get(type(loader))
- if finder:
- module = _dummy_module
- module.__file__ = os.path.join(path, '')
- module.__loader__ = loader
- result = finder(module)
- return result
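
The module above is easiest to follow from the consumer side. The sketch below is not part of the deleted file; it is a minimal usage example based only on the API shown above, and the package name `mypackage` and resource path `data/defaults.json` are made-up placeholders.

```python
from pip._vendor.distlib.resources import finder

# Obtain a ResourceFinder for an importable package (finder() raises
# DistlibException for plain modules, as documented above).
f = finder("mypackage")

# find() returns a Resource for files, a ResourceContainer for directories,
# or None when nothing matches.
res = f.find("data/defaults.json")
if res is not None and not res.is_container:
    raw = res.bytes              # whole payload as bytes
    size = res.size              # size in bytes
    with res.as_stream() as fp:  # a fresh binary stream on each call
        first_line = fp.readline()
```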
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/segment.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/segment.py
deleted file mode 100644
index e125798463512ce4322a2cc139b4e5c1515e5c05..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/segment.py
+++ /dev/null
@@ -1,739 +0,0 @@
-from enum import IntEnum
-from functools import lru_cache
-from itertools import filterfalse
-from logging import getLogger
-from operator import attrgetter
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Tuple,
- Type,
- Union,
-)
-
-from .cells import (
- _is_single_cell_widths,
- cached_cell_len,
- cell_len,
- get_character_cell_size,
- set_cell_size,
-)
-from .repr import Result, rich_repr
-from .style import Style
-
-if TYPE_CHECKING:
- from .console import Console, ConsoleOptions, RenderResult
-
-log = getLogger("rich")
-
-
-class ControlType(IntEnum):
- """Non-printable control codes which typically translate to ANSI codes."""
-
- BELL = 1
- CARRIAGE_RETURN = 2
- HOME = 3
- CLEAR = 4
- SHOW_CURSOR = 5
- HIDE_CURSOR = 6
- ENABLE_ALT_SCREEN = 7
- DISABLE_ALT_SCREEN = 8
- CURSOR_UP = 9
- CURSOR_DOWN = 10
- CURSOR_FORWARD = 11
- CURSOR_BACKWARD = 12
- CURSOR_MOVE_TO_COLUMN = 13
- CURSOR_MOVE_TO = 14
- ERASE_IN_LINE = 15
- SET_WINDOW_TITLE = 16
-
-
-ControlCode = Union[
- Tuple[ControlType],
- Tuple[ControlType, Union[int, str]],
- Tuple[ControlType, int, int],
-]
-
-
-@rich_repr()
-class Segment(NamedTuple):
- """A piece of text with associated style. Segments are produced by the Console render process and
-    are ultimately converted into strings to be written to the terminal.
-
- Args:
- text (str): A piece of text.
- style (:class:`~rich.style.Style`, optional): An optional style to apply to the text.
- control (Tuple[ControlCode], optional): Optional sequence of control codes.
-
- Attributes:
- cell_length (int): The cell length of this Segment.
- """
-
- text: str
- style: Optional[Style] = None
- control: Optional[Sequence[ControlCode]] = None
-
- @property
- def cell_length(self) -> int:
- """The number of terminal cells required to display self.text.
-
- Returns:
- int: A number of cells.
- """
- text, _style, control = self
- return 0 if control else cell_len(text)
-
- def __rich_repr__(self) -> Result:
- yield self.text
- if self.control is None:
- if self.style is not None:
- yield self.style
- else:
- yield self.style
- yield self.control
-
- def __bool__(self) -> bool:
- """Check if the segment contains text."""
- return bool(self.text)
-
- @property
- def is_control(self) -> bool:
- """Check if the segment contains control codes."""
- return self.control is not None
-
- @classmethod
- @lru_cache(1024 * 16)
- def _split_cells(cls, segment: "Segment", cut: int) -> Tuple["Segment", "Segment"]:
-
- text, style, control = segment
- _Segment = Segment
-
- cell_length = segment.cell_length
- if cut >= cell_length:
- return segment, _Segment("", style, control)
-
- cell_size = get_character_cell_size
-
- pos = int((cut / cell_length) * (len(text) - 1))
-
- before = text[:pos]
- cell_pos = cell_len(before)
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- while pos < len(text):
- char = text[pos]
- pos += 1
- cell_pos += cell_size(char)
- before = text[:pos]
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- if cell_pos > cut:
- return (
- _Segment(before[: pos - 1] + " ", style, control),
- _Segment(" " + text[pos:], style, control),
- )
-
- raise AssertionError("Will never reach here")
-
- def split_cells(self, cut: int) -> Tuple["Segment", "Segment"]:
-        """Split segment into two segments at the specified column.
-
- If the cut point falls in the middle of a 2-cell wide character then it is replaced
- by two spaces, to preserve the display width of the parent segment.
-
- Returns:
- Tuple[Segment, Segment]: Two segments.
- """
- text, style, control = self
-
- if _is_single_cell_widths(text):
- # Fast path with all 1 cell characters
- if cut >= len(text):
- return self, Segment("", style, control)
- return (
- Segment(text[:cut], style, control),
- Segment(text[cut:], style, control),
- )
-
- return self._split_cells(self, cut)
-
- @classmethod
- def line(cls) -> "Segment":
- """Make a new line segment."""
- return cls("\n")
-
- @classmethod
- def apply_style(
- cls,
- segments: Iterable["Segment"],
- style: Optional[Style] = None,
- post_style: Optional[Style] = None,
- ) -> Iterable["Segment"]:
- """Apply style(s) to an iterable of segments.
-
- Returns an iterable of segments where the style is replaced by ``style + segment.style + post_style``.
-
- Args:
- segments (Iterable[Segment]): Segments to process.
- style (Style, optional): Base style. Defaults to None.
- post_style (Style, optional): Style to apply on top of segment style. Defaults to None.
-
- Returns:
-            Iterable[Segment]: A new iterable of segments (possibly the same iterable).
- """
- result_segments = segments
- if style:
- apply = style.__add__
- result_segments = (
- cls(text, None if control else apply(_style), control)
- for text, _style, control in result_segments
- )
- if post_style:
- result_segments = (
- cls(
- text,
- (
- None
- if control
- else (_style + post_style if _style else post_style)
- ),
- control,
- )
- for text, _style, control in result_segments
- )
- return result_segments
-
- @classmethod
- def filter_control(
- cls, segments: Iterable["Segment"], is_control: bool = False
- ) -> Iterable["Segment"]:
- """Filter segments by ``is_control`` attribute.
-
- Args:
- segments (Iterable[Segment]): An iterable of Segment instances.
- is_control (bool, optional): is_control flag to match in search.
-
- Returns:
-            Iterable[Segment]: An iterable of Segment instances.
-
- """
- if is_control:
- return filter(attrgetter("control"), segments)
- else:
- return filterfalse(attrgetter("control"), segments)
-
- @classmethod
- def split_lines(cls, segments: Iterable["Segment"]) -> Iterable[List["Segment"]]:
-        """Split a sequence of segments into a list of lines.
-
- Args:
- segments (Iterable[Segment]): Segments potentially containing line feeds.
-
- Yields:
- Iterable[List[Segment]]: Iterable of segment lists, one per line.
- """
- line: List[Segment] = []
- append = line.append
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, style))
- if new_line:
- yield line
- line = []
- append = line.append
- else:
- append(segment)
- if line:
- yield line
-
- @classmethod
- def split_and_crop_lines(
- cls,
- segments: Iterable["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- include_new_lines: bool = True,
- ) -> Iterable[List["Segment"]]:
-        """Split segments into lines, and crop lines greater than a given length.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments, probably
- generated from console.render.
- length (int): Desired line length.
- style (Style, optional): Style to use for any padding.
- pad (bool): Enable padding of lines that are less than `length`.
-
- Returns:
- Iterable[List[Segment]]: An iterable of lines of segments.
- """
- line: List[Segment] = []
- append = line.append
-
- adjust_line_length = cls.adjust_line_length
- new_line_segment = cls("\n")
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, segment_style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, segment_style))
- if new_line:
- cropped_line = adjust_line_length(
- line, length, style=style, pad=pad
- )
- if include_new_lines:
- cropped_line.append(new_line_segment)
- yield cropped_line
- line.clear()
- else:
- append(segment)
- if line:
- yield adjust_line_length(line, length, style=style, pad=pad)
-
- @classmethod
- def adjust_line_length(
- cls,
- line: List["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- ) -> List["Segment"]:
- """Adjust a line to a given width (cropping or padding as required).
-
- Args:
-            line (List[Segment]): A list of segments in a single line.
- length (int): The desired width of the line.
- style (Style, optional): The style of padding if used (space on the end). Defaults to None.
- pad (bool, optional): Pad lines with spaces if they are shorter than `length`. Defaults to True.
-
- Returns:
- List[Segment]: A line of segments with the desired length.
- """
- line_length = sum(segment.cell_length for segment in line)
- new_line: List[Segment]
-
- if line_length < length:
- if pad:
- new_line = line + [cls(" " * (length - line_length), style)]
- else:
- new_line = line[:]
- elif line_length > length:
- new_line = []
- append = new_line.append
- line_length = 0
- for segment in line:
- segment_length = segment.cell_length
- if line_length + segment_length < length or segment.control:
- append(segment)
- line_length += segment_length
- else:
- text, segment_style, _ = segment
- text = set_cell_size(text, length - line_length)
- append(cls(text, segment_style))
- break
- else:
- new_line = line[:]
- return new_line
-
- @classmethod
- def get_line_length(cls, line: List["Segment"]) -> int:
- """Get the length of list of segments.
-
- Args:
- line (List[Segment]): A line encoded as a list of Segments (assumes no '\\\\n' characters),
-
- Returns:
- int: The length of the line.
- """
- _cell_len = cell_len
- return sum(_cell_len(text) for text, style, control in line if not control)
-
- @classmethod
- def get_shape(cls, lines: List[List["Segment"]]) -> Tuple[int, int]:
- """Get the shape (enclosing rectangle) of a list of lines.
-
- Args:
- lines (List[List[Segment]]): A list of lines (no '\\\\n' characters).
-
- Returns:
- Tuple[int, int]: Width and height in characters.
- """
- get_line_length = cls.get_line_length
- max_width = max(get_line_length(line) for line in lines) if lines else 0
- return (max_width, len(lines))
-
- @classmethod
- def set_shape(
- cls,
- lines: List[List["Segment"]],
- width: int,
- height: Optional[int] = None,
- style: Optional[Style] = None,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Set the shape of a list of lines (enclosing rectangle).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style, optional): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- _height = height or len(lines)
-
- blank = (
- [cls(" " * width + "\n", style)] if new_lines else [cls(" " * width, style)]
- )
-
- adjust_line_length = cls.adjust_line_length
- shaped_lines = lines[:_height]
- shaped_lines[:] = [
- adjust_line_length(line, width, style=style) for line in lines
- ]
- if len(shaped_lines) < _height:
- shaped_lines.extend([blank] * (_height - len(shaped_lines)))
- return shaped_lines
-
- @classmethod
- def align_top(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to top (adds extra lines to bottom as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = lines + [[blank]] * extra_lines
- return lines
-
- @classmethod
- def align_bottom(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns render to bottom (adds extra lines above as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added. Defaults to None.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = [[blank]] * extra_lines + lines
- return lines
-
- @classmethod
- def align_middle(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to middle (adds extra lines to above and below as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- top_lines = extra_lines // 2
- bottom_lines = extra_lines - top_lines
- lines = [[blank]] * top_lines + lines + [[blank]] * bottom_lines
- return lines
-
- @classmethod
- def simplify(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Simplify an iterable of segments by combining contiguous segments with the same style.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Returns:
- Iterable[Segment]: A possibly smaller iterable of segments that will render the same way.
- """
- iter_segments = iter(segments)
- try:
- last_segment = next(iter_segments)
- except StopIteration:
- return
-
- _Segment = Segment
- for segment in iter_segments:
- if last_segment.style == segment.style and not segment.control:
- last_segment = _Segment(
- last_segment.text + segment.text, last_segment.style
- )
- else:
- yield last_segment
- last_segment = segment
- yield last_segment
-
- @classmethod
- def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all links from an iterable of styles.
-
- Args:
- segments (Iterable[Segment]): An iterable segments.
-
- Yields:
- Segment: Segments with link removed.
- """
- for segment in segments:
- if segment.control or segment.style is None:
- yield segment
- else:
- text, style, _control = segment
- yield cls(text, style.update_link(None) if style else None)
-
- @classmethod
- def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all styles from an iterable of segments.
-
- Args:
- segments (Iterable[Segment]): An iterable segments.
-
- Yields:
- Segment: Segments with styles replace with None
- """
- for text, _style, control in segments:
- yield cls(text, None, control)
-
- @classmethod
- def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all color from an iterable of segments.
-
- Args:
- segments (Iterable[Segment]): An iterable segments.
-
- Yields:
- Segment: Segments with colorless style.
- """
-
- cache: Dict[Style, Style] = {}
- for text, style, control in segments:
- if style:
- colorless_style = cache.get(style)
- if colorless_style is None:
- colorless_style = style.without_color
- cache[style] = colorless_style
- yield cls(text, colorless_style, control)
- else:
- yield cls(text, None, control)
-
- @classmethod
- def divide(
- cls, segments: Iterable["Segment"], cuts: Iterable[int]
- ) -> Iterable[List["Segment"]]:
-        """Divides an iterable of segments into portions.
-
- Args:
- cuts (Iterable[int]): Cell positions where to divide.
-
- Yields:
- [Iterable[List[Segment]]]: An iterable of Segments in List.
- """
- split_segments: List["Segment"] = []
- add_segment = split_segments.append
-
- iter_cuts = iter(cuts)
-
- while True:
- cut = next(iter_cuts, -1)
- if cut == -1:
- return []
- if cut != 0:
- break
- yield []
- pos = 0
-
- segments_clear = split_segments.clear
- segments_copy = split_segments.copy
-
- _cell_len = cached_cell_len
- for segment in segments:
- text, _style, control = segment
- while text:
- end_pos = pos if control else pos + _cell_len(text)
- if end_pos < cut:
- add_segment(segment)
- pos = end_pos
- break
-
- if end_pos == cut:
- add_segment(segment)
- yield segments_copy()
- segments_clear()
- pos = end_pos
-
- cut = next(iter_cuts, -1)
- if cut == -1:
- if split_segments:
- yield segments_copy()
- return
-
- break
-
- else:
- before, segment = segment.split_cells(cut - pos)
- text, _style, control = segment
- add_segment(before)
- yield segments_copy()
- segments_clear()
- pos = cut
-
- cut = next(iter_cuts, -1)
- if cut == -1:
- if split_segments:
- yield segments_copy()
- return
-
- yield segments_copy()
-
-
-class Segments:
- """A simple renderable to render an iterable of segments. This class may be useful if
- you want to print segments outside of a __rich_console__ method.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
- new_lines (bool, optional): Add new lines between segments. Defaults to False.
- """
-
- def __init__(self, segments: Iterable[Segment], new_lines: bool = False) -> None:
- self.segments = list(segments)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- line = Segment.line()
- for segment in self.segments:
- yield segment
- yield line
- else:
- yield from self.segments
-
-
-class SegmentLines:
- def __init__(self, lines: Iterable[List[Segment]], new_lines: bool = False) -> None:
- """A simple renderable containing a number of lines of segments. May be used as an intermediate
- in rendering process.
-
- Args:
- lines (Iterable[List[Segment]]): Lists of segments forming lines.
- new_lines (bool, optional): Insert new lines after each line. Defaults to False.
- """
- self.lines = list(lines)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- new_line = Segment.line()
- for line in self.lines:
- yield from line
- yield new_line
- else:
- for line in self.lines:
- yield from line
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich.console import Console
- from pip._vendor.rich.syntax import Syntax
- from pip._vendor.rich.text import Text
-
- code = """from rich.console import Console
-console = Console()
-text = Text.from_markup("Hello, [bold magenta]World[/]!")
-console.print(text)"""
-
- text = Text.from_markup("Hello, [bold magenta]World[/]!")
-
- console = Console()
-
- console.rule("rich.Segment")
- console.print(
- "A Segment is the last step in the Rich render process before generating text with ANSI codes."
- )
- console.print("\nConsider the following code:\n")
- console.print(Syntax(code, "python", line_numbers=True))
- console.print()
- console.print(
- "When you call [b]print()[/b], Rich [i]renders[/i] the object in to the following:\n"
- )
- fragments = list(console.render(text))
- console.print(fragments)
- console.print()
- console.print("The Segments are then processed to produce the following output:\n")
- console.print(text)
- console.print(
- "\nYou will only need to know this if you are implementing your own Rich renderables."
- )
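
For orientation, here is a small sketch (not taken from the deleted file) showing how the Segment helpers defined above fit together; it only uses classes and classmethods that appear in the code.

```python
from pip._vendor.rich.segment import Segment, Segments
from pip._vendor.rich.style import Style

pieces = [
    Segment("Hello, "),
    Segment("World", Style(bold=True)),
    Segment("!\nSecond line"),
]

# split_lines() breaks on "\n" and yields one list of segments per line.
lines = list(Segment.split_lines(pieces))

# simplify() merges adjacent segments that share the same style.
merged = list(Segment.simplify(pieces))

# cell_length accounts for wide (2-cell) characters, not just len(text).
width = sum(segment.cell_length for segment in pieces)

# Segments is a tiny renderable wrapper for printing raw segments.
renderable = Segments(merged, new_lines=False)
```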
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/text_file.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/text_file.py
deleted file mode 100644
index 7274d4b16e1bee16751515f42793ebefdd769b96..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/text_file.py
+++ /dev/null
@@ -1,287 +0,0 @@
-"""text_file
-
-provides the TextFile class, which gives an interface to text files
-that (optionally) takes care of stripping comments, ignoring blank
-lines, and joining lines with backslashes."""
-
-import sys
-
-
-class TextFile:
- """Provides a file-like object that takes care of all the things you
- commonly want to do when processing a text file that has some
- line-by-line syntax: strip comments (as long as "#" is your
- comment character), skip blank lines, join adjacent lines by
- escaping the newline (ie. backslash at end of line), strip
- leading and/or trailing whitespace. All of these are optional
- and independently controllable.
-
- Provides a 'warn()' method so you can generate warning messages that
- report physical line number, even if the logical line in question
- spans multiple physical lines. Also provides 'unreadline()' for
- implementing line-at-a-time lookahead.
-
- Constructor is called as:
-
- TextFile (filename=None, file=None, **options)
-
- It bombs (RuntimeError) if both 'filename' and 'file' are None;
- 'filename' should be a string, and 'file' a file object (or
- something that provides 'readline()' and 'close()' methods). It is
- recommended that you supply at least 'filename', so that TextFile
- can include it in warning messages. If 'file' is not supplied,
- TextFile creates its own using 'io.open()'.
-
- The options are all boolean, and affect the value returned by
- 'readline()':
- strip_comments [default: true]
- strip from "#" to end-of-line, as well as any whitespace
- leading up to the "#" -- unless it is escaped by a backslash
- lstrip_ws [default: false]
- strip leading whitespace from each line before returning it
- rstrip_ws [default: true]
- strip trailing whitespace (including line terminator!) from
- each line before returning it
-       skip_blanks [default: true]
- skip lines that are empty *after* stripping comments and
- whitespace. (If both lstrip_ws and rstrip_ws are false,
- then some lines may consist of solely whitespace: these will
- *not* be skipped, even if 'skip_blanks' is true.)
- join_lines [default: false]
- if a backslash is the last non-newline character on a line
- after stripping comments and whitespace, join the following line
- to it to form one "logical line"; if N consecutive lines end
- with a backslash, then N+1 physical lines will be joined to
- form one logical line.
- collapse_join [default: false]
- strip leading whitespace from lines that are joined to their
- predecessor; only matters if (join_lines and not lstrip_ws)
- errors [default: 'strict']
- error handler used to decode the file content
-
- Note that since 'rstrip_ws' can strip the trailing newline, the
- semantics of 'readline()' must differ from those of the builtin file
- object's 'readline()' method! In particular, 'readline()' returns
- None for end-of-file: an empty string might just be a blank line (or
- an all-whitespace line), if 'rstrip_ws' is true but 'skip_blanks' is
- not."""
-
- default_options = {
- 'strip_comments': 1,
- 'skip_blanks': 1,
- 'lstrip_ws': 0,
- 'rstrip_ws': 1,
- 'join_lines': 0,
- 'collapse_join': 0,
- 'errors': 'strict',
- }
-
- def __init__(self, filename=None, file=None, **options):
- """Construct a new TextFile object. At least one of 'filename'
- (a string) and 'file' (a file-like object) must be supplied.
-        The keyword argument options are described above and affect
- the values returned by 'readline()'."""
- if filename is None and file is None:
- raise RuntimeError(
- "you must supply either or both of 'filename' and 'file'"
- )
-
- # set values for all options -- either from client option hash
- # or fallback to default_options
- for opt in self.default_options.keys():
- if opt in options:
- setattr(self, opt, options[opt])
- else:
- setattr(self, opt, self.default_options[opt])
-
- # sanity check client option hash
- for opt in options.keys():
- if opt not in self.default_options:
- raise KeyError("invalid TextFile option '%s'" % opt)
-
- if file is None:
- self.open(filename)
- else:
- self.filename = filename
- self.file = file
- self.current_line = 0 # assuming that file is at BOF!
-
- # 'linebuf' is a stack of lines that will be emptied before we
- # actually read from the file; it's only populated by an
- # 'unreadline()' operation
- self.linebuf = []
-
- def open(self, filename):
- """Open a new file named 'filename'. This overrides both the
- 'filename' and 'file' arguments to the constructor."""
- self.filename = filename
- self.file = open(self.filename, errors=self.errors)
- self.current_line = 0
-
- def close(self):
- """Close the current file and forget everything we know about it
- (filename, current line number)."""
- file = self.file
- self.file = None
- self.filename = None
- self.current_line = None
- file.close()
-
- def gen_error(self, msg, line=None):
- outmsg = []
- if line is None:
- line = self.current_line
- outmsg.append(self.filename + ", ")
- if isinstance(line, (list, tuple)):
- outmsg.append("lines %d-%d: " % tuple(line))
- else:
- outmsg.append("line %d: " % line)
- outmsg.append(str(msg))
- return "".join(outmsg)
-
- def error(self, msg, line=None):
- raise ValueError("error: " + self.gen_error(msg, line))
-
- def warn(self, msg, line=None):
- """Print (to stderr) a warning message tied to the current logical
- line in the current file. If the current logical line in the
- file spans multiple physical lines, the warning refers to the
-        whole range, eg. "lines 3-5". If 'line' is supplied, it overrides
- the current line number; it may be a list or tuple to indicate a
- range of physical lines, or an integer for a single physical
- line."""
- sys.stderr.write("warning: " + self.gen_error(msg, line) + "\n")
-
- def readline(self): # noqa: C901
- """Read and return a single logical line from the current file (or
- from an internal buffer if lines have previously been "unread"
- with 'unreadline()'). If the 'join_lines' option is true, this
- may involve reading multiple physical lines concatenated into a
- single string. Updates the current line number, so calling
- 'warn()' after 'readline()' emits a warning about the physical
- line(s) just read. Returns None on end-of-file, since the empty
-        string can occur if 'rstrip_ws' is true but 'skip_blanks' is
- not."""
- # If any "unread" lines waiting in 'linebuf', return the top
- # one. (We don't actually buffer read-ahead data -- lines only
- # get put in 'linebuf' if the client explicitly does an
- # 'unreadline()'.
- if self.linebuf:
- line = self.linebuf[-1]
- del self.linebuf[-1]
- return line
-
- buildup_line = ''
-
- while True:
- # read the line, make it None if EOF
- line = self.file.readline()
- if line == '':
- line = None
-
- if self.strip_comments and line:
-
- # Look for the first "#" in the line. If none, never
- # mind. If we find one and it's the first character, or
- # is not preceded by "\", then it starts a comment --
- # strip the comment, strip whitespace before it, and
- # carry on. Otherwise, it's just an escaped "#", so
- # unescape it (and any other escaped "#"'s that might be
- # lurking in there) and otherwise leave the line alone.
-
- pos = line.find("#")
- if pos == -1: # no "#" -- no comments
- pass
-
- # It's definitely a comment -- either "#" is the first
- # character, or it's elsewhere and unescaped.
- elif pos == 0 or line[pos - 1] != "\\":
- # Have to preserve the trailing newline, because it's
- # the job of a later step (rstrip_ws) to remove it --
- # and if rstrip_ws is false, we'd better preserve it!
- # (NB. this means that if the final line is all comment
- # and has no trailing newline, we will think that it's
- # EOF; I think that's OK.)
- eol = (line[-1] == '\n') and '\n' or ''
- line = line[0:pos] + eol
-
- # If all that's left is whitespace, then skip line
- # *now*, before we try to join it to 'buildup_line' --
- # that way constructs like
- # hello \\
- # # comment that should be ignored
- # there
- # result in "hello there".
- if line.strip() == "":
- continue
- else: # it's an escaped "#"
- line = line.replace("\\#", "#")
-
- # did previous line end with a backslash? then accumulate
- if self.join_lines and buildup_line:
- # oops: end of file
- if line is None:
- self.warn("continuation line immediately precedes " "end-of-file")
- return buildup_line
-
- if self.collapse_join:
- line = line.lstrip()
- line = buildup_line + line
-
- # careful: pay attention to line number when incrementing it
- if isinstance(self.current_line, list):
- self.current_line[1] = self.current_line[1] + 1
- else:
- self.current_line = [self.current_line, self.current_line + 1]
- # just an ordinary line, read it as usual
- else:
- if line is None: # eof
- return None
-
- # still have to be careful about incrementing the line number!
- if isinstance(self.current_line, list):
- self.current_line = self.current_line[1] + 1
- else:
- self.current_line = self.current_line + 1
-
- # strip whitespace however the client wants (leading and
- # trailing, or one or the other, or neither)
- if self.lstrip_ws and self.rstrip_ws:
- line = line.strip()
- elif self.lstrip_ws:
- line = line.lstrip()
- elif self.rstrip_ws:
- line = line.rstrip()
-
- # blank line (whether we rstrip'ed or not)? skip to next line
- # if appropriate
- if (line == '' or line == '\n') and self.skip_blanks:
- continue
-
- if self.join_lines:
- if line[-1] == '\\':
- buildup_line = line[:-1]
- continue
-
- if line[-2:] == '\\\n':
- buildup_line = line[0:-2] + '\n'
- continue
-
- # well, I guess there's some actual content there: return it
- return line
-
- def readlines(self):
- """Read and return the list of all logical lines remaining in the
- current file."""
- lines = []
- while True:
- line = self.readline()
- if line is None:
- return lines
- lines.append(line)
-
- def unreadline(self, line):
- """Push 'line' (a string) onto an internal buffer that will be
- checked by future 'readline()' calls. Handy for implementing
- a parser with line-at-a-time lookahead."""
- self.linebuf.append(line)
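The deleted `readline()`/`readlines()`/`unreadline()` methods above implement distutils-style logical-line reading: comments are stripped, blank lines are skipped, and backslash-continued physical lines are joined. A minimal, hedged sketch of that behaviour, assuming a distutils-style `TextFile` (e.g. `distutils.text_file.TextFile` on Pythons that still ship distutils); the option names mirror the attributes used in the code above:

```python
import io
from distutils.text_file import TextFile  # assumption: distutils is available

source = (
    "# a full-line comment\n"
    "hello \\\n"
    "# comment inside a continuation\n"
    "there\n"
    "value = 1  # trailing comment\n"
)

tf = TextFile(
    filename="<demo>",
    file=io.StringIO(source),
    strip_comments=True,
    skip_blanks=True,
    join_lines=True,
    collapse_join=True,
)
# Comments vanish, the backslash continuation is joined across the ignored
# comment line, and trailing whitespace is stripped (rstrip_ws defaults on).
print(tf.readlines())  # expected: ['hello there', 'value = 1']
tf.close()
```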
diff --git a/spaces/BrianL/CoE197-Fil-DialectTranslator/README.md b/spaces/BrianL/CoE197-Fil-DialectTranslator/README.md
deleted file mode 100644
index 66183047edd8baca4c70948ee676d9cc6f0f2043..0000000000000000000000000000000000000000
--- a/spaces/BrianL/CoE197-Fil-DialectTranslator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Space2
-emoji: 🌍
-colorFrom: indigo
-colorTo: gray
-sdk: gradio
-sdk_version: 2.8.14
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/modules.py b/spaces/CForGETaass/vits-uma-genshin-honkai/modules.py
deleted file mode 100644
index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000
--- a/spaces/CForGETaass/vits-uma-genshin-honkai/modules.py
+++ /dev/null
@@ -1,388 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
- self.kernel_size = kernel_size,
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
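The flow-style modules above (`Log`, `Flip`, `ElementwiseAffine`, `ResidualCouplingLayer`, `ConvFlow`) share one convention: the forward pass returns `(output, logdet)` while `reverse=True` returns only the inverted output. A standalone, hedged sketch of that convention using the same affine math as `ElementwiseAffine`, in plain PyTorch with no dependency on the deleted file:

```python
import torch

channels, batch, time = 4, 2, 8
m = torch.zeros(channels, 1)           # learned shift (zero-initialised, as above)
logs = torch.randn(channels, 1) * 0.1  # learned log-scale
x = torch.randn(batch, channels, time)
x_mask = torch.ones(batch, 1, time)

# forward: y = m + exp(logs) * x, logdet = sum(logs * mask)
y = (m + torch.exp(logs) * x) * x_mask
logdet = torch.sum(logs * x_mask, [1, 2])

# reverse: recover x exactly from y
x_rec = (y - m) * torch.exp(-logs) * x_mask
print(torch.allclose(x, x_rec, atol=1e-6), logdet.shape)  # True, torch.Size([2])
```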
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_playwright.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_playwright.py
deleted file mode 100644
index 4e388ded203cefb5e24f9116f7fe5b8a94893413..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/web_playwright.py
+++ /dev/null
@@ -1,80 +0,0 @@
-"""Web scraping commands using Playwright"""
-from __future__ import annotations
-
-try:
- from playwright.sync_api import sync_playwright
-except ImportError:
- print(
- "Playwright not installed. Please install it with 'pip install playwright' to use."
- )
-from bs4 import BeautifulSoup
-
-from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-
-def scrape_text(url: str) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- with sync_playwright() as p:
- browser = p.chromium.launch()
- page = browser.new_page()
-
- try:
- page.goto(url)
- html_content = page.content()
- soup = BeautifulSoup(html_content, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
-
- except Exception as e:
- text = f"Error: {str(e)}"
-
- finally:
- browser.close()
-
- return text
-
-
-def scrape_links(url: str) -> str | list[str]:
- """Scrape links from a webpage
-
- Args:
- url (str): The URL to scrape links from
-
- Returns:
- Union[str, List[str]]: The scraped links
- """
- with sync_playwright() as p:
- browser = p.chromium.launch()
- page = browser.new_page()
-
- try:
- page.goto(url)
- html_content = page.content()
- soup = BeautifulSoup(html_content, "html.parser")
-
- for script in soup(["script", "style"]):
- script.extract()
-
- hyperlinks = extract_hyperlinks(soup, url)
- formatted_links = format_hyperlinks(hyperlinks)
-
- except Exception as e:
- formatted_links = f"Error: {str(e)}"
-
- finally:
- browser.close()
-
- return formatted_links
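Both helpers above open a headless Chromium page, parse the HTML with BeautifulSoup, and return an error string instead of raising on failure. A hedged usage sketch, assuming the AutoGPT package is importable and Playwright's Chromium is installed (`pip install playwright` followed by `playwright install chromium`); the URL is a placeholder:

```python
from autogpt.commands.web_playwright import scrape_text, scrape_links

url = "https://example.com"  # hypothetical target URL
text = scrape_text(url)      # plain text with scripts/styles removed
links = scrape_links(url)    # formatted hyperlink list, or an error string

print(text[:200])
print(links if isinstance(links, str) else links[:5])
```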
diff --git a/spaces/Codecooker/rvcapi/src/infer_pack/modules.py b/spaces/Codecooker/rvcapi/src/infer_pack/modules.py
deleted file mode 100644
index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000
--- a/spaces/Codecooker/rvcapi/src/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMorph.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMorph.py
deleted file mode 100644
index 6fccc315b3d25cf2cfe2dec952c938041f1d4531..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMorph.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# A binary morphology add-on for the Python Imaging Library
-#
-# History:
-# 2014-06-04 Initial version.
-#
-# Copyright (c) 2014 Dov Grobgeld
-
-import re
-
-from . import Image, _imagingmorph
-
-LUT_SIZE = 1 << 9
-
-# fmt: off
-ROTATION_MATRIX = [
- 6, 3, 0,
- 7, 4, 1,
- 8, 5, 2,
-]
-MIRROR_MATRIX = [
- 2, 1, 0,
- 5, 4, 3,
- 8, 7, 6,
-]
-# fmt: on
-
-
-class LutBuilder:
- """A class for building a MorphLut from a descriptive language
-
- The input patterns are a list of string sequences like these::
-
- 4:(...
- .1.
- 111)->1
-
- (whitespace, including line breaks, is ignored). The option 4
- describes a series of symmetry operations (in this case a
- 4-rotation), the pattern is described by:
-
- - . or X - Ignore
- - 1 - Pixel is on
- - 0 - Pixel is off
-
- The result of the operation is described after the "->" string.
-
- The default is to return the current pixel value, which is
- returned if no other match is found.
-
- Operations:
-
- - 4 - 4 way rotation
- - N - Negate
- - 1 - Dummy op for no other operation (an op must always be given)
- - M - Mirroring
-
- Example::
-
- lb = LutBuilder(patterns = ["4:(... .1. 111)->1"])
- lut = lb.build_lut()
-
- """
-
- def __init__(self, patterns=None, op_name=None):
- if patterns is not None:
- self.patterns = patterns
- else:
- self.patterns = []
- self.lut = None
- if op_name is not None:
- known_patterns = {
- "corner": ["1:(... ... ...)->0", "4:(00. 01. ...)->1"],
- "dilation4": ["4:(... .0. .1.)->1"],
- "dilation8": ["4:(... .0. .1.)->1", "4:(... .0. ..1)->1"],
- "erosion4": ["4:(... .1. .0.)->0"],
- "erosion8": ["4:(... .1. .0.)->0", "4:(... .1. ..0)->0"],
- "edge": [
- "1:(... ... ...)->0",
- "4:(.0. .1. ...)->1",
- "4:(01. .1. ...)->1",
- ],
- }
- if op_name not in known_patterns:
- msg = "Unknown pattern " + op_name + "!"
- raise Exception(msg)
-
- self.patterns = known_patterns[op_name]
-
- def add_patterns(self, patterns):
- self.patterns += patterns
-
- def build_default_lut(self):
- symbols = [0, 1]
- m = 1 << 4 # pos of current pixel
- self.lut = bytearray(symbols[(i & m) > 0] for i in range(LUT_SIZE))
-
- def get_lut(self):
- return self.lut
-
- def _string_permute(self, pattern, permutation):
- """string_permute takes a pattern and a permutation and returns the
- string permuted according to the permutation list.
- """
- assert len(permutation) == 9
- return "".join(pattern[p] for p in permutation)
-
- def _pattern_permute(self, basic_pattern, options, basic_result):
- """pattern_permute takes a basic pattern and its result and clones
- the pattern according to the modifications described in the $options
- parameter. It returns a list of all cloned patterns."""
- patterns = [(basic_pattern, basic_result)]
-
- # rotations
- if "4" in options:
- res = patterns[-1][1]
- for i in range(4):
- patterns.append(
- (self._string_permute(patterns[-1][0], ROTATION_MATRIX), res)
- )
- # mirror
- if "M" in options:
- n = len(patterns)
- for pattern, res in patterns[:n]:
- patterns.append((self._string_permute(pattern, MIRROR_MATRIX), res))
-
- # negate
- if "N" in options:
- n = len(patterns)
- for pattern, res in patterns[:n]:
- # Swap 0 and 1
- pattern = pattern.replace("0", "Z").replace("1", "0").replace("Z", "1")
- res = 1 - int(res)
- patterns.append((pattern, res))
-
- return patterns
-
- def build_lut(self):
- """Compile all patterns into a morphology lut.
-
- TBD: Build based on (file) morphlut:modify_lut
- """
- self.build_default_lut()
- patterns = []
-
- # Parse and create symmetries of the patterns strings
- for p in self.patterns:
- m = re.search(r"(\w*):?\s*\((.+?)\)\s*->\s*(\d)", p.replace("\n", ""))
- if not m:
- msg = 'Syntax error in pattern "' + p + '"'
- raise Exception(msg)
- options = m.group(1)
- pattern = m.group(2)
- result = int(m.group(3))
-
- # Get rid of spaces
- pattern = pattern.replace(" ", "").replace("\n", "")
-
- patterns += self._pattern_permute(pattern, options, result)
-
- # compile the patterns into regular expressions for speed
- for i, pattern in enumerate(patterns):
- p = pattern[0].replace(".", "X").replace("X", "[01]")
- p = re.compile(p)
- patterns[i] = (p, pattern[1])
-
- # Step through table and find patterns that match.
- # Note that all the patterns are searched. The last one
- # caught overrides
- for i in range(LUT_SIZE):
- # Build the bit pattern
- bitpattern = bin(i)[2:]
- bitpattern = ("0" * (9 - len(bitpattern)) + bitpattern)[::-1]
-
- for p, r in patterns:
- if p.match(bitpattern):
- self.lut[i] = [0, 1][r]
-
- return self.lut
-
-
-class MorphOp:
- """A class for binary morphological operators"""
-
- def __init__(self, lut=None, op_name=None, patterns=None):
- """Create a binary morphological operator"""
- self.lut = lut
- if op_name is not None:
- self.lut = LutBuilder(op_name=op_name).build_lut()
- elif patterns is not None:
- self.lut = LutBuilder(patterns=patterns).build_lut()
-
- def apply(self, image):
- """Run a single morphological operation on an image
-
- Returns a tuple of the number of changed pixels and the
- morphed image"""
- if self.lut is None:
- msg = "No operator loaded"
- raise Exception(msg)
-
- if image.mode != "L":
- msg = "Image mode must be L"
- raise ValueError(msg)
- outimage = Image.new(image.mode, image.size, None)
- count = _imagingmorph.apply(bytes(self.lut), image.im.id, outimage.im.id)
- return count, outimage
-
- def match(self, image):
- """Get a list of coordinates matching the morphological operation on
- an image.
-
- Returns a list of tuples of (x,y) coordinates
- of all matching pixels. See :ref:`coordinate-system`."""
- if self.lut is None:
- msg = "No operator loaded"
- raise Exception(msg)
-
- if image.mode != "L":
- msg = "Image mode must be L"
- raise ValueError(msg)
- return _imagingmorph.match(bytes(self.lut), image.im.id)
-
- def get_on_pixels(self, image):
- """Get a list of all turned on pixels in a binary image
-
- Returns a list of tuples of (x,y) coordinates
- of all matching pixels. See :ref:`coordinate-system`."""
-
- if image.mode != "L":
- msg = "Image mode must be L"
- raise ValueError(msg)
- return _imagingmorph.get_on_pixels(image.im.id)
-
- def load_lut(self, filename):
- """Load an operator from an mrl file"""
- with open(filename, "rb") as f:
- self.lut = bytearray(f.read())
-
- if len(self.lut) != LUT_SIZE:
- self.lut = None
- msg = "Wrong size operator file!"
- raise Exception(msg)
-
- def save_lut(self, filename):
- """Save an operator to an mrl file"""
- if self.lut is None:
- msg = "No operator loaded"
- raise Exception(msg)
- with open(filename, "wb") as f:
- f.write(self.lut)
-
- def set_lut(self, lut):
- """Set the lut from an external source"""
- self.lut = lut
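`LutBuilder` compiles the pattern language described in its docstring into a 512-entry lookup table, and `MorphOp` applies that table to mode-"L" images. A hedged sketch using the upstream `PIL.ImageMorph` module (which this vendored copy mirrors), exercising one of the built-in `known_patterns` operators:

```python
from PIL import Image, ImageDraw, ImageMorph

# Build a binary test image: a white square on a black background (mode "L").
im = Image.new("L", (32, 32), 0)
ImageDraw.Draw(im).rectangle([8, 8, 23, 23], fill=255)

op = ImageMorph.MorphOp(op_name="dilation8")  # 8-connected dilation from known_patterns
changed, dilated = op.apply(im)               # (number of changed pixels, morphed image)
print(changed, dilated.size)

on_pixels = op.get_on_pixels(dilated)         # list of (x, y) coordinates that are on
print(len(on_pixels))
```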
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/_compat.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/_compat.py
deleted file mode 100644
index 2233fe33c72a3ac8888bb6d143922d031539c925..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/_compat.py
+++ /dev/null
@@ -1,616 +0,0 @@
-from collections import deque
-from copy import copy
-from dataclasses import dataclass, is_dataclass
-from enum import Enum
-from typing import (
- Any,
- Callable,
- Deque,
- Dict,
- FrozenSet,
- List,
- Mapping,
- Sequence,
- Set,
- Tuple,
- Type,
- Union,
-)
-
-from fastapi.exceptions import RequestErrorModel
-from fastapi.types import IncEx, ModelNameMap, UnionType
-from pydantic import BaseModel, create_model
-from pydantic.version import VERSION as PYDANTIC_VERSION
-from starlette.datastructures import UploadFile
-from typing_extensions import Annotated, Literal, get_args, get_origin
-
-PYDANTIC_V2 = PYDANTIC_VERSION.startswith("2.")
-
-
-sequence_annotation_to_type = {
- Sequence: list,
- List: list,
- list: list,
- Tuple: tuple,
- tuple: tuple,
- Set: set,
- set: set,
- FrozenSet: frozenset,
- frozenset: frozenset,
- Deque: deque,
- deque: deque,
-}
-
-sequence_types = tuple(sequence_annotation_to_type.keys())
-
-if PYDANTIC_V2:
- from pydantic import PydanticSchemaGenerationError as PydanticSchemaGenerationError
- from pydantic import TypeAdapter
- from pydantic import ValidationError as ValidationError
- from pydantic._internal._schema_generation_shared import ( # type: ignore[attr-defined]
- GetJsonSchemaHandler as GetJsonSchemaHandler,
- )
- from pydantic._internal._typing_extra import eval_type_lenient
- from pydantic._internal._utils import lenient_issubclass as lenient_issubclass
- from pydantic.fields import FieldInfo
- from pydantic.json_schema import GenerateJsonSchema as GenerateJsonSchema
- from pydantic.json_schema import JsonSchemaValue as JsonSchemaValue
- from pydantic_core import CoreSchema as CoreSchema
- from pydantic_core import MultiHostUrl as MultiHostUrl
- from pydantic_core import PydanticUndefined, PydanticUndefinedType
- from pydantic_core import Url as Url
- from pydantic_core.core_schema import (
- general_plain_validator_function as general_plain_validator_function,
- )
-
- Required = PydanticUndefined
- Undefined = PydanticUndefined
- UndefinedType = PydanticUndefinedType
- evaluate_forwardref = eval_type_lenient
- Validator = Any
-
- class BaseConfig:
- pass
-
- class ErrorWrapper(Exception):
- pass
-
- @dataclass
- class ModelField:
- field_info: FieldInfo
- name: str
- mode: Literal["validation", "serialization"] = "validation"
-
- @property
- def alias(self) -> str:
- a = self.field_info.alias
- return a if a is not None else self.name
-
- @property
- def required(self) -> bool:
- return self.field_info.is_required()
-
- @property
- def default(self) -> Any:
- return self.get_default()
-
- @property
- def type_(self) -> Any:
- return self.field_info.annotation
-
- def __post_init__(self) -> None:
- self._type_adapter: TypeAdapter[Any] = TypeAdapter(
- Annotated[self.field_info.annotation, self.field_info]
- )
-
- def get_default(self) -> Any:
- if self.field_info.is_required():
- return Undefined
- return self.field_info.get_default(call_default_factory=True)
-
- def validate(
- self,
- value: Any,
- values: Dict[str, Any] = {}, # noqa: B006
- *,
- loc: Tuple[Union[int, str], ...] = (),
- ) -> Tuple[Any, Union[List[Dict[str, Any]], None]]:
- try:
- return (
- self._type_adapter.validate_python(value, from_attributes=True),
- None,
- )
- except ValidationError as exc:
- return None, _regenerate_error_with_loc(
- errors=exc.errors(), loc_prefix=loc
- )
-
- def serialize(
- self,
- value: Any,
- *,
- mode: Literal["json", "python"] = "json",
- include: Union[IncEx, None] = None,
- exclude: Union[IncEx, None] = None,
- by_alias: bool = True,
- exclude_unset: bool = False,
- exclude_defaults: bool = False,
- exclude_none: bool = False,
- ) -> Any:
- # What calls this code passes a value that already called
- # self._type_adapter.validate_python(value)
- return self._type_adapter.dump_python(
- value,
- mode=mode,
- include=include,
- exclude=exclude,
- by_alias=by_alias,
- exclude_unset=exclude_unset,
- exclude_defaults=exclude_defaults,
- exclude_none=exclude_none,
- )
-
- def __hash__(self) -> int:
- # Each ModelField is unique for our purposes, to allow making a dict from
- # ModelField to its JSON Schema.
- return id(self)
-
- def get_annotation_from_field_info(
- annotation: Any, field_info: FieldInfo, field_name: str
- ) -> Any:
- return annotation
-
- def _normalize_errors(errors: Sequence[Any]) -> List[Dict[str, Any]]:
- return errors # type: ignore[return-value]
-
- def _model_rebuild(model: Type[BaseModel]) -> None:
- model.model_rebuild()
-
- def _model_dump(
- model: BaseModel, mode: Literal["json", "python"] = "json", **kwargs: Any
- ) -> Any:
- return model.model_dump(mode=mode, **kwargs)
-
- def _get_model_config(model: BaseModel) -> Any:
- return model.model_config
-
- def get_schema_from_model_field(
- *,
- field: ModelField,
- schema_generator: GenerateJsonSchema,
- model_name_map: ModelNameMap,
- field_mapping: Dict[
- Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue
- ],
- ) -> Dict[str, Any]:
- # This expects that GenerateJsonSchema was already used to generate the definitions
- json_schema = field_mapping[(field, field.mode)]
- if "$ref" not in json_schema:
- # TODO remove when deprecating Pydantic v1
- # Ref: https://github.com/pydantic/pydantic/blob/d61792cc42c80b13b23e3ffa74bc37ec7c77f7d1/pydantic/schema.py#L207
- json_schema[
- "title"
- ] = field.field_info.title or field.alias.title().replace("_", " ")
- return json_schema
-
- def get_compat_model_name_map(fields: List[ModelField]) -> ModelNameMap:
- return {}
-
- def get_definitions(
- *,
- fields: List[ModelField],
- schema_generator: GenerateJsonSchema,
- model_name_map: ModelNameMap,
- ) -> Tuple[
- Dict[
- Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue
- ],
- Dict[str, Dict[str, Any]],
- ]:
- inputs = [
- (field, field.mode, field._type_adapter.core_schema) for field in fields
- ]
- field_mapping, definitions = schema_generator.generate_definitions(
- inputs=inputs
- )
- return field_mapping, definitions # type: ignore[return-value]
-
- def is_scalar_field(field: ModelField) -> bool:
- from fastapi import params
-
- return field_annotation_is_scalar(
- field.field_info.annotation
- ) and not isinstance(field.field_info, params.Body)
-
- def is_sequence_field(field: ModelField) -> bool:
- return field_annotation_is_sequence(field.field_info.annotation)
-
- def is_scalar_sequence_field(field: ModelField) -> bool:
- return field_annotation_is_scalar_sequence(field.field_info.annotation)
-
- def is_bytes_field(field: ModelField) -> bool:
- return is_bytes_or_nonable_bytes_annotation(field.type_)
-
- def is_bytes_sequence_field(field: ModelField) -> bool:
- return is_bytes_sequence_annotation(field.type_)
-
- def copy_field_info(*, field_info: FieldInfo, annotation: Any) -> FieldInfo:
- return type(field_info).from_annotation(annotation)
-
- def serialize_sequence_value(*, field: ModelField, value: Any) -> Sequence[Any]:
- origin_type = (
- get_origin(field.field_info.annotation) or field.field_info.annotation
- )
- assert issubclass(origin_type, sequence_types) # type: ignore[arg-type]
- return sequence_annotation_to_type[origin_type](value) # type: ignore[no-any-return]
-
- def get_missing_field_error(loc: Tuple[str, ...]) -> Dict[str, Any]:
- error = ValidationError.from_exception_data(
- "Field required", [{"type": "missing", "loc": loc, "input": {}}]
- ).errors()[0]
- error["input"] = None
- return error # type: ignore[return-value]
-
- def create_body_model(
- *, fields: Sequence[ModelField], model_name: str
- ) -> Type[BaseModel]:
- field_params = {f.name: (f.field_info.annotation, f.field_info) for f in fields}
- BodyModel: Type[BaseModel] = create_model(model_name, **field_params) # type: ignore[call-overload]
- return BodyModel
-
-else:
- from fastapi.openapi.constants import REF_PREFIX as REF_PREFIX
- from pydantic import AnyUrl as Url # noqa: F401
- from pydantic import ( # type: ignore[assignment]
- BaseConfig as BaseConfig, # noqa: F401
- )
- from pydantic import ValidationError as ValidationError # noqa: F401
- from pydantic.class_validators import ( # type: ignore[no-redef]
- Validator as Validator, # noqa: F401
- )
- from pydantic.error_wrappers import ( # type: ignore[no-redef]
- ErrorWrapper as ErrorWrapper, # noqa: F401
- )
- from pydantic.errors import MissingError
- from pydantic.fields import ( # type: ignore[attr-defined]
- SHAPE_FROZENSET,
- SHAPE_LIST,
- SHAPE_SEQUENCE,
- SHAPE_SET,
- SHAPE_SINGLETON,
- SHAPE_TUPLE,
- SHAPE_TUPLE_ELLIPSIS,
- )
- from pydantic.fields import FieldInfo as FieldInfo
- from pydantic.fields import ( # type: ignore[no-redef,attr-defined]
- ModelField as ModelField, # noqa: F401
- )
- from pydantic.fields import ( # type: ignore[no-redef,attr-defined]
- Required as Required, # noqa: F401
- )
- from pydantic.fields import ( # type: ignore[no-redef,attr-defined]
- Undefined as Undefined,
- )
- from pydantic.fields import ( # type: ignore[no-redef, attr-defined]
- UndefinedType as UndefinedType, # noqa: F401
- )
- from pydantic.networks import ( # type: ignore[no-redef]
- MultiHostDsn as MultiHostUrl, # noqa: F401
- )
- from pydantic.schema import (
- field_schema,
- get_flat_models_from_fields,
- get_model_name_map,
- model_process_schema,
- )
- from pydantic.schema import ( # type: ignore[no-redef] # noqa: F401
- get_annotation_from_field_info as get_annotation_from_field_info,
- )
- from pydantic.typing import ( # type: ignore[no-redef]
- evaluate_forwardref as evaluate_forwardref, # noqa: F401
- )
- from pydantic.utils import ( # type: ignore[no-redef]
- lenient_issubclass as lenient_issubclass, # noqa: F401
- )
-
- GetJsonSchemaHandler = Any # type: ignore[assignment,misc]
- JsonSchemaValue = Dict[str, Any] # type: ignore[misc]
- CoreSchema = Any # type: ignore[assignment,misc]
-
- sequence_shapes = {
- SHAPE_LIST,
- SHAPE_SET,
- SHAPE_FROZENSET,
- SHAPE_TUPLE,
- SHAPE_SEQUENCE,
- SHAPE_TUPLE_ELLIPSIS,
- }
- sequence_shape_to_type = {
- SHAPE_LIST: list,
- SHAPE_SET: set,
- SHAPE_TUPLE: tuple,
- SHAPE_SEQUENCE: list,
- SHAPE_TUPLE_ELLIPSIS: list,
- }
-
- @dataclass
- class GenerateJsonSchema: # type: ignore[no-redef]
- ref_template: str
-
- class PydanticSchemaGenerationError(Exception): # type: ignore[no-redef]
- pass
-
- def general_plain_validator_function( # type: ignore[misc]
- function: Callable[..., Any],
- *,
- ref: Union[str, None] = None,
- metadata: Any = None,
- serialization: Any = None,
- ) -> Any:
- return {}
-
- def get_model_definitions(
- *,
- flat_models: Set[Union[Type[BaseModel], Type[Enum]]],
- model_name_map: Dict[Union[Type[BaseModel], Type[Enum]], str],
- ) -> Dict[str, Any]:
- definitions: Dict[str, Dict[str, Any]] = {}
- for model in flat_models:
- m_schema, m_definitions, m_nested_models = model_process_schema(
- model, model_name_map=model_name_map, ref_prefix=REF_PREFIX
- )
- definitions.update(m_definitions)
- model_name = model_name_map[model]
- if "description" in m_schema:
- m_schema["description"] = m_schema["description"].split("\f")[0]
- definitions[model_name] = m_schema
- return definitions
-
- def is_pv1_scalar_field(field: ModelField) -> bool:
- from fastapi import params
-
- field_info = field.field_info
- if not (
- field.shape == SHAPE_SINGLETON # type: ignore[attr-defined]
- and not lenient_issubclass(field.type_, BaseModel)
- and not lenient_issubclass(field.type_, dict)
- and not field_annotation_is_sequence(field.type_)
- and not is_dataclass(field.type_)
- and not isinstance(field_info, params.Body)
- ):
- return False
- if field.sub_fields: # type: ignore[attr-defined]
- if not all(
- is_pv1_scalar_field(f)
- for f in field.sub_fields # type: ignore[attr-defined]
- ):
- return False
- return True
-
- def is_pv1_scalar_sequence_field(field: ModelField) -> bool:
- if (field.shape in sequence_shapes) and not lenient_issubclass( # type: ignore[attr-defined]
- field.type_, BaseModel
- ):
- if field.sub_fields is not None: # type: ignore[attr-defined]
- for sub_field in field.sub_fields: # type: ignore[attr-defined]
- if not is_pv1_scalar_field(sub_field):
- return False
- return True
- if _annotation_is_sequence(field.type_):
- return True
- return False
-
- def _normalize_errors(errors: Sequence[Any]) -> List[Dict[str, Any]]:
- use_errors: List[Any] = []
- for error in errors:
- if isinstance(error, ErrorWrapper):
- new_errors = ValidationError( # type: ignore[call-arg]
- errors=[error], model=RequestErrorModel
- ).errors()
- use_errors.extend(new_errors)
- elif isinstance(error, list):
- use_errors.extend(_normalize_errors(error))
- else:
- use_errors.append(error)
- return use_errors
-
- def _model_rebuild(model: Type[BaseModel]) -> None:
- model.update_forward_refs()
-
- def _model_dump(
- model: BaseModel, mode: Literal["json", "python"] = "json", **kwargs: Any
- ) -> Any:
- return model.dict(**kwargs)
-
- def _get_model_config(model: BaseModel) -> Any:
- return model.__config__ # type: ignore[attr-defined]
-
- def get_schema_from_model_field(
- *,
- field: ModelField,
- schema_generator: GenerateJsonSchema,
- model_name_map: ModelNameMap,
- field_mapping: Dict[
- Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue
- ],
- ) -> Dict[str, Any]:
- # This expects that GenerateJsonSchema was already used to generate the definitions
- return field_schema( # type: ignore[no-any-return]
- field, model_name_map=model_name_map, ref_prefix=REF_PREFIX
- )[0]
-
- def get_compat_model_name_map(fields: List[ModelField]) -> ModelNameMap:
- models = get_flat_models_from_fields(fields, known_models=set())
- return get_model_name_map(models) # type: ignore[no-any-return]
-
- def get_definitions(
- *,
- fields: List[ModelField],
- schema_generator: GenerateJsonSchema,
- model_name_map: ModelNameMap,
- ) -> Tuple[
- Dict[
- Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue
- ],
- Dict[str, Dict[str, Any]],
- ]:
- models = get_flat_models_from_fields(fields, known_models=set())
- return {}, get_model_definitions(
- flat_models=models, model_name_map=model_name_map
- )
-
- def is_scalar_field(field: ModelField) -> bool:
- return is_pv1_scalar_field(field)
-
- def is_sequence_field(field: ModelField) -> bool:
- return field.shape in sequence_shapes or _annotation_is_sequence(field.type_) # type: ignore[attr-defined]
-
- def is_scalar_sequence_field(field: ModelField) -> bool:
- return is_pv1_scalar_sequence_field(field)
-
- def is_bytes_field(field: ModelField) -> bool:
- return lenient_issubclass(field.type_, bytes)
-
- def is_bytes_sequence_field(field: ModelField) -> bool:
- return field.shape in sequence_shapes and lenient_issubclass(field.type_, bytes) # type: ignore[attr-defined]
-
- def copy_field_info(*, field_info: FieldInfo, annotation: Any) -> FieldInfo:
- return copy(field_info)
-
- def serialize_sequence_value(*, field: ModelField, value: Any) -> Sequence[Any]:
- return sequence_shape_to_type[field.shape](value) # type: ignore[no-any-return,attr-defined]
-
- def get_missing_field_error(loc: Tuple[str, ...]) -> Dict[str, Any]:
- missing_field_error = ErrorWrapper(MissingError(), loc=loc) # type: ignore[call-arg]
- new_error = ValidationError([missing_field_error], RequestErrorModel)
- return new_error.errors()[0] # type: ignore[return-value]
-
- def create_body_model(
- *, fields: Sequence[ModelField], model_name: str
- ) -> Type[BaseModel]:
- BodyModel = create_model(model_name)
- for f in fields:
- BodyModel.__fields__[f.name] = f # type: ignore[index]
- return BodyModel
-
-
-def _regenerate_error_with_loc(
- *, errors: Sequence[Any], loc_prefix: Tuple[Union[str, int], ...]
-) -> List[Dict[str, Any]]:
- updated_loc_errors: List[Any] = [
- {**err, "loc": loc_prefix + err.get("loc", ())}
- for err in _normalize_errors(errors)
- ]
-
- return updated_loc_errors
-
-
-def _annotation_is_sequence(annotation: Union[Type[Any], None]) -> bool:
- if lenient_issubclass(annotation, (str, bytes)):
- return False
- return lenient_issubclass(annotation, sequence_types)
-
-
-def field_annotation_is_sequence(annotation: Union[Type[Any], None]) -> bool:
- return _annotation_is_sequence(annotation) or _annotation_is_sequence(
- get_origin(annotation)
- )
-
-
-def value_is_sequence(value: Any) -> bool:
- return isinstance(value, sequence_types) and not isinstance(value, (str, bytes)) # type: ignore[arg-type]
-
-
-def _annotation_is_complex(annotation: Union[Type[Any], None]) -> bool:
- return (
- lenient_issubclass(annotation, (BaseModel, Mapping, UploadFile))
- or _annotation_is_sequence(annotation)
- or is_dataclass(annotation)
- )
-
-
-def field_annotation_is_complex(annotation: Union[Type[Any], None]) -> bool:
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- return any(field_annotation_is_complex(arg) for arg in get_args(annotation))
-
- return (
- _annotation_is_complex(annotation)
- or _annotation_is_complex(origin)
- or hasattr(origin, "__pydantic_core_schema__")
- or hasattr(origin, "__get_pydantic_core_schema__")
- )
-
-
-def field_annotation_is_scalar(annotation: Any) -> bool:
- # handle Ellipsis here to make tuple[int, ...] work nicely
- return annotation is Ellipsis or not field_annotation_is_complex(annotation)
-
-
-def field_annotation_is_scalar_sequence(annotation: Union[Type[Any], None]) -> bool:
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- at_least_one_scalar_sequence = False
- for arg in get_args(annotation):
- if field_annotation_is_scalar_sequence(arg):
- at_least_one_scalar_sequence = True
- continue
- elif not field_annotation_is_scalar(arg):
- return False
- return at_least_one_scalar_sequence
- return field_annotation_is_sequence(annotation) and all(
- field_annotation_is_scalar(sub_annotation)
- for sub_annotation in get_args(annotation)
- )
-
-
-def is_bytes_or_nonable_bytes_annotation(annotation: Any) -> bool:
- if lenient_issubclass(annotation, bytes):
- return True
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- for arg in get_args(annotation):
- if lenient_issubclass(arg, bytes):
- return True
- return False
-
-
-def is_uploadfile_or_nonable_uploadfile_annotation(annotation: Any) -> bool:
- if lenient_issubclass(annotation, UploadFile):
- return True
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- for arg in get_args(annotation):
- if lenient_issubclass(arg, UploadFile):
- return True
- return False
-
-
-def is_bytes_sequence_annotation(annotation: Any) -> bool:
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- at_least_one = False
- for arg in get_args(annotation):
- if is_bytes_sequence_annotation(arg):
- at_least_one = True
- continue
- return at_least_one
- return field_annotation_is_sequence(annotation) and all(
- is_bytes_or_nonable_bytes_annotation(sub_annotation)
- for sub_annotation in get_args(annotation)
- )
-
-
-def is_uploadfile_sequence_annotation(annotation: Any) -> bool:
- origin = get_origin(annotation)
- if origin is Union or origin is UnionType:
- at_least_one = False
- for arg in get_args(annotation):
- if is_uploadfile_sequence_annotation(arg):
- at_least_one = True
- continue
- return at_least_one
- return field_annotation_is_sequence(annotation) and all(
- is_uploadfile_or_nonable_uploadfile_annotation(sub_annotation)
- for sub_annotation in get_args(annotation)
- )
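The annotation helpers at the bottom of the module (`field_annotation_is_sequence`, `field_annotation_is_scalar`, and friends) are shared by both the Pydantic v1 and v2 branches above. A hedged sketch of what they report for a few annotations; note that `fastapi._compat` is a private module, so importing it directly is an assumption about this vendored copy rather than a supported API:

```python
from typing import List, Optional
from fastapi._compat import (
    field_annotation_is_scalar,
    field_annotation_is_scalar_sequence,
    field_annotation_is_sequence,
    is_bytes_sequence_annotation,
)

print(field_annotation_is_sequence(List[int]))               # True
print(field_annotation_is_scalar(int))                       # True
print(field_annotation_is_scalar_sequence(List[str]))        # True
print(field_annotation_is_scalar_sequence(List[dict]))       # False (dict counts as "complex")
print(is_bytes_sequence_annotation(Optional[List[bytes]]))   # True
```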
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py
deleted file mode 100644
index 442bc20e4223827d8e28c9fbb0290dac6f1553dc..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/colorLib/builder.py
+++ /dev/null
@@ -1,659 +0,0 @@
-"""
-colorLib.builder: Build COLR/CPAL tables from scratch
-
-"""
-import collections
-import copy
-import enum
-from functools import partial
-from math import ceil, log
-from typing import (
- Any,
- Dict,
- Generator,
- Iterable,
- List,
- Mapping,
- Optional,
- Sequence,
- Tuple,
- Type,
- TypeVar,
- Union,
-)
-from fontTools.misc.arrayTools import intRect
-from fontTools.misc.fixedTools import fixedToFloat
-from fontTools.misc.treeTools import build_n_ary_tree
-from fontTools.ttLib.tables import C_O_L_R_
-from fontTools.ttLib.tables import C_P_A_L_
-from fontTools.ttLib.tables import _n_a_m_e
-from fontTools.ttLib.tables import otTables as ot
-from fontTools.ttLib.tables.otTables import ExtendMode, CompositeMode
-from .errors import ColorLibError
-from .geometry import round_start_circle_stable_containment
-from .table_builder import BuildCallback, TableBuilder
-
-
-# TODO move type aliases to colorLib.types?
-T = TypeVar("T")
-_Kwargs = Mapping[str, Any]
-_PaintInput = Union[int, _Kwargs, ot.Paint, Tuple[str, "_PaintInput"]]
-_PaintInputList = Sequence[_PaintInput]
-_ColorGlyphsDict = Dict[str, Union[_PaintInputList, _PaintInput]]
-_ColorGlyphsV0Dict = Dict[str, Sequence[Tuple[str, int]]]
-_ClipBoxInput = Union[
- Tuple[int, int, int, int, int], # format 1, variable
- Tuple[int, int, int, int], # format 0, non-variable
- ot.ClipBox,
-]
-
-
-MAX_PAINT_COLR_LAYER_COUNT = 255
-_DEFAULT_ALPHA = 1.0
-_MAX_REUSE_LEN = 32
-
-
-def _beforeBuildPaintRadialGradient(paint, source):
- x0 = source["x0"]
- y0 = source["y0"]
- r0 = source["r0"]
- x1 = source["x1"]
- y1 = source["y1"]
- r1 = source["r1"]
-
- # TODO apparently no builder_test confirms this works (?)
-
- # avoid abrupt change after rounding when c0 is near c1's perimeter
- c = round_start_circle_stable_containment((x0, y0), r0, (x1, y1), r1)
- x0, y0 = c.centre
- r0 = c.radius
-
- # update source to ensure paint is built with corrected values
- source["x0"] = x0
- source["y0"] = y0
- source["r0"] = r0
- source["x1"] = x1
- source["y1"] = y1
- source["r1"] = r1
-
- return paint, source
-
-
-def _defaultColorStop():
- colorStop = ot.ColorStop()
- colorStop.Alpha = _DEFAULT_ALPHA
- return colorStop
-
-
-def _defaultVarColorStop():
- colorStop = ot.VarColorStop()
- colorStop.Alpha = _DEFAULT_ALPHA
- return colorStop
-
-
-def _defaultColorLine():
- colorLine = ot.ColorLine()
- colorLine.Extend = ExtendMode.PAD
- return colorLine
-
-
-def _defaultVarColorLine():
- colorLine = ot.VarColorLine()
- colorLine.Extend = ExtendMode.PAD
- return colorLine
-
-
-def _defaultPaintSolid():
- paint = ot.Paint()
- paint.Alpha = _DEFAULT_ALPHA
- return paint
-
-
-def _buildPaintCallbacks():
- return {
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintRadialGradient,
- ): _beforeBuildPaintRadialGradient,
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintVarRadialGradient,
- ): _beforeBuildPaintRadialGradient,
- (BuildCallback.CREATE_DEFAULT, ot.ColorStop): _defaultColorStop,
- (BuildCallback.CREATE_DEFAULT, ot.VarColorStop): _defaultVarColorStop,
- (BuildCallback.CREATE_DEFAULT, ot.ColorLine): _defaultColorLine,
- (BuildCallback.CREATE_DEFAULT, ot.VarColorLine): _defaultVarColorLine,
- (
- BuildCallback.CREATE_DEFAULT,
- ot.Paint,
- ot.PaintFormat.PaintSolid,
- ): _defaultPaintSolid,
- (
- BuildCallback.CREATE_DEFAULT,
- ot.Paint,
- ot.PaintFormat.PaintVarSolid,
- ): _defaultPaintSolid,
- }
-
-
-def populateCOLRv0(
- table: ot.COLR,
- colorGlyphsV0: _ColorGlyphsV0Dict,
- glyphMap: Optional[Mapping[str, int]] = None,
-):
- """Build v0 color layers and add to existing COLR table.
-
- Args:
- table: a raw ``otTables.COLR()`` object (not ttLib's ``table_C_O_L_R_``).
- colorGlyphsV0: map of base glyph names to lists of (layer glyph names,
- color palette index) tuples. Can be empty.
- glyphMap: a map from glyph names to glyph indices, as returned from
- ``TTFont.getReverseGlyphMap()``, to optionally sort base records by GID.
- """
- if glyphMap is not None:
- colorGlyphItems = sorted(
- colorGlyphsV0.items(), key=lambda item: glyphMap[item[0]]
- )
- else:
- colorGlyphItems = colorGlyphsV0.items()
- baseGlyphRecords = []
- layerRecords = []
- for baseGlyph, layers in colorGlyphItems:
- baseRec = ot.BaseGlyphRecord()
- baseRec.BaseGlyph = baseGlyph
- baseRec.FirstLayerIndex = len(layerRecords)
- baseRec.NumLayers = len(layers)
- baseGlyphRecords.append(baseRec)
-
- for layerGlyph, paletteIndex in layers:
- layerRec = ot.LayerRecord()
- layerRec.LayerGlyph = layerGlyph
- layerRec.PaletteIndex = paletteIndex
- layerRecords.append(layerRec)
-
- table.BaseGlyphRecordArray = table.LayerRecordArray = None
- if baseGlyphRecords:
- table.BaseGlyphRecordArray = ot.BaseGlyphRecordArray()
- table.BaseGlyphRecordArray.BaseGlyphRecord = baseGlyphRecords
- if layerRecords:
- table.LayerRecordArray = ot.LayerRecordArray()
- table.LayerRecordArray.LayerRecord = layerRecords
- table.BaseGlyphRecordCount = len(baseGlyphRecords)
- table.LayerRecordCount = len(layerRecords)
-
-
-def buildCOLR(
- colorGlyphs: _ColorGlyphsDict,
- version: Optional[int] = None,
- *,
- glyphMap: Optional[Mapping[str, int]] = None,
- varStore: Optional[ot.VarStore] = None,
- varIndexMap: Optional[ot.DeltaSetIndexMap] = None,
- clipBoxes: Optional[Dict[str, _ClipBoxInput]] = None,
- allowLayerReuse: bool = True,
-) -> C_O_L_R_.table_C_O_L_R_:
- """Build COLR table from color layers mapping.
-
- Args:
-
- colorGlyphs: map of base glyph name to, either list of (layer glyph name,
- color palette index) tuples for COLRv0; or a single ``Paint`` (dict) or
- list of ``Paint`` for COLRv1.
- version: the version of COLR table. If None, the version is determined
- by the presence of COLRv1 paints or variation data (varStore), which
- require version 1; otherwise, if all base glyphs use only simple color
- layers, version 0 is used.
- glyphMap: a map from glyph names to glyph indices, as returned from
- TTFont.getReverseGlyphMap(), to optionally sort base records by GID.
-        varStore: Optional ItemVariationStore for deltas associated with v1 layers.
-        varIndexMap: Optional DeltaSetIndexMap for deltas associated with v1 layers.
- clipBoxes: Optional map of base glyph name to clip box 4- or 5-tuples:
- (xMin, yMin, xMax, yMax) or (xMin, yMin, xMax, yMax, varIndexBase).
-
- Returns:
- A new COLR table.
- """
- self = C_O_L_R_.table_C_O_L_R_()
-
- if varStore is not None and version == 0:
- raise ValueError("Can't add VarStore to COLRv0")
-
- if version in (None, 0) and not varStore:
- # split color glyphs into v0 and v1 and encode separately
- colorGlyphsV0, colorGlyphsV1 = _split_color_glyphs_by_version(colorGlyphs)
- if version == 0 and colorGlyphsV1:
- raise ValueError("Can't encode COLRv1 glyphs in COLRv0")
- else:
-        # unless v1 was explicitly requested or we have variations, in which case
-        # we encode all color glyphs as v1
- colorGlyphsV0, colorGlyphsV1 = {}, colorGlyphs
-
- colr = ot.COLR()
-
- populateCOLRv0(colr, colorGlyphsV0, glyphMap)
-
- colr.LayerList, colr.BaseGlyphList = buildColrV1(
- colorGlyphsV1,
- glyphMap,
- allowLayerReuse=allowLayerReuse,
- )
-
- if version is None:
- version = 1 if (varStore or colorGlyphsV1) else 0
- elif version not in (0, 1):
- raise NotImplementedError(version)
- self.version = colr.Version = version
-
- if version == 0:
- self.ColorLayers = self._decompileColorLayersV0(colr)
- else:
- colr.ClipList = buildClipList(clipBoxes) if clipBoxes else None
- colr.VarIndexMap = varIndexMap
- colr.VarStore = varStore
- self.table = colr
-
- return self
-
-
-def buildClipList(clipBoxes: Dict[str, _ClipBoxInput]) -> ot.ClipList:
- clipList = ot.ClipList()
- clipList.Format = 1
- clipList.clips = {name: buildClipBox(box) for name, box in clipBoxes.items()}
- return clipList
-
-
-def buildClipBox(clipBox: _ClipBoxInput) -> ot.ClipBox:
- if isinstance(clipBox, ot.ClipBox):
- return clipBox
- n = len(clipBox)
- clip = ot.ClipBox()
- if n not in (4, 5):
- raise ValueError(f"Invalid ClipBox: expected 4 or 5 values, found {n}")
- clip.xMin, clip.yMin, clip.xMax, clip.yMax = intRect(clipBox[:4])
- clip.Format = int(n == 5) + 1
- if n == 5:
- clip.VarIndexBase = int(clipBox[4])
- return clip
-
-
-class ColorPaletteType(enum.IntFlag):
- USABLE_WITH_LIGHT_BACKGROUND = 0x0001
- USABLE_WITH_DARK_BACKGROUND = 0x0002
-
- @classmethod
- def _missing_(cls, value):
- # enforce reserved bits
- if isinstance(value, int) and (value < 0 or value & 0xFFFC != 0):
- raise ValueError(f"{value} is not a valid {cls.__name__}")
- return super()._missing_(value)
-
-
-# None, 'abc' or {'en': 'abc', 'de': 'xyz'}
-_OptionalLocalizedString = Union[None, str, Dict[str, str]]
-
-
-def buildPaletteLabels(
- labels: Iterable[_OptionalLocalizedString], nameTable: _n_a_m_e.table__n_a_m_e
-) -> List[Optional[int]]:
- return [
- nameTable.addMultilingualName(l, mac=False)
- if isinstance(l, dict)
- else C_P_A_L_.table_C_P_A_L_.NO_NAME_ID
- if l is None
- else nameTable.addMultilingualName({"en": l}, mac=False)
- for l in labels
- ]
-
-
-def buildCPAL(
- palettes: Sequence[Sequence[Tuple[float, float, float, float]]],
- paletteTypes: Optional[Sequence[ColorPaletteType]] = None,
- paletteLabels: Optional[Sequence[_OptionalLocalizedString]] = None,
- paletteEntryLabels: Optional[Sequence[_OptionalLocalizedString]] = None,
- nameTable: Optional[_n_a_m_e.table__n_a_m_e] = None,
-) -> C_P_A_L_.table_C_P_A_L_:
- """Build CPAL table from list of color palettes.
-
- Args:
- palettes: list of lists of colors encoded as tuples of (R, G, B, A) floats
- in the range [0..1].
- paletteTypes: optional list of ColorPaletteType, one for each palette.
-        paletteLabels: optional list of palette labels. Each label can be either:
-            None (no label), a string (for default English labels), or a
- localized string (as a dict keyed with BCP47 language codes).
- paletteEntryLabels: optional list of palette entry labels, one for each
- palette entry (see paletteLabels).
- nameTable: optional name table where to store palette and palette entry
- labels. Required if either paletteLabels or paletteEntryLabels is set.
-
- Return:
-        A new CPAL table: version 1 if custom palette types or labels are specified, otherwise version 0.
- """
- if len({len(p) for p in palettes}) != 1:
- raise ColorLibError("color palettes have different lengths")
-
- if (paletteLabels or paletteEntryLabels) and not nameTable:
- raise TypeError(
- "nameTable is required if palette or palette entries have labels"
- )
-
- cpal = C_P_A_L_.table_C_P_A_L_()
- cpal.numPaletteEntries = len(palettes[0])
-
- cpal.palettes = []
- for i, palette in enumerate(palettes):
- colors = []
- for j, color in enumerate(palette):
- if not isinstance(color, tuple) or len(color) != 4:
- raise ColorLibError(
- f"In palette[{i}][{j}]: expected (R, G, B, A) tuple, got {color!r}"
- )
- if any(v > 1 or v < 0 for v in color):
- raise ColorLibError(
- f"palette[{i}][{j}] has invalid out-of-range [0..1] color: {color!r}"
- )
- # input colors are RGBA, CPAL encodes them as BGRA
- red, green, blue, alpha = color
- colors.append(
- C_P_A_L_.Color(*(round(v * 255) for v in (blue, green, red, alpha)))
- )
- cpal.palettes.append(colors)
-
- if any(v is not None for v in (paletteTypes, paletteLabels, paletteEntryLabels)):
- cpal.version = 1
-
- if paletteTypes is not None:
- if len(paletteTypes) != len(palettes):
- raise ColorLibError(
- f"Expected {len(palettes)} paletteTypes, got {len(paletteTypes)}"
- )
- cpal.paletteTypes = [ColorPaletteType(t).value for t in paletteTypes]
- else:
- cpal.paletteTypes = [C_P_A_L_.table_C_P_A_L_.DEFAULT_PALETTE_TYPE] * len(
- palettes
- )
-
- if paletteLabels is not None:
- if len(paletteLabels) != len(palettes):
- raise ColorLibError(
- f"Expected {len(palettes)} paletteLabels, got {len(paletteLabels)}"
- )
- cpal.paletteLabels = buildPaletteLabels(paletteLabels, nameTable)
- else:
- cpal.paletteLabels = [C_P_A_L_.table_C_P_A_L_.NO_NAME_ID] * len(palettes)
-
- if paletteEntryLabels is not None:
- if len(paletteEntryLabels) != cpal.numPaletteEntries:
- raise ColorLibError(
- f"Expected {cpal.numPaletteEntries} paletteEntryLabels, "
- f"got {len(paletteEntryLabels)}"
- )
- cpal.paletteEntryLabels = buildPaletteLabels(paletteEntryLabels, nameTable)
- else:
- cpal.paletteEntryLabels = [
- C_P_A_L_.table_C_P_A_L_.NO_NAME_ID
- ] * cpal.numPaletteEntries
- else:
- cpal.version = 0
-
- return cpal
-
-
-# COLR v1 tables
-# See draft proposal at: https://github.com/googlefonts/colr-gradients-spec
-
-
-def _is_colrv0_layer(layer: Any) -> bool:
- # Consider as COLRv0 layer any sequence of length 2 (be it tuple or list) in which
- # the first element is a str (the layerGlyph) and the second element is an int
- # (CPAL paletteIndex).
- # https://github.com/googlefonts/ufo2ft/issues/426
- try:
- layerGlyph, paletteIndex = layer
- except (TypeError, ValueError):
- return False
- else:
- return isinstance(layerGlyph, str) and isinstance(paletteIndex, int)
-
-
-def _split_color_glyphs_by_version(
- colorGlyphs: _ColorGlyphsDict,
-) -> Tuple[_ColorGlyphsV0Dict, _ColorGlyphsDict]:
- colorGlyphsV0 = {}
- colorGlyphsV1 = {}
- for baseGlyph, layers in colorGlyphs.items():
- if all(_is_colrv0_layer(l) for l in layers):
- colorGlyphsV0[baseGlyph] = layers
- else:
- colorGlyphsV1[baseGlyph] = layers
-
- # sanity check
- assert set(colorGlyphs) == (set(colorGlyphsV0) | set(colorGlyphsV1))
-
- return colorGlyphsV0, colorGlyphsV1
-
-
-def _reuse_ranges(num_layers: int) -> Generator[Tuple[int, int], None, None]:
- # TODO feels like something itertools might have already
- for lbound in range(num_layers):
- # Reuse of very large #s of layers is relatively unlikely
- # +2: we want sequences of at least 2
- # otData handles single-record duplication
- for ubound in range(
- lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN)
- ):
- yield (lbound, ubound)
-
-
-class LayerReuseCache:
- reusePool: Mapping[Tuple[Any, ...], int]
- tuples: Mapping[int, Tuple[Any, ...]]
- keepAlive: List[ot.Paint] # we need id to remain valid
-
- def __init__(self):
- self.reusePool = {}
- self.tuples = {}
- self.keepAlive = []
-
- def _paint_tuple(self, paint: ot.Paint):
- # start simple, who even cares about cyclic graphs or interesting field types
- def _tuple_safe(value):
- if isinstance(value, enum.Enum):
- return value
- elif hasattr(value, "__dict__"):
- return tuple(
- (k, _tuple_safe(v)) for k, v in sorted(value.__dict__.items())
- )
- elif isinstance(value, collections.abc.MutableSequence):
- return tuple(_tuple_safe(e) for e in value)
- return value
-
- # Cache the tuples for individual Paint instead of the whole sequence
- # because the seq could be a transient slice
- result = self.tuples.get(id(paint), None)
- if result is None:
- result = _tuple_safe(paint)
- self.tuples[id(paint)] = result
- self.keepAlive.append(paint)
- return result
-
- def _as_tuple(self, paints: Sequence[ot.Paint]) -> Tuple[Any, ...]:
- return tuple(self._paint_tuple(p) for p in paints)
-
- def try_reuse(self, layers: List[ot.Paint]) -> List[ot.Paint]:
- found_reuse = True
- while found_reuse:
- found_reuse = False
-
- ranges = sorted(
- _reuse_ranges(len(layers)),
- key=lambda t: (t[1] - t[0], t[1], t[0]),
- reverse=True,
- )
- for lbound, ubound in ranges:
- reuse_lbound = self.reusePool.get(
- self._as_tuple(layers[lbound:ubound]), -1
- )
- if reuse_lbound == -1:
- continue
- new_slice = ot.Paint()
- new_slice.Format = int(ot.PaintFormat.PaintColrLayers)
- new_slice.NumLayers = ubound - lbound
- new_slice.FirstLayerIndex = reuse_lbound
- layers = layers[:lbound] + [new_slice] + layers[ubound:]
- found_reuse = True
- break
- return layers
-
- def add(self, layers: List[ot.Paint], first_layer_index: int):
- for lbound, ubound in _reuse_ranges(len(layers)):
- self.reusePool[self._as_tuple(layers[lbound:ubound])] = (
- lbound + first_layer_index
- )
-
-
-class LayerListBuilder:
- layers: List[ot.Paint]
- cache: LayerReuseCache
- allowLayerReuse: bool
-
- def __init__(self, *, allowLayerReuse=True):
- self.layers = []
- if allowLayerReuse:
- self.cache = LayerReuseCache()
- else:
- self.cache = None
-
- # We need to intercept construction of PaintColrLayers
- callbacks = _buildPaintCallbacks()
- callbacks[
- (
- BuildCallback.BEFORE_BUILD,
- ot.Paint,
- ot.PaintFormat.PaintColrLayers,
- )
- ] = self._beforeBuildPaintColrLayers
- self.tableBuilder = TableBuilder(callbacks)
-
- # COLR layers is unusual in that it modifies shared state
- # so we need a callback into an object
- def _beforeBuildPaintColrLayers(self, dest, source):
-        # Sketchy gymnastics: a sequence input will have dropped its layers
- # into NumLayers; get it back
- if isinstance(source.get("NumLayers", None), collections.abc.Sequence):
- layers = source["NumLayers"]
- else:
- layers = source["Layers"]
-
- # Convert maps seqs or whatever into typed objects
- layers = [self.buildPaint(l) for l in layers]
-
- # No reason to have a colr layers with just one entry
- if len(layers) == 1:
- return layers[0], {}
-
- if self.cache is not None:
- # Look for reuse, with preference to longer sequences
- # This may make the layer list smaller
- layers = self.cache.try_reuse(layers)
-
- # The layer list is now final; if it's too big we need to tree it
- is_tree = len(layers) > MAX_PAINT_COLR_LAYER_COUNT
- layers = build_n_ary_tree(layers, n=MAX_PAINT_COLR_LAYER_COUNT)
-
- # We now have a tree of sequences with Paint leaves.
- # Convert the sequences into PaintColrLayers.
- def listToColrLayers(layer):
- if isinstance(layer, collections.abc.Sequence):
- return self.buildPaint(
- {
- "Format": ot.PaintFormat.PaintColrLayers,
- "Layers": [listToColrLayers(l) for l in layer],
- }
- )
- return layer
-
- layers = [listToColrLayers(l) for l in layers]
-
- # No reason to have a colr layers with just one entry
- if len(layers) == 1:
- return layers[0], {}
-
- paint = ot.Paint()
- paint.Format = int(ot.PaintFormat.PaintColrLayers)
- paint.NumLayers = len(layers)
- paint.FirstLayerIndex = len(self.layers)
- self.layers.extend(layers)
-
- # Register our parts for reuse provided we aren't a tree
-        # If we are a tree, the leaves were registered for reuse and that will suffice
- if self.cache is not None and not is_tree:
- self.cache.add(layers, paint.FirstLayerIndex)
-
- # we've fully built dest; empty source prevents generalized build from kicking in
- return paint, {}
-
- def buildPaint(self, paint: _PaintInput) -> ot.Paint:
- return self.tableBuilder.build(ot.Paint, paint)
-
- def build(self) -> Optional[ot.LayerList]:
- if not self.layers:
- return None
- layers = ot.LayerList()
- layers.LayerCount = len(self.layers)
- layers.Paint = self.layers
- return layers
-
-
-def buildBaseGlyphPaintRecord(
- baseGlyph: str, layerBuilder: LayerListBuilder, paint: _PaintInput
-) -> ot.BaseGlyphList:
- self = ot.BaseGlyphPaintRecord()
- self.BaseGlyph = baseGlyph
- self.Paint = layerBuilder.buildPaint(paint)
- return self
-
-
-def _format_glyph_errors(errors: Mapping[str, Exception]) -> str:
- lines = []
- for baseGlyph, error in sorted(errors.items()):
- lines.append(f" {baseGlyph} => {type(error).__name__}: {error}")
- return "\n".join(lines)
-
-
-def buildColrV1(
- colorGlyphs: _ColorGlyphsDict,
- glyphMap: Optional[Mapping[str, int]] = None,
- *,
- allowLayerReuse: bool = True,
-) -> Tuple[Optional[ot.LayerList], ot.BaseGlyphList]:
- if glyphMap is not None:
- colorGlyphItems = sorted(
- colorGlyphs.items(), key=lambda item: glyphMap[item[0]]
- )
- else:
- colorGlyphItems = colorGlyphs.items()
-
- errors = {}
- baseGlyphs = []
- layerBuilder = LayerListBuilder(allowLayerReuse=allowLayerReuse)
- for baseGlyph, paint in colorGlyphItems:
- try:
- baseGlyphs.append(buildBaseGlyphPaintRecord(baseGlyph, layerBuilder, paint))
-
- except (ColorLibError, OverflowError, ValueError, TypeError) as e:
- errors[baseGlyph] = e
-
- if errors:
- failed_glyphs = _format_glyph_errors(errors)
- exc = ColorLibError(f"Failed to build BaseGlyphList:\n{failed_glyphs}")
- exc.errors = errors
- raise exc from next(iter(errors.values()))
-
- layers = layerBuilder.build()
- glyphs = ot.BaseGlyphList()
- glyphs.BaseGlyphCount = len(baseGlyphs)
- glyphs.BaseGlyphPaintRecord = baseGlyphs
- return (layers, glyphs)
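Since the file above removes the public `buildCOLR`/`buildCPAL` entry points, a small usage sketch based on their docstrings may help; the glyph names, layers and colors below are illustrative placeholders, and the snippet assumes fontTools is installed.

```python
from fontTools.colorLib.builder import buildCOLR, buildCPAL

# COLRv0-style input: base glyph -> [(layer glyph, palette index), ...]
colr = buildCOLR({"A": [("A.layer0", 0), ("A.layer1", 1)]})

# One palette with two (R, G, B, A) entries in the 0..1 range.
cpal = buildCPAL([[(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)]])

print(colr.version, cpal.version)  # 0 0 for this simple v0-only, label-free input
```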
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_writers.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_writers.py
deleted file mode 100644
index 939cdb912a9debaea07fbf3a9ac04549c44d077c..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/h11/_writers.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# Code to write HTTP data
-#
-# Strategy: each writer takes an event + a write-some-bytes function, which it
-# calls.
-#
-# WRITERS is a dict describing how to pick a writer. It maps states to either:
-# - a writer
-# - or, for body writers, a dict of framing-dependent writer factories
-
-from typing import Any, Callable, Dict, List, Tuple, Type, Union
-
-from ._events import Data, EndOfMessage, Event, InformationalResponse, Request, Response
-from ._headers import Headers
-from ._state import CLIENT, IDLE, SEND_BODY, SEND_RESPONSE, SERVER
-from ._util import LocalProtocolError, Sentinel
-
-__all__ = ["WRITERS"]
-
-Writer = Callable[[bytes], Any]
-
-
-def write_headers(headers: Headers, write: Writer) -> None:
- # "Since the Host field-value is critical information for handling a
- # request, a user agent SHOULD generate Host as the first header field
- # following the request-line." - RFC 7230
- raw_items = headers._full_items
- for raw_name, name, value in raw_items:
- if name == b"host":
- write(b"%s: %s\r\n" % (raw_name, value))
- for raw_name, name, value in raw_items:
- if name != b"host":
- write(b"%s: %s\r\n" % (raw_name, value))
- write(b"\r\n")
-
-
-def write_request(request: Request, write: Writer) -> None:
- if request.http_version != b"1.1":
- raise LocalProtocolError("I only send HTTP/1.1")
- write(b"%s %s HTTP/1.1\r\n" % (request.method, request.target))
- write_headers(request.headers, write)
-
-
-# Shared between InformationalResponse and Response
-def write_any_response(
- response: Union[InformationalResponse, Response], write: Writer
-) -> None:
- if response.http_version != b"1.1":
- raise LocalProtocolError("I only send HTTP/1.1")
- status_bytes = str(response.status_code).encode("ascii")
- # We don't bother sending ascii status messages like "OK"; they're
- # optional and ignored by the protocol. (But the space after the numeric
- # status code is mandatory.)
- #
- # XX FIXME: could at least make an effort to pull out the status message
- # from stdlib's http.HTTPStatus table. Or maybe just steal their enums
- # (either by import or copy/paste). We already accept them as status codes
- # since they're of type IntEnum < int.
- write(b"HTTP/1.1 %s %s\r\n" % (status_bytes, response.reason))
- write_headers(response.headers, write)
-
-
-class BodyWriter:
- def __call__(self, event: Event, write: Writer) -> None:
- if type(event) is Data:
- self.send_data(event.data, write)
- elif type(event) is EndOfMessage:
- self.send_eom(event.headers, write)
- else: # pragma: no cover
- assert False
-
- def send_data(self, data: bytes, write: Writer) -> None:
- pass
-
- def send_eom(self, headers: Headers, write: Writer) -> None:
- pass
-
-
-#
-# These are all careful not to do anything to 'data' except call len(data) and
-# write(data). This allows us to transparently pass-through funny objects,
-# like placeholder objects referring to files on disk that will be sent via
-# sendfile(2).
-#
-class ContentLengthWriter(BodyWriter):
- def __init__(self, length: int) -> None:
- self._length = length
-
- def send_data(self, data: bytes, write: Writer) -> None:
- self._length -= len(data)
- if self._length < 0:
- raise LocalProtocolError("Too much data for declared Content-Length")
- write(data)
-
- def send_eom(self, headers: Headers, write: Writer) -> None:
- if self._length != 0:
- raise LocalProtocolError("Too little data for declared Content-Length")
- if headers:
- raise LocalProtocolError("Content-Length and trailers don't mix")
-
-
-class ChunkedWriter(BodyWriter):
- def send_data(self, data: bytes, write: Writer) -> None:
- # if we encoded 0-length data in the naive way, it would look like an
- # end-of-message.
- if not data:
- return
- write(b"%x\r\n" % len(data))
- write(data)
- write(b"\r\n")
-
- def send_eom(self, headers: Headers, write: Writer) -> None:
- write(b"0\r\n")
- write_headers(headers, write)
-
-
-class Http10Writer(BodyWriter):
- def send_data(self, data: bytes, write: Writer) -> None:
- write(data)
-
- def send_eom(self, headers: Headers, write: Writer) -> None:
- if headers:
- raise LocalProtocolError("can't send trailers to HTTP/1.0 client")
- # no need to close the socket ourselves, that will be taken care of by
- # Connection: close machinery
-
-
-WritersType = Dict[
- Union[Tuple[Type[Sentinel], Type[Sentinel]], Type[Sentinel]],
- Union[
- Dict[str, Type[BodyWriter]],
- Callable[[Union[InformationalResponse, Response], Writer], None],
- Callable[[Request, Writer], None],
- ],
-]
-
-WRITERS: WritersType = {
- (CLIENT, IDLE): write_request,
- (SERVER, IDLE): write_any_response,
- (SERVER, SEND_RESPONSE): write_any_response,
- SEND_BODY: {
- "chunked": ChunkedWriter,
- "content-length": ContentLengthWriter,
- "http/1.0": Http10Writer,
- },
-}
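As a standalone illustration of the wire format `ChunkedWriter` above produces: each chunk is a hex-encoded length, CRLF, the payload, CRLF; the body ends with a zero-length chunk followed by the (possibly empty) trailer block.

```python
def chunk(data: bytes) -> bytes:
    # One chunk: hex length, CRLF, payload, CRLF.
    return b"%x\r\n" % len(data) + data + b"\r\n"

body = chunk(b"hello") + chunk(b", world") + b"0\r\n" + b"\r\n"
assert body == b"5\r\nhello\r\n7\r\n, world\r\n0\r\n\r\n"
```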
diff --git a/spaces/Datasculptor/MusicGen/MODEL_CARD.md b/spaces/Datasculptor/MusicGen/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive, transformer-based language model for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details:** See [our paper][arxiv]
-
-**License:** Code is released under MIT; model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates end of songs, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking in diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/scripts/inference.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/scripts/inference.py
deleted file mode 100644
index 185b9b34db85dcd97b9793bd5dbfc9d1ca046549..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/e4e/scripts/inference.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import argparse
-
-import torch
-import numpy as np
-import sys
-import os
-import dlib
-
-sys.path.append(".")
-sys.path.append("..")
-
-from configs import data_configs, paths_config
-from datasets.inference_dataset import InferenceDataset
-from torch.utils.data import DataLoader
-from utils.model_utils import setup_model
-from utils.common import tensor2im
-from utils.alignment import align_face
-from PIL import Image
-
-
-def main(args):
- net, opts = setup_model(args.ckpt, device)
- is_cars = 'cars_' in opts.dataset_type
- generator = net.decoder
- generator.eval()
- args, data_loader = setup_data_loader(args, opts)
-
- # Check if latents exist
- latents_file_path = os.path.join(args.save_dir, 'latents.pt')
- if os.path.exists(latents_file_path):
- latent_codes = torch.load(latents_file_path).to(device)
- else:
- latent_codes = get_all_latents(net, data_loader, args.n_sample, is_cars=is_cars)
- torch.save(latent_codes, latents_file_path)
-
- if not args.latents_only:
- generate_inversions(args, generator, latent_codes, is_cars=is_cars)
-
-
-def setup_data_loader(args, opts):
- dataset_args = data_configs.DATASETS[opts.dataset_type]
- transforms_dict = dataset_args['transforms'](opts).get_transforms()
- images_path = args.images_dir if args.images_dir is not None else dataset_args['test_source_root']
- print(f"images path: {images_path}")
- align_function = None
- if args.align:
- align_function = run_alignment
- test_dataset = InferenceDataset(root=images_path,
- transform=transforms_dict['transform_test'],
- preprocess=align_function,
- opts=opts)
-
- data_loader = DataLoader(test_dataset,
- batch_size=args.batch,
- shuffle=False,
- num_workers=2,
- drop_last=True)
-
- print(f'dataset length: {len(test_dataset)}')
-
- if args.n_sample is None:
- args.n_sample = len(test_dataset)
- return args, data_loader
-
-
-def get_latents(net, x, is_cars=False):
- codes = net.encoder(x)
- if net.opts.start_from_latent_avg:
- if codes.ndim == 2:
- codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :]
- else:
- codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)
- if codes.shape[1] == 18 and is_cars:
- codes = codes[:, :16, :]
- return codes
-
-
-def get_all_latents(net, data_loader, n_images=None, is_cars=False):
- all_latents = []
- i = 0
- with torch.no_grad():
- for batch in data_loader:
- if n_images is not None and i > n_images:
- break
- x = batch
- inputs = x.to(device).float()
- latents = get_latents(net, inputs, is_cars)
- all_latents.append(latents)
- i += len(latents)
- return torch.cat(all_latents)
-
-
-def save_image(img, save_dir, idx):
- result = tensor2im(img)
- im_save_path = os.path.join(save_dir, f"{idx:05d}.jpg")
- Image.fromarray(np.array(result)).save(im_save_path)
-
-
-@torch.no_grad()
-def generate_inversions(args, g, latent_codes, is_cars):
- print('Saving inversion images')
- inversions_directory_path = os.path.join(args.save_dir, 'inversions')
- os.makedirs(inversions_directory_path, exist_ok=True)
- for i in range(args.n_sample):
- imgs, _ = g([latent_codes[i].unsqueeze(0)], input_is_latent=True, randomize_noise=False, return_latents=True)
- if is_cars:
- imgs = imgs[:, :, 64:448, :]
- save_image(imgs[0], inversions_directory_path, i + 1)
-
-
-def run_alignment(image_path):
- predictor = dlib.shape_predictor(paths_config.model_paths['shape_predictor'])
- aligned_image = align_face(filepath=image_path, predictor=predictor)
- print("Aligned image has shape: {}".format(aligned_image.size))
- return aligned_image
-
-
-if __name__ == "__main__":
- device = "cuda"
-
- parser = argparse.ArgumentParser(description="Inference")
- parser.add_argument("--images_dir", type=str, default=None,
- help="The directory of the images to be inverted")
- parser.add_argument("--save_dir", type=str, default=None,
-                        help="The directory to save the latent codes and inversion images. (default: images_dir)")
- parser.add_argument("--batch", type=int, default=1, help="batch size for the generator")
- parser.add_argument("--n_sample", type=int, default=None, help="number of the samples to infer.")
- parser.add_argument("--latents_only", action="store_true", help="infer only the latent codes of the directory")
- parser.add_argument("--align", action="store_true", help="align face images before inference")
- parser.add_argument("ckpt", metavar="CHECKPOINT", help="path to generator checkpoint")
-
- args = parser.parse_args()
- main(args)
diff --git a/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/app.py b/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/app.py
deleted file mode 100644
index 0160420876923d89f2ab5fccb9f4d13725e29972..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/stabilityai-stable-diffusion-2-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-2-1").launch()
\ No newline at end of file
diff --git a/spaces/Deepak107/NSFW-Detection/README.md b/spaces/Deepak107/NSFW-Detection/README.md
deleted file mode 100644
index 21d3bd0a38405e337e8e686c74ad421c881415a0..0000000000000000000000000000000000000000
--- a/spaces/Deepak107/NSFW-Detection/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: NSFW Detection
-emoji: 🌍
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-sdk_version: 3.8
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Dilmurat/bingo/README.md b/spaces/Dilmurat/bingo/README.md
deleted file mode 100644
index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000
--- a/spaces/Dilmurat/bingo/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: bingo
-emoji: 😊
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI, usable inside mainland China, compatible with most Microsoft Bing AI functionality, and deployable on your own infrastructure.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-Please report issues at https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/ECCV2022/bytetrack/tools/convert_crowdhuman_to_coco.py b/spaces/ECCV2022/bytetrack/tools/convert_crowdhuman_to_coco.py
deleted file mode 100644
index 62e0b66788f7625e2fbb5ba420794abf1125aa84..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tools/convert_crowdhuman_to_coco.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import numpy as np
-import json
-from PIL import Image
-
-DATA_PATH = 'datasets/crowdhuman/'
-OUT_PATH = DATA_PATH + 'annotations/'
-SPLITS = ['val', 'train']
-DEBUG = False
-
-def load_func(fpath):
- print('fpath', fpath)
- assert os.path.exists(fpath)
- with open(fpath,'r') as fid:
- lines = fid.readlines()
- records =[json.loads(line.strip('\n')) for line in lines]
- return records
-
-if __name__ == '__main__':
- if not os.path.exists(OUT_PATH):
- os.mkdir(OUT_PATH)
- for split in SPLITS:
- data_path = DATA_PATH + split
- out_path = OUT_PATH + '{}.json'.format(split)
- out = {'images': [], 'annotations': [], 'categories': [{'id': 1, 'name': 'person'}]}
- ann_path = DATA_PATH + 'annotation_{}.odgt'.format(split)
- anns_data = load_func(ann_path)
- image_cnt = 0
- ann_cnt = 0
- video_cnt = 0
- for ann_data in anns_data:
- image_cnt += 1
- file_path = DATA_PATH + 'CrowdHuman_{}/'.format(split) + '{}.jpg'.format(ann_data['ID'])
- im = Image.open(file_path)
- image_info = {'file_name': '{}.jpg'.format(ann_data['ID']),
- 'id': image_cnt,
- 'height': im.size[1],
- 'width': im.size[0]}
- out['images'].append(image_info)
- if split != 'test':
- anns = ann_data['gtboxes']
- for i in range(len(anns)):
- ann_cnt += 1
- fbox = anns[i]['fbox']
- ann = {'id': ann_cnt,
- 'category_id': 1,
- 'image_id': image_cnt,
- 'track_id': -1,
- 'bbox_vis': anns[i]['vbox'],
- 'bbox': fbox,
- 'area': fbox[2] * fbox[3],
- 'iscrowd': 1 if 'extra' in anns[i] and \
- 'ignore' in anns[i]['extra'] and \
- anns[i]['extra']['ignore'] == 1 else 0}
- out['annotations'].append(ann)
- print('loaded {} for {} images and {} samples'.format(split, len(out['images']), len(out['annotations'])))
- json.dump(out, open(out_path, 'w'))
\ No newline at end of file
diff --git a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/inference.py b/spaces/EcoCy/LoRA-DreamBooth-Training-UI/inference.py
deleted file mode 100644
index ce0f2b08df75e6d62f06c4119f1dc859930de032..0000000000000000000000000000000000000000
--- a/spaces/EcoCy/LoRA-DreamBooth-Training-UI/inference.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import annotations
-
-import gc
-import pathlib
-
-import gradio as gr
-import PIL.Image
-import torch
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-from huggingface_hub import ModelCard
-
-
-class InferencePipeline:
- def __init__(self, hf_token: str | None = None):
- self.hf_token = hf_token
- self.pipe = None
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.lora_model_id = None
- self.base_model_id = None
-
- def clear(self) -> None:
- self.lora_model_id = None
- self.base_model_id = None
- del self.pipe
- self.pipe = None
- torch.cuda.empty_cache()
- gc.collect()
-
- @staticmethod
- def check_if_model_is_local(lora_model_id: str) -> bool:
- return pathlib.Path(lora_model_id).exists()
-
- @staticmethod
- def get_model_card(model_id: str,
- hf_token: str | None = None) -> ModelCard:
- if InferencePipeline.check_if_model_is_local(model_id):
- card_path = (pathlib.Path(model_id) / 'README.md').as_posix()
- else:
- card_path = model_id
- return ModelCard.load(card_path, token=hf_token)
-
- @staticmethod
- def get_base_model_info(lora_model_id: str,
- hf_token: str | None = None) -> str:
- card = InferencePipeline.get_model_card(lora_model_id, hf_token)
- return card.data.base_model
-
- def load_pipe(self, lora_model_id: str) -> None:
- if lora_model_id == self.lora_model_id:
- return
- base_model_id = self.get_base_model_info(lora_model_id, self.hf_token)
- if base_model_id != self.base_model_id:
- if self.device.type == 'cpu':
- pipe = DiffusionPipeline.from_pretrained(
- base_model_id, use_auth_token=self.hf_token)
- else:
- pipe = DiffusionPipeline.from_pretrained(
- base_model_id,
- torch_dtype=torch.float16,
- use_auth_token=self.hf_token)
- pipe = pipe.to(self.device)
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(
- pipe.scheduler.config)
- self.pipe = pipe
- self.pipe.unet.load_attn_procs( # type: ignore
- lora_model_id, use_auth_token=self.hf_token)
-
- self.lora_model_id = lora_model_id # type: ignore
- self.base_model_id = base_model_id # type: ignore
-
- def run(
- self,
- lora_model_id: str,
- prompt: str,
- lora_scale: float,
- seed: int,
- n_steps: int,
- guidance_scale: float,
- ) -> PIL.Image.Image:
- if not torch.cuda.is_available():
- raise gr.Error('CUDA is not available.')
-
- self.load_pipe(lora_model_id)
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
- out = self.pipe(
- prompt,
- num_inference_steps=n_steps,
- guidance_scale=guidance_scale,
- generator=generator,
- cross_attention_kwargs={'scale': lora_scale},
- ) # type: ignore
- return out.images[0]
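The base-model lookup in `get_base_model_info` above reduces to reading the LoRA repository's model card metadata; here is a minimal sketch, assuming a hypothetical repo id and that the card's front matter declares `base_model`.

```python
from huggingface_hub import ModelCard

card = ModelCard.load("your-username/your-lora-model")  # hypothetical repo id
print(card.data.base_model)  # the base checkpoint the LoRA card declares, if any
```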
diff --git a/spaces/Epoching/3D_Photo_Inpainting/MiDaS/run.py b/spaces/Epoching/3D_Photo_Inpainting/MiDaS/run.py
deleted file mode 100644
index a483d2850a81b3520b80097eff4bb9367ef6a144..0000000000000000000000000000000000000000
--- a/spaces/Epoching/3D_Photo_Inpainting/MiDaS/run.py
+++ /dev/null
@@ -1,81 +0,0 @@
-"""Compute depth maps for images in the input folder.
-"""
-import os
-import glob
-import torch
-# from monodepth_net import MonoDepthNet
-# import utils
-import matplotlib.pyplot as plt
-import numpy as np
-import cv2
-import imageio
-
-
-def run_depth(img_names, input_path, output_path, model_path, Net, utils, target_w=None):
- """Run MonoDepthNN to compute depth maps.
-
- Args:
- input_path (str): path to input folder
- output_path (str): path to output folder
- model_path (str): path to saved model
- """
- print("initialize")
-
- # select device
- device = torch.device("cpu")
- print("device: %s" % device)
-
- # load network
- model = Net(model_path)
- model.to(device)
- model.eval()
-
- # get input
- # img_names = glob.glob(os.path.join(input_path, "*"))
- num_images = len(img_names)
-
- # create output folder
- os.makedirs(output_path, exist_ok=True)
-
- print("start processing")
-
- for ind, img_name in enumerate(img_names):
-
- print(" processing {} ({}/{})".format(img_name, ind + 1, num_images))
-
- # input
- img = utils.read_image(img_name)
- w = img.shape[1]
- scale = 640. / max(img.shape[0], img.shape[1])
- target_height, target_width = int(round(img.shape[0] * scale)), int(round(img.shape[1] * scale))
- img_input = utils.resize_image(img)
- print(img_input.shape)
- img_input = img_input.to(device)
- # compute
- with torch.no_grad():
- out = model.forward(img_input)
-
- depth = utils.resize_depth(out, target_width, target_height)
- img = cv2.resize((img * 255).astype(np.uint8), (target_width, target_height), interpolation=cv2.INTER_AREA)
-
- filename = os.path.join(
- output_path, os.path.splitext(os.path.basename(img_name))[0]
- )
- np.save(filename + '.npy', depth)
- utils.write_depth(filename, depth, bits=2)
-
- print("finished")
-
-
-# if __name__ == "__main__":
-# # set paths
-# INPUT_PATH = "image"
-# OUTPUT_PATH = "output"
-# MODEL_PATH = "model.pt"
-
-# # set torch options
-# torch.backends.cudnn.enabled = True
-# torch.backends.cudnn.benchmark = True
-
-# # compute depth maps
-# run_depth(INPUT_PATH, OUTPUT_PATH, MODEL_PATH, Net, target_w=640)
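A quick numeric check of the resizing math used above, which scales the longer image side to 640 px while keeping the aspect ratio (the example dimensions are arbitrary):

```python
h, w = 1080, 1920
scale = 640.0 / max(h, w)                        # 0.333...
target = int(round(h * scale)), int(round(w * scale))
assert target == (360, 640)
```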
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/modules.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/modules.py
deleted file mode 100644
index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/modules.py
+++ /dev/null
@@ -1,526 +0,0 @@
-import os, sys
-import traceback
-import logging
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-logger = logging.getLogger(__name__)
-import lib.globals.globals as rvc_globals
-import numpy as np
-import soundfile as sf
-import torch
-from io import BytesIO
-from infer.lib.audio import load_audio
-from infer.lib.audio import wav2
-from infer.lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from infer.modules.vc.pipeline import Pipeline
-from infer.modules.vc.utils import *
-import time
-import scipy.io.wavfile as wavfile
-
-def note_to_hz(note_name):
- SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2}
- pitch_class, octave = note_name[:-1], int(note_name[-1])
- semitone = SEMITONES[pitch_class]
- note_number = 12 * (octave - 4) + semitone
- frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number
- return frequency
-
-class VC:
- def __init__(self, config):
- self.n_spk = None
- self.tgt_sr = None
- self.net_g = None
- self.pipeline = None
- self.cpt = None
- self.version = None
- self.if_f0 = None
- self.version = None
- self.hubert_model = None
-
- self.config = config
-
- def get_vc(self, sid, *to_return_protect):
- logger.info("Get sid: " + sid)
-
- to_return_protect0 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[0]
- if self.if_f0 != 0 and to_return_protect
- else 0.5,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": self.if_f0 != 0,
- "value": to_return_protect[1]
- if self.if_f0 != 0 and to_return_protect
- else 0.33,
- "__type__": "update",
- }
-
- if not sid:
-            if self.hubert_model is not None:  # because of polling, check whether sid switched from a loaded model to no model
- logger.info("Clean model cache")
- del (
- self.net_g,
- self.n_spk,
- self.vc,
- self.hubert_model,
- self.tgt_sr,
- ) # ,cpt
- self.hubert_model = (
- self.net_g
- ) = self.n_spk = self.vc = self.hubert_model = self.tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-                ### without the shuffling below, the cleanup is not thorough
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *self.cpt["config"], is_half=self.config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"])
- del self.net_g, self.cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return (
- {"visible": False, "__type__": "update"},
- {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- },
- {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- },
- "",
- "",
- )
- #person = f'{os.getenv("weight_root")}/{sid}'
- person = f'{sid}'
- #logger.info(f"Loading: {person}")
- logger.info(f"Loading...")
- self.cpt = torch.load(person, map_location="cpu")
- self.tgt_sr = self.cpt["config"][-1]
- self.cpt["config"][-3] = self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = self.cpt.get("f0", 1)
- self.version = self.cpt.get("version", "v1")
-
- synthesizer_class = {
- ("v1", 1): SynthesizerTrnMs256NSFsid,
- ("v1", 0): SynthesizerTrnMs256NSFsid_nono,
- ("v2", 1): SynthesizerTrnMs768NSFsid,
- ("v2", 0): SynthesizerTrnMs768NSFsid_nono,
- }
-
- self.net_g = synthesizer_class.get(
- (self.version, self.if_f0), SynthesizerTrnMs256NSFsid
- )(*self.cpt["config"], is_half=self.config.is_half)
-
- del self.net_g.enc_q
-
- self.net_g.load_state_dict(self.cpt["weight"], strict=False)
- self.net_g.eval().to(self.config.device)
- if self.config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
-
- self.pipeline = Pipeline(self.tgt_sr, self.config)
- n_spk = self.cpt["config"][-3]
- index = {"value": get_index_path_from_model(sid), "__type__": "update"}
- logger.info("Select index: " + index["value"])
-
- return (
- (
- {"visible": False, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1
- )
- if to_return_protect
- else {"visible": False, "maximum": n_spk, "__type__": "update"}
- )
-
-
- def vc_single(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
-            )  # guard against user typos by automatically replacing "trained" with "added"
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if self.tgt_sr != resample_sr >= 16000:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- output_folder = "audio-outputs"
- os.makedirs(output_folder, exist_ok=True)
- output_filename = "generated_audio_{}.wav"
- output_count = 1
- while True:
- current_output_path = os.path.join(output_folder, output_filename.format(output_count))
- if not os.path.exists(current_output_path):
- break
- output_count += 1
-
- wavfile.write(current_output_path, self.tgt_sr, audio_opt)
- print(f"Generated audio saved to: {current_output_path}")
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
- def vc_single_dont_save(
- self,
- sid,
- input_audio_path0,
- input_audio_path1,
- f0_up_key,
- f0_file,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- global total_time
- total_time = 0
- start_time = time.time()
- if not input_audio_path0 and not input_audio_path1:
- return "You need to upload an audio", None
-
- if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))):
- return "Audio was not properly selected or doesn't exist", None
-
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'")
- print("-------------------")
- f0_up_key = int(f0_up_key)
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- input_audio_path1 = input_audio_path1 or input_audio_path0
- print(f"Attempting to load {input_audio_path1}....")
- audio = load_audio(file=input_audio_path1,
- sr=16000,
- DoFormant=rvc_globals.DoFormant,
- Quefrency=rvc_globals.Quefrency,
- Timbre=rvc_globals.Timbre)
-
- audio_max = np.abs(audio).max() / 0.95
- if audio_max > 1:
- audio /= audio_max
- times = [0, 0, 0]
-
- if self.hubert_model is None:
- self.hubert_model = load_hubert(self.config)
-
- try:
- self.if_f0 = self.cpt.get("f0", 1)
- except NameError:
- message = "Model was not properly selected"
- print(message)
- return message, None
-
- file_index = (
- (
- file_index.strip(" ")
- .strip('"')
- .strip("\n")
- .strip('"')
- .strip(" ")
- .replace("trained", "added")
- )
- if file_index != ""
- else file_index2
-            )  # guard against user typos by automatically replacing "trained" with "added"
-
- try:
- audio_opt = self.pipeline.pipeline(
- self.hubert_model,
- self.net_g,
- sid,
- audio,
- input_audio_path1,
- times,
- f0_up_key,
- f0_method,
- file_index,
- index_rate,
- self.if_f0,
- filter_radius,
- self.tgt_sr,
- resample_sr,
- rms_mix_rate,
- self.version,
- protect,
- crepe_hop_length,
- f0_autotune,
- f0_file=f0_file,
- f0_min=f0_min,
- f0_max=f0_max
- )
- except AssertionError:
- message = "Mismatching index version detected (v1 with v2, or v2 with v1)."
- print(message)
- return message, None
- except NameError:
- message = "RVC libraries are still loading. Please try again in a few seconds."
- print(message)
- return message, None
-
- if self.tgt_sr != resample_sr >= 16000:
- self.tgt_sr = resample_sr
- index_info = (
- "Index:\n%s." % file_index
- if os.path.exists(file_index)
- else "Index not used."
- )
- end_time = time.time()
- total_time = end_time - start_time
-
- return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- logger.warn(info)
- return info, (None, None)
-
-
- def vc_multi(
- self,
- sid,
- dir_path,
- opt_root,
- paths,
- f0_up_key,
- f0_method,
- file_index,
- file_index2,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- format1,
- crepe_hop_length,
- f0_min,
- note_min,
- f0_max,
- note_max,
- f0_autotune,
- ):
- if rvc_globals.NotesOrHertz and f0_method != 'rmvpe':
- f0_min = note_to_hz(note_min) if note_min else 50
- f0_max = note_to_hz(note_max) if note_max else 1100
- print(f"Converted Min pitch: freq - {f0_min}\n"
- f"Converted Max pitch: freq - {f0_max}")
- else:
- f0_min = f0_min or 50
- f0_max = f0_max or 1100
- try:
- dir_path = (
- dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-            )  # guard against pasted paths with leading/trailing spaces, quotes or newlines
- opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- os.makedirs(opt_root, exist_ok=True)
- try:
- if dir_path != "":
- paths = [
- os.path.join(dir_path, name) for name in os.listdir(dir_path)
- ]
- else:
- paths = [path.name for path in paths]
- except:
- traceback.print_exc()
- paths = [path.name for path in paths]
- infos = []
- for path in paths:
- info, opt = self.vc_single(
- sid,
- path,
- f0_up_key,
- None,
- f0_method,
- file_index,
- file_index2,
- # file_big_npy,
- index_rate,
- filter_radius,
- resample_sr,
- rms_mix_rate,
- protect,
- )
- if "Success" in info:
- try:
- tgt_sr, audio_opt = opt
- if format1 in ["wav", "flac"]:
- sf.write(
- "%s/%s.%s"
- % (opt_root, os.path.basename(path), format1),
- audio_opt,
- tgt_sr,
- )
- else:
- path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1)
- with BytesIO() as wavf:
- sf.write(
- wavf,
- audio_opt,
- tgt_sr,
- format="wav"
- )
- wavf.seek(0, 0)
- with open(path, "wb") as outf:
- wav2(wavf, outf, format1)
- except:
- info += traceback.format_exc()
- infos.append("%s->%s" % (os.path.basename(path), info))
- yield "\n".join(infos)
- yield "\n".join(infos)
- except:
- yield traceback.format_exc()
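Note on the file_index cleanup near the top of vc_single above: the strip calls are interleaved on purpose, so a quote hidden behind a trailing newline is still removed, and a "trained" index is silently redirected to its "added" counterpart. A minimal sketch of the same chain on a made-up pasted path (the filename is hypothetical):

raw = ' "logs/my-voice/trained_IVF256_Flat_nprobe_1.index"\n'
cleaned = (
    raw.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
       .replace("trained", "added")
)
print(cleaned)  # logs/my-voice/added_IVF256_Flat_nprobe_1.index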
diff --git a/spaces/GT6242Causion/Causion/src/pred_plot.py b/spaces/GT6242Causion/Causion/src/pred_plot.py
deleted file mode 100644
index a3f8667230675939404bc72df1cd58fce39d5f33..0000000000000000000000000000000000000000
--- a/spaces/GT6242Causion/Causion/src/pred_plot.py
+++ /dev/null
@@ -1,269 +0,0 @@
-from datetime import date, datetime, timedelta
-from sklearn.model_selection import train_test_split
-from sklearn.neural_network import MLPClassifier
-import pandas as pd
-import plotly.graph_objects as go
-import streamlit as st
-from plotly.subplots import make_subplots
-
-def hour_rounder(t):
-    if int(t.minute) >= 30:
-        time_1 = str((int(t.hour)+1) % 24)  # wrap 23:30 and later to "00:00" rather than "24:00"
- if len(time_1) == 1:
- return "0"+time_1+":00"
- else:
- return str(time_1)+":00"
- else:
- if len(str(t.hour)) == 1:
- return "0"+str(t.hour)+":00"
- else:
- return str(t.hour)+":00"
-
-def peak_hours(t):
- peak = ['07:00', "08:00", '09:00', "17:00", "18:00", "19:00"]
- if t in peak:
- return 1
- else:
- return 0
-
-def weekend(w):
- end = ['Saturday', 'Sunday']
- if w in end:
- return 1
- else:
- return 0
-
-def vehicle_cat(v):
- if v >= 0 and v < 2:
- return 0
- elif v >= 2 and v < 4:
- return 1
- elif v >= 4 and v < 6:
- return 2
- elif v >= 6 and v < 8:
- return 3
- else:
- return 4
-
-def data_split(final_table):
- X = final_table.loc[:,['day', 'hour','view']]
- Y = final_table.loc[:,'cat']
-
- X = pd.get_dummies(X)
- X.loc[:,['peak', 'weekend']] = final_table.loc[:,['peak', 'weekend']]
-
-
-
- x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=0.7,
- test_size=0.3,
- shuffle=True, random_state=13)
-
- return x_train, x_test, y_train, y_test
-
-def convert_date(date):
- return datetime.strptime(date, "%Y-%m-%d").strftime('%A')
-
-def create_row(x_train, date_d, hour, view):
- if date_d is None:
- date_d = "2023-04-11"
- if hour is None:
- hour = "09:00"
- if view is None:
- view = "Johor-Tuas"
-
- features = x_train.columns
- d_dict = {}
- day = datetime.strptime(date_d, "%Y-%m-%d").strftime('%A')
- hour = str(hour)
- view = str(view)
- col_day = "day_" + day
- col_hour = 'hour_'+ hour
- col_view = 'view_'+view
-
- for i in features:
- if i == col_day or i == col_hour or i == col_view:
- d_dict[i] = [1]
- else:
- d_dict[i] = [0]
- end = ['Saturday', 'Sunday']
- peak = ['07:00', "08:00", '09:00', "17:00", "18:00", "19:00"]
-
- if day in end:
- d_dict['weekend'] = 1
- if hour in peak:
- d_dict['peak'] = 1
- result = pd.DataFrame.from_dict(d_dict, orient='columns')
- for i in features:
- result[i] = result[i].astype('category')
- return result
-
-def prep_data_pred_plot(df):
- df = df.sort_values(by=['date']).reset_index(drop=True)
- df['date'] = pd.to_datetime(df['date'], format = "%Y-%m-%d")
- df['day'] = df['date'].dt.day_name()
- df.drop(columns=['motorcycle'], axis=1, inplace=True)
- df['vehicle'] = df['car'] + df['large_vehicle']
-
- transfer = {"View_from_Second_Link_at_Tuas_to_sg": 'Johor-Tuas',
- "View_from_Second_Link_at_Tuas_to_jh": 'Tuas-Johor',
- "View_from_Tuas_Checkpoint_to_sg": 'Johor-Tuas',
- "View_from_Tuas_Checkpoint_to_jh": 'Tuas-Johor',
- "View_from_Woodlands_Causeway_Towards_Johor_to_sg": 'Johor-Woodlands',
- "View_from_Woodlands_Causeway_Towards_Johor_to_jh": 'Woodlands-Johor',
- "View_from_Woodlands_Checkpoint_Towards_BKE_to_sg": 'Johor-Woodlands',
- "View_from_Woodlands_Checkpoint_Towards_BKE_to_jh": 'Woodlands-Johor'}
-
- new_table = df.replace({'view':transfer})
- options = ['Johor-Woodlands','Woodlands-Johor','Johor-Tuas','Tuas-Johor']
- final_df = new_table[new_table['view'].isin(options)]
- final_df.loc[:, 'time'] = pd.to_datetime(final_df.loc[:,'time'], format='%H:%M:%S')
- final_df.loc[:,'hour'] = final_df.loc[:,'time'].apply(hour_rounder)
-
- final_table = final_df.groupby(['view', 'day', 'hour']).mean().reset_index().loc[:,['day', 'hour','view', 'vehicle']]
- final_table['vehicle'] = final_table['vehicle'].apply(lambda x: round(x))
- final_table.loc[:,'peak'] = final_table.loc[:,'hour'].apply(peak_hours)
- final_table.loc[:,'peak'] = final_table.loc[:,'peak'].astype('category')
- final_table.loc[:,'weekend'] = final_table.loc[:,'day'].apply(weekend)
- final_table.loc[:,'weekend'] = final_table.loc[:,'weekend'].astype('category')
- final_table.loc[:,'cat'] = final_table.loc[:,'vehicle'].apply(vehicle_cat)
- final_table.loc[:,'cat'] = final_table.loc[:,'cat'].astype('category')
-
- return final_table
-
-def gen_fig():
-
- paths = ["M 0.2 0.35 L 0.48 0.52 L 0.52 0.50",
- "M 0.25 0.75 L 0.475 0.52 L 0.52 0.52",
- "M 0.5 0.9 L 0.485 0.52 L 0.515 0.52",
- "M 0.75 0.75 L 0.485 0.52 L 0.52 0.51",
- "M 0.8 0.35 L 0.48 0.50 L 0.52 0.52"]
-
- figs = []
- values_ = ["No Traffic on Johor-Singapore Causeway", "Low Traffic on Johor-Singapore Causeway", "Johor-Singapore Causeway Slightly Busy",
- "Johor-Singapore Causeway Moderately Busy", "Busiest Time to Travel on Johor-Singapore Causeway"]
-
- for i in range(5):
- plot_bgcolor = "#def"
- colors = ["#f25829", "#f2a529", "#eff229", "#85e043", "#2bad4e","rgba(0,0,0,0)"]
- quadrant_text = ["Heavy", "Moderate", "Mild", "Low", "None",""]
- n_quadrants = len(colors) - 1
- figure_1 = go.Figure(
- data=[
- go.Pie(
- values=[14,14,14,14,14,30],
- rotation=130,
- hole=0.75,
- marker_colors=colors,
- marker_line={"width":2, "color":"white"},
- textinfo="none",
- text=quadrant_text,
- hoverinfo="text"
- ),
- ],
- layout=go.Layout(
- showlegend=False,
- margin=dict(b=0,t=30,l=10,r=10),
- width=500,
- height=350,
- paper_bgcolor="rgba(0,0,0,0)",
- annotations=[
- go.layout.Annotation(
- text=f"{values_[i]}",
- x=0.5, xanchor="center", xref="paper",
- y= 0.1, yanchor="bottom", yref="paper",
- showarrow=False,
- font= {"size":15, "color":"#333"}
- )
- ]
- )
- )
- figure_1.update_layout(shapes=[dict(type='path',
- path=paths[i],
- fillcolor="#333"),
- go.layout.Shape(
- type="circle",
- x0=0.48, x1=0.52,
- y0=0.48, y1=0.54,
- fillcolor="#333",
- line_color="#333",
- )])
- figs.append(figure_1)
-
- return figs
-
-def predicted_figure(clf, x, figs):
-
- result = create_row(x[0], x[1], x[2], x[3])
-
- pred_val = clf.predict(result)[0]
-
- return figs[pred_val]
-
-def get_today():
-    # split an ISO date such as "2023-04-11" into [2023, 4, 11]
-    t = str(date.today()).split('-')
-    today = []
-
-    for i in t:
-        # int() already drops any leading zero (e.g. "04" -> 4)
-        today.append(int(i))
-    return today
-
-def update_output(date_value):
- string_prefix = 'Travel Day: '
- if date_value is not None:
- date_string = convert_date(date_value)
- return string_prefix + date_string
-
-def update_final_output_hour(starter_variables, my_date_picker_single, hours_dropdown_id, direction_id):
- # starter_variables = [clf, str(date.today()), "07:00", "Tuas-Johor"]
- starter_variables[1] = str(my_date_picker_single)
- starter_variables[2] = str(hours_dropdown_id)
- starter_variables[3] = str(direction_id)
- fig = predicted_figure(starter_variables)
- return fig
-
-def train_model(x_train, y_train):
- clf = MLPClassifier(solver='lbfgs', alpha=3, hidden_layer_sizes=(5,4), random_state=2, max_iter=3000)
- clf.fit(x_train, y_train)
-
- return clf
-
-def pred_bars(my_date_picker_single, final_table):
- day_today = convert_date(str(my_date_picker_single))
- df_filter = final_table[final_table['day']==day_today]
-
- color_map = {0:"#2bad4e", 1:"#85e043", 2:"#eff229", 3:"#f2a529", 4:"#f25829"}
-
-
- bar_day = make_subplots(shared_yaxes="all", rows=2, cols=2, start_cell="bottom-left", subplot_titles=("Johor-Tuas",
- "Tuas-Johor",
- "Johor-Woodlands",
- "Woodlands-Johor"))
- f1 = df_filter[df_filter['view']=='Johor-Tuas']
- c1 = pd.Series(f1['cat']).map(color_map)
- bar_day.add_trace(go.Bar(x=f1['hour'], y=f1['vehicle'], name='Johor-Tuas', showlegend=False, marker={'color':c1}),
- row=1, col=1)
-
- f2 = df_filter[df_filter['view']=='Tuas-Johor']
- c2 = pd.Series(f2['cat']).map(color_map)
- bar_day.add_trace(go.Bar(x=f2['hour'], y=f2['vehicle'], name='Tuas-Johor', showlegend=False, marker={'color':c2}),
- row=1, col=2)
- f3 = df_filter[df_filter['view']=='Johor-Woodlands']
- c3 = pd.Series(f3['cat']).map(color_map)
- bar_day.add_trace(go.Bar(x=f3['hour'], y=f3['vehicle'], name='Johor-Woodlands', showlegend=False, marker={'color':c3}),
- row=2, col=1)
- f4 = df_filter[df_filter['view']=='Woodlands-Johor']
- c4 = pd.Series(f4['cat']).map(color_map)
- bar_day.add_trace(go.Bar(x=f4['hour'], y=f4['vehicle'], name='Woodlands-Johor', showlegend=False, marker={'color':c4}),
- row=2, col=2)
-
- val_d = my_date_picker_single.strftime("%d %B, %Y")
- day_d = my_date_picker_single.strftime("%A")
- tex = "Predicted 24 Hour Traffic Trend on: " + day_d + ", " + str(val_d)
-
-
- bar_day.update_layout(title_text=tex, paper_bgcolor="rgba(0,0,0,0)", plot_bgcolor="rgba(0,0,0,0)")
- bar_day.update_xaxes(tickangle=45)
- return bar_day
\ No newline at end of file
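Taken together, the helpers above form a small train-then-predict loop: prep_data_pred_plot aggregates the raw counts into an hourly table, data_split one-hot encodes it, train_model fits the MLP, gen_fig builds one gauge figure per congestion category (0-4), and create_row produces a single one-hot row for prediction. A sketch of that flow, assuming a raw dataframe df with the date/time/view/car/large_vehicle/motorcycle columns this module expects (the import path is also an assumption):

from src.pred_plot import prep_data_pred_plot, data_split, train_model, gen_fig, create_row

final_table = prep_data_pred_plot(df)                     # hourly vehicle averages per view/day
x_train, x_test, y_train, y_test = data_split(final_table)
clf = train_model(x_train, y_train)                       # small MLP over the one-hot features

figs = gen_fig()                                          # one gauge per traffic category 0-4
row = create_row(x_train, "2023-04-11", "09:00", "Johor-Tuas")
figs[clf.predict(row)[0]].show()                          # gauge for the predicted congestion level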
diff --git a/spaces/Gauri54damle/sdxl-lora-multi-object/style.css b/spaces/Gauri54damle/sdxl-lora-multi-object/style.css
deleted file mode 100644
index 9bfa78cc983f84693cf7cbab1e3bfd0e0d36c944..0000000000000000000000000000000000000000
--- a/spaces/Gauri54damle/sdxl-lora-multi-object/style.css
+++ /dev/null
@@ -1,24 +0,0 @@
-.finetuned-diffusion-div div{
- display:inline-flex;
- align-items:center;
- gap:.8rem;
- font-size:1.75rem
-}
-.finetuned-diffusion-div div h1{
- font-weight:900;
- margin-bottom:7px
-}
-.finetuned-diffusion-div p{
- margin-bottom:10px;
- font-size:94%
-}
-a{
- text-decoration:underline
-}
-.tabs{
- margin-top:0;
- margin-bottom:0
-}
-#gallery{
- min-height:20rem
-}
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3.sh
deleted file mode 100644
index 0728981a26bd2e5c0113bbd80ddd15e76dddd84d..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train10_gptmixcliport3.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-#SBATCH -c 10
-#SBATCH -n 1
-#SBATCH -o logs/%j.out
-#SBATCH --exclusive
-STEPS=${1-'50000'}
-
-
-sh scripts/traintest_scripts/train_test_multi_task_goal.sh data \
- "[put-block-in-bowl,align-box-corner,stack-block-pyramid-seq,color-sorted-container-stack,color-sorted-block-race,Four-corner-pyramid-challenge,triangle-block-arrangement,sort-and-stack-clr-blocks,color-coordinated-sphere-insertion,rainbow-stack,align-pair-colored-blocks-along-line,vertical-insertion-blocks,stack-blocks-in-container]" \
- "[put-block-in-bowl,align-box-corner,stack-block-pyramid-seq]" \
- gpt5_mixcliport3_task $STEPS
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 132787db98d3fc9df5ed62e31738c82da8c279bf..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py
deleted file mode 100644
index a3c86e18ea65c6aaa36a4fb6e2708f08c7ae1698..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
-]
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(decode_head=[
- dict(
- type='FCNHead',
- in_channels=[18, 36, 72, 144],
- channels=sum([18, 36, 72, 144]),
- in_index=(0, 1, 2, 3),
- input_transform='resize_concat',
- kernel_size=1,
- num_convs=1,
- concat_input=False,
- dropout_ratio=-1,
- num_classes=150,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- dict(
- type='OCRHead',
- in_channels=[18, 36, 72, 144],
- in_index=(0, 1, 2, 3),
- input_transform='resize_concat',
- channels=512,
- ocr_channels=256,
- dropout_ratio=-1,
- num_classes=150,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
-])
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/denoising_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/denoising_dataset.py
deleted file mode 100644
index bdb62c8d5db9c8755c72db4d0d8083c936f18dc8..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/denoising_dataset.py
+++ /dev/null
@@ -1,436 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import numpy as np
-import torch
-
-from . import FairseqDataset, data_utils
-
-
-def collate(
- samples,
- pad_idx,
- eos_idx,
- vocab,
- left_pad_source=False,
- left_pad_target=False,
- input_feeding=True,
- pad_to_length=None,
-):
- assert input_feeding
- if len(samples) == 0:
- return {}
-
- def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None):
- return data_utils.collate_tokens(
- [s[key] for s in samples],
- pad_idx,
- eos_idx=None, # use eos_idx of each sample instead of vocab.eos()
- left_pad=left_pad,
- move_eos_to_beginning=move_eos_to_beginning,
- pad_to_length=pad_to_length,
- )
-
- id = torch.LongTensor([s["id"] for s in samples])
- src_tokens = merge(
- "source",
- left_pad=left_pad_source,
- pad_to_length=pad_to_length["source"] if pad_to_length is not None else None,
- )
- # sort by descending source length
- src_lengths = torch.LongTensor([s["source"].numel() for s in samples])
- src_lengths, sort_order = src_lengths.sort(descending=True)
- id = id.index_select(0, sort_order)
- src_tokens = src_tokens.index_select(0, sort_order)
-
- prev_output_tokens = None
- target = None
- if samples[0].get("target", None) is not None:
- target = merge(
- "target",
- left_pad=left_pad_target,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- target = target.index_select(0, sort_order)
- ntokens = sum(len(s["target"]) for s in samples)
-
- if input_feeding:
- # we create a shifted version of targets for feeding the
- # previous output token(s) into the next decoder step
- prev_output_tokens = merge(
- "target",
- left_pad=left_pad_target,
- move_eos_to_beginning=True,
- pad_to_length=pad_to_length["target"]
- if pad_to_length is not None
- else None,
- )
- prev_output_tokens = prev_output_tokens.index_select(0, sort_order)
- else:
- ntokens = sum(len(s["source"]) for s in samples)
-
- batch = {
- "id": id,
- "ntokens": ntokens,
- "net_input": {
- "src_tokens": src_tokens,
- "src_lengths": src_lengths,
- },
- "target": target,
- "nsentences": samples[0]["source"].size(0),
- "sort_order": sort_order,
- }
- if prev_output_tokens is not None:
- batch["net_input"]["prev_output_tokens"] = prev_output_tokens
-
- return batch
-
-
-class DenoisingDataset(FairseqDataset):
- """
- A wrapper around TokenBlockDataset for BART dataset.
-
- Args:
- dataset (TokenBlockDataset): dataset to wrap
- sizes (List[int]): sentence lengths
- vocab (~fairseq.data.Dictionary): vocabulary
- mask_idx (int): dictionary index used for masked token
- mask_whole_words: only mask whole words. This should be a byte mask
- over vocab indices, indicating whether it is the beginning of a
- word. We will extend any mask to encompass the whole word.
- shuffle (bool, optional): shuffle the elements before batching.
- Default: ``True``
- seed: Seed for random number generator for reproducibility.
- args: argparse arguments.
- """
-
- def __init__(
- self,
- dataset,
- sizes,
- vocab,
- mask_idx,
- mask_whole_words,
- shuffle,
- seed,
- args,
- eos=None,
- item_transform_func=None,
- ):
- self.dataset = dataset
-
- self.sizes = sizes
-
- self.vocab = vocab
- self.shuffle = shuffle
- self.seed = seed
- self.mask_idx = mask_idx
- self.mask_whole_word = mask_whole_words
- self.mask_ratio = args.mask
- self.random_ratio = args.mask_random
- self.insert_ratio = args.insert
- self.rotate_ratio = args.rotate
- self.permute_sentence_ratio = args.permute_sentences
- self.eos = eos if eos is not None else vocab.eos()
- self.item_transform_func = item_transform_func
-
- if args.bpe != "gpt2":
- self.full_stop_index = self.vocab.eos()
- else:
- assert args.bpe == "gpt2"
- self.full_stop_index = self.vocab.index("13")
-
- self.replace_length = args.replace_length
- if self.replace_length not in [-1, 0, 1]:
- raise ValueError(f"invalid arg: replace_length={self.replace_length}")
- if args.mask_length not in ["subword", "word", "span-poisson"]:
- raise ValueError(f"invalid arg: mask-length={args.mask_length}")
- if args.mask_length == "subword" and args.replace_length not in [0, 1]:
- raise ValueError(f"if using subwords, use replace-length=1 or 0")
-
- self.mask_span_distribution = None
- if args.mask_length == "span-poisson":
- _lambda = args.poisson_lambda
-
- lambda_to_the_k = 1
- e_to_the_minus_lambda = math.exp(-_lambda)
- k_factorial = 1
- ps = []
- for k in range(0, 128):
- ps.append(e_to_the_minus_lambda * lambda_to_the_k / k_factorial)
- lambda_to_the_k *= _lambda
- k_factorial *= k + 1
- if ps[-1] < 0.0000001:
- break
- ps = torch.FloatTensor(ps)
- self.mask_span_distribution = torch.distributions.Categorical(ps)
-
- self.epoch = 0
-
- @property
- def can_reuse_epoch_itr_across_epochs(self):
- return True # only the noise changes, not item sizes
-
- def set_epoch(self, epoch, **unused):
- self.epoch = epoch
-
- def __getitem__(self, index):
- with data_utils.numpy_seed(self.seed, self.epoch, index):
- tokens = self.dataset[index]
- assert tokens[-1] == self.eos
- source, target = tokens, tokens.clone()
-
- if self.permute_sentence_ratio > 0.0:
- source = self.permute_sentences(source, self.permute_sentence_ratio)
-
- if self.mask_ratio > 0:
- source = self.add_whole_word_mask(source, self.mask_ratio)
-
- if self.insert_ratio > 0:
- source = self.add_insertion_noise(source, self.insert_ratio)
-
- if self.rotate_ratio > 0.0 and np.random.random() < self.rotate_ratio:
- source = self.add_rolling_noise(source)
-            # there can be additional changes to make:
- if self.item_transform_func is not None:
- source, target = self.item_transform_func(source, target)
-
- assert (source >= 0).all()
- assert (source[1:-1] >= 1).all()
- assert (source <= len(self.vocab)).all()
- assert source[0] == self.vocab.bos()
- assert source[-1] == self.eos
- return {
- "id": index,
- "source": source,
- "target": target,
- }
-
- def __len__(self):
- return len(self.dataset)
-
- def permute_sentences(self, source, p=1.0):
- full_stops = source == self.full_stop_index
- # Pretend it ends with a full stop so last span is a sentence
- full_stops[-2] = 1
-
- # Tokens that are full stops, where the previous token is not
- sentence_ends = (full_stops[1:] * ~full_stops[:-1]).nonzero(as_tuple=False) + 2
- result = source.clone()
-
- num_sentences = sentence_ends.size(0)
- num_to_permute = math.ceil((num_sentences * 2 * p) / 2.0)
- substitutions = torch.randperm(num_sentences)[:num_to_permute]
- ordering = torch.arange(0, num_sentences)
- ordering[substitutions] = substitutions[torch.randperm(num_to_permute)]
-
- # Ignore at start
- index = 1
- for i in ordering:
- sentence = source[(sentence_ends[i - 1] if i > 0 else 1) : sentence_ends[i]]
- result[index : index + sentence.size(0)] = sentence
- index += sentence.size(0)
- return result
-
- def word_starts(self, source):
- if self.mask_whole_word is not None:
- is_word_start = self.mask_whole_word.gather(0, source)
- else:
- is_word_start = torch.ones(source.size())
- is_word_start[0] = 0
- is_word_start[-1] = 0
- return is_word_start
-
- def add_whole_word_mask(self, source, p):
- is_word_start = self.word_starts(source)
- num_to_mask = int(math.ceil(is_word_start.float().sum() * p))
- num_inserts = 0
- if num_to_mask == 0:
- return source
-
- if self.mask_span_distribution is not None:
- lengths = self.mask_span_distribution.sample(sample_shape=(num_to_mask,))
-
- # Make sure we have enough to mask
- cum_length = torch.cumsum(lengths, 0)
- while cum_length[-1] < num_to_mask:
- lengths = torch.cat(
- [
- lengths,
- self.mask_span_distribution.sample(sample_shape=(num_to_mask,)),
- ],
- dim=0,
- )
- cum_length = torch.cumsum(lengths, 0)
-
- # Trim to masking budget
- i = 0
- while cum_length[i] < num_to_mask:
- i += 1
- lengths[i] = num_to_mask - (0 if i == 0 else cum_length[i - 1])
- num_to_mask = i + 1
- lengths = lengths[:num_to_mask]
-
- # Handle 0-length mask (inserts) separately
- lengths = lengths[lengths > 0]
- num_inserts = num_to_mask - lengths.size(0)
- num_to_mask -= num_inserts
- if num_to_mask == 0:
- return self.add_insertion_noise(source, num_inserts / source.size(0))
-
- assert (lengths > 0).all()
- else:
- lengths = torch.ones((num_to_mask,)).long()
- assert is_word_start[-1] == 0
- word_starts = is_word_start.nonzero(as_tuple=False)
- indices = word_starts[
- torch.randperm(word_starts.size(0))[:num_to_mask]
- ].squeeze(1)
- mask_random = torch.FloatTensor(num_to_mask).uniform_() < self.random_ratio
-
- source_length = source.size(0)
- assert source_length - 1 not in indices
- to_keep = torch.ones(source_length, dtype=torch.bool)
- is_word_start[
- -1
- ] = 255 # acts as a long length, so spans don't go over the end of doc
- if self.replace_length == 0:
- to_keep[indices] = 0
- else:
- # keep index, but replace it with [MASK]
- source[indices] = self.mask_idx
- source[indices[mask_random]] = torch.randint(
- 1, len(self.vocab), size=(mask_random.sum(),)
- )
-
- if self.mask_span_distribution is not None:
- assert len(lengths.size()) == 1
- assert lengths.size() == indices.size()
- lengths -= 1
- while indices.size(0) > 0:
- assert lengths.size() == indices.size()
- lengths -= is_word_start[indices + 1].long()
- uncompleted = lengths >= 0
- indices = indices[uncompleted] + 1
- mask_random = mask_random[uncompleted]
- lengths = lengths[uncompleted]
- if self.replace_length != -1:
- # delete token
- to_keep[indices] = 0
- else:
- # keep index, but replace it with [MASK]
- source[indices] = self.mask_idx
- source[indices[mask_random]] = torch.randint(
- 1, len(self.vocab), size=(mask_random.sum(),)
- )
- else:
- # A bit faster when all lengths are 1
- while indices.size(0) > 0:
- uncompleted = is_word_start[indices + 1] == 0
- indices = indices[uncompleted] + 1
- mask_random = mask_random[uncompleted]
- if self.replace_length != -1:
- # delete token
- to_keep[indices] = 0
- else:
- # keep index, but replace it with [MASK]
- source[indices] = self.mask_idx
- source[indices[mask_random]] = torch.randint(
- 1, len(self.vocab), size=(mask_random.sum(),)
- )
-
- assert source_length - 1 not in indices
-
- source = source[to_keep]
-
- if num_inserts > 0:
- source = self.add_insertion_noise(source, num_inserts / source.size(0))
-
- return source
-
- def add_permuted_noise(self, tokens, p):
- num_words = len(tokens)
- num_to_permute = math.ceil(((num_words * 2) * p) / 2.0)
- substitutions = torch.randperm(num_words - 2)[:num_to_permute] + 1
- tokens[substitutions] = tokens[substitutions[torch.randperm(num_to_permute)]]
- return tokens
-
- def add_rolling_noise(self, tokens):
- offset = np.random.randint(1, max(1, tokens.size(-1) - 1) + 1)
- tokens = torch.cat(
- (tokens[0:1], tokens[offset:-1], tokens[1:offset], tokens[-1:]),
- dim=0,
- )
- return tokens
-
- def add_insertion_noise(self, tokens, p):
- if p == 0.0:
- return tokens
-
- num_tokens = len(tokens)
- n = int(math.ceil(num_tokens * p))
-
- noise_indices = torch.randperm(num_tokens + n - 2)[:n] + 1
- noise_mask = torch.zeros(size=(num_tokens + n,), dtype=torch.bool)
- noise_mask[noise_indices] = 1
- result = torch.LongTensor(n + len(tokens)).fill_(-1)
-
- num_random = int(math.ceil(n * self.random_ratio))
- result[noise_indices[num_random:]] = self.mask_idx
- result[noise_indices[:num_random]] = torch.randint(
- low=1, high=len(self.vocab), size=(num_random,)
- )
-
- result[~noise_mask] = tokens
-
- assert (result >= 0).all()
- return result
-
- def collater(self, samples, pad_to_length=None):
- """Merge a list of samples to form a mini-batch.
- Args:
- samples (List[dict]): samples to collate
- Returns:
- dict: a mini-batch of data
- """
- return collate(
- samples, self.vocab.pad(), self.eos, self.vocab, pad_to_length=pad_to_length
- )
-
- def num_tokens(self, index):
- """Return the number of tokens in a sample. This value is used to
- enforce ``--max-tokens`` during batching."""
- return self.sizes[index]
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used when
- filtering a dataset with ``--max-positions``."""
- return self.sizes[index]
-
- def ordered_indices(self):
- """Return an ordered list of indices. Batches will be constructed based
- on this order."""
- if self.shuffle:
- indices = np.random.permutation(len(self))
- else:
- indices = np.arange(len(self))
- return indices[np.argsort(self.sizes[indices], kind="mergesort")]
-
- def prefetch(self, indices):
- self.src.prefetch(indices)
- self.tgt.prefetch(indices)
-
- @property
- def supports_prefetch(self):
- return (
- hasattr(self.src, "supports_prefetch")
- and self.src.supports_prefetch
- and hasattr(self.tgt, "supports_prefetch")
- and self.tgt.supports_prefetch
- )
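For reference, the span-poisson branch in __init__ above builds a truncated Poisson distribution over span lengths, ps[k] = exp(-lambda) * lambda^k / k!, stopping once the tail weight drops below 1e-7 and letting Categorical renormalize. A standalone sketch of that construction (the lambda value is an arbitrary choice):

import math
import torch

def truncated_poisson(_lambda: float = 3.0, max_k: int = 128, tol: float = 1e-7):
    """Categorical over span lengths k = 0, 1, 2, ... with Poisson weights."""
    lambda_to_the_k = 1.0
    e_to_the_minus_lambda = math.exp(-_lambda)
    k_factorial = 1.0
    ps = []
    for k in range(max_k):
        ps.append(e_to_the_minus_lambda * lambda_to_the_k / k_factorial)
        lambda_to_the_k *= _lambda
        k_factorial *= k + 1
        if ps[-1] < tol:
            break
    # Categorical renormalizes the truncated weights to sum to 1
    return torch.distributions.Categorical(torch.tensor(ps))

lengths = truncated_poisson(3.0).sample((10,))  # ten sampled span lengths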
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/sort_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/sort_dataset.py
deleted file mode 100644
index b3890e7279e1f26db2e48ec0a91c639e9299d60f..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/sort_dataset.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-
-from . import BaseWrapperDataset
-
-
-class SortDataset(BaseWrapperDataset):
- def __init__(self, dataset, sort_order):
- super().__init__(dataset)
- if not isinstance(sort_order, (list, tuple)):
- sort_order = [sort_order]
- self.sort_order = sort_order
-
- assert all(len(so) == len(dataset) for so in sort_order)
-
- def ordered_indices(self):
- return np.lexsort(self.sort_order)
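One subtlety in ordered_indices above: np.lexsort uses the last array in sort_order as the primary key and earlier arrays only to break ties. A tiny illustration with made-up keys:

import numpy as np

sizes   = np.array([5, 3, 5, 1])      # primary key (passed last)
shuffle = np.array([1, 0, 0, 1])      # secondary key, breaks ties between equal sizes
order = np.lexsort((shuffle, sizes))  # sort by sizes first, then by shuffle
print(order)                          # [3 1 2 0]: size 1, size 3, then the two size-5 rows ordered by shuffle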
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/__init__.py
deleted file mode 100644
index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__author__ = 'tylin'
diff --git a/spaces/HarshulNanda/HARM_ML_App_ludwig/statsViewer.py b/spaces/HarshulNanda/HARM_ML_App_ludwig/statsViewer.py
deleted file mode 100644
index e9bbad8035966d4b0b33b9f92c18aab5dfe75408..0000000000000000000000000000000000000000
--- a/spaces/HarshulNanda/HARM_ML_App_ludwig/statsViewer.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import os
-from pytube import YouTube
-import pytube
-from stqdm import stqdm
-import pandas as pd
-from youtubesearchpython import Video, ResultMode
-import streamlit as st
-import scrapetube
-from categoryPredictor import predictCategoryFor
-
-@st.experimental_memo
-def convert_df(df):
- return df.to_csv(index=False).encode('utf-8')
-
-def generate_channel_video_data(of_channel, with_number_of_videos):
- video_urls = []
- c_id = Video.get(of_channel, mode=ResultMode.json, get_upload_date=True)["channel"]["id"]
- videos = scrapetube.get_channel(c_id)
- i = 0
- for video in videos:
- video_urls.append("https://www.youtube.com/watch?v="+str(video['videoId']))
- i += 1
- if i == with_number_of_videos:
- break
-
- data = {
- "Title": [],
- "Description": [],
- "Category": [],
- "Is Educational?": [],
- "Beyond Exams Category": [],
- }
-
- timer = stqdm(video_urls)
-
- for video in timer:
- timer.set_description("☕️ Have a coffee, while we are generating your dataset. ")
- try:
- v = Video.get(video, mode = ResultMode.json, get_upload_date=True)
- t = v["title"]
- d = v["description"]
- c = v["category"]
- isEdu, isCat, cat_array, sub_array = predictCategoryFor(video)
- data["Description"].append(d)
- data["Category"].append(c)
- data["Title"].append(t)
- data["Is Educational?"].append(isEdu)
- data["Beyond Exams Category"].append(isCat)
- except Exception as e:
- print(e)
- continue
-
- df = pd.DataFrame(data)
- st.dataframe(df)
- csv = convert_df(df)
-
- st.download_button(
- "Download this dataframe",
- csv,
- "file.csv",
- "text/csv",
- key='download-csv'
- )
\ No newline at end of file
diff --git a/spaces/Harveenchadha/Hindi_TTS/app.py b/spaces/Harveenchadha/Hindi_TTS/app.py
deleted file mode 100644
index c4723bc3b7b1edccd7b4002510c4ff9f2041e390..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import os
-os.system("pip uninstall -y gradio")
-os.system("pip install gradio==2.7")
-import gradio as gr
-
-#os.system('git clone https://github.com/Open-Speech-EkStep/vakyansh-tts')
-os.chdir('vakyansh_tts')
-os.system('bash install.sh')
-os.system('python setup.py bdist_wheel')
-os.system('pip install -e .')
-os.chdir('tts_infer')
-os.system('mkdir translit_models')
-os.chdir('translit_models')
-os.system('wget -q https://storage.googleapis.com/vakyaansh-open-models/translit_models/default_lineup.json')
-os.system('mkdir hindi')
-os.chdir('hindi')
-os.system('wget -q https://storage.googleapis.com/vakyaansh-open-models/translit_models/hindi/hindi_transliteration.zip')
-os.system('unzip hindi_transliteration')
-
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/hindi/hi-IN/female_voice_0/glow.zip')
-os.system('unzip glow.zip')
-
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/hindi/hi-IN/female_voice_0/hifi.zip')
-os.system('unzip hifi.zip')
-os.system('rm glow.zip')
-os.system('rm hifi.zip')
-
-os.system('mkdir male')
-os.chdir('male')
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/hindi/hi-IN/male_voice_1/glow.zip')
-os.system('unzip glow.zip')
-
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/hindi/hi-IN/male_voice_1/hifi.zip')
-os.system('unzip hifi.zip')
-
-
-os.system('pwd')
-os.system('rm glow.zip')
-os.system('rm hifi.zip')
-os.system('pip uninstall -y numpy')
-os.system('pip install numpy==1.19.5')
-os.system('pip uninstall -y numba')
-os.system('pip install numba==0.53')
-
-os.chdir('/home/user/app/')
-os.system('pwd')
-#print('hello')
-
-from vakyansh_tts.tts_infer.tts import TextToMel, MelToWav
-from vakyansh_tts.tts_infer.transliterate import XlitEngine
-from vakyansh_tts.tts_infer.num_to_word_on_sent import normalize_nums
-
-import re
-from scipy.io.wavfile import write
-device = 'cpu'
-
-text_to_mel_f = TextToMel(glow_model_dir='/home/user/app/vakyansh_tts/tts_infer/translit_models/hindi/glow_ckp', device=device)
-mel_to_wav_f = MelToWav(hifi_model_dir='/home/user/app/vakyansh_tts/tts_infer/translit_models/hindi/hifi_ckp', device=device)
-text_to_mel_m = TextToMel(glow_model_dir='/home/user/app/vakyansh_tts/tts_infer/translit_models/hindi/male/glow_ckp', device=device)
-mel_to_wav_m = MelToWav(hifi_model_dir='/home/user/app/vakyansh_tts/tts_infer/translit_models/hindi/male/hifi_ckp', device=device)
-
-
-def translit(text, lang):
- reg = re.compile(r'[a-zA-Z]')
- engine = XlitEngine(lang)
- words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()]
- updated_sent = ' '.join(words)
- return updated_sent
-
-def run_tts(text, gender):
- print("Original Text from user: ", text)
- lang='hi'
- text = text.replace('।', '.') # only for hindi models
- text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang
- text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang
- print("Text after preprocessing: ", text_num_to_word_and_transliterated)
- if gender == 'female':
- mel = text_to_mel_f.generate_mel(text_num_to_word_and_transliterated)
- audio, sr = mel_to_wav_f.generate_wav(mel)
- else:
- mel = text_to_mel_m.generate_mel(text_num_to_word_and_transliterated)
- audio, sr = mel_to_wav_m.generate_wav(mel)
- #write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed
- return (sr, audio)
-
-#_, audio = run_tts('hello my name is harveen')
-
-
-textbox = gr.inputs.Textbox(
- placeholder="Enter Hindi text here", default="", label="TTS"
-)
-
-choices = ['male', 'female']
-radioBtns = gr.inputs.Radio(choices, type="value", default='male', label=None)
-
-op = gr.outputs.Audio(type="numpy", label=None)
-examples = [['क्रिप्टो करेंसी दरअसल, वित्तीय लेन-देन का एक जरिया है। बिल्कुल भारतीय रुपये और अमेरिकी डॉलर के समान, अंतर सिर्फ इतना है कि यह आभाषी है और दिखाई नहीं देती, न ही आप इसे छू सकते हैं।', 'male'],
- ['mujhe abhi bhi yakeen nai aa raha ki yeh aise bhi chal sakta hai', 'male'],
- ['मुझे 26 रुपए दे दो, फिर मेरे पास 50 रुपए हो जाएंगे', 'male']]
-
-iface = gr.Interface(fn=run_tts, examples=examples, inputs=[textbox,radioBtns], outputs=op, title='Vakyansh Text To Speech (TTS): Hindi Demo', description = 'Glow TTS + hifi gan. Training Code: https://github.com/Open-Speech-EkStep/vakyansh-tts ' , article = ' Note: This space is running on CPU, inference times will be higher. Please report issues to @harveenchadha twitter. ')
-iface.launch(enable_queue=True, cache_examples=True)
diff --git a/spaces/Hexamind/QnA/src/model/block.py b/spaces/Hexamind/QnA/src/model/block.py
deleted file mode 100644
index da01e982a6ce409a4bdfbe6e64abaf20d0e32755..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/QnA/src/model/block.py
+++ /dev/null
@@ -1,44 +0,0 @@
-class Block:
- def __init__(self, doc: str = '', title: str = '', content: str = '', content_fr: str = '',
- index: str = '', rank: int = 0, level: int = 0, distance: float = 99999):
- self.doc = doc
- self.title = title
- self.title_fr = ""
- self.content = content
- self.content_fr = content_fr
- self.specials = []
- self.index = index
- self.rank = rank
- self.level = level
- self.distance = distance
-
-    def to_dict(self) -> dict:
- block_dict = {'doc': self.doc, 'title': self.title, 'title_fr': self.title_fr, 'content': self.content,
- 'content_fr': self.content_fr, 'index': self.index, 'rank': self.rank, 'level': self.level,
- 'distance': self.distance}
- for i, s in enumerate(self.specials):
- special_key = 'special_'+str(i)
- block_dict[special_key] = s
- block_dict['specials_len'] = len(self.specials)
- return block_dict
-
-    def from_dict(self, block_dict: dict):
- self.doc = block_dict['doc']
- self.title = block_dict['title']
- self.title_fr = block_dict['title_fr']
- self.content = block_dict['content']
- self.content_fr = block_dict['content_fr']
- self.index = block_dict['index']
- self.rank = block_dict['rank']
- self.level = block_dict['level']
- self.distance = block_dict['distance']
- self.specials = []
- for i in range(block_dict['specials_len']):
- special_key = 'special_' + str(i)
- self.specials.append(block_dict[special_key])
- return self
-
- @property
- def distance_str(self) -> str:
- return format(self.distance, '.2f')
-
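A quick round-trip through the serialization helpers above: to_dict flattens the specials list into numbered special_i keys plus specials_len, and from_dict rebuilds it (the field values here are made up):

b = Block(doc="handbook.docx", title="Scope", content="Applies to all staff.", index="1.2", rank=3, level=2)
b.specials = ["annex A", "annex B"]

d = b.to_dict()                   # specials become special_0, special_1 plus specials_len
restored = Block().from_dict(d)   # from_dict returns self, so the call chains cleanly

assert restored.title == b.title and restored.specials == b.specials
print(restored.distance_str)      # "99999.00", the default distance formatted to two decimals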
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/test_data/blocks_configs.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/test_data/blocks_configs.py
deleted file mode 100644
index 11dfa3a8da5efe45127f8e967ce5a0530f82b771..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/test_data/blocks_configs.py
+++ /dev/null
@@ -1,653 +0,0 @@
-XRAY_CONFIG = {
- "version": "3.4b3\n",
- "mode": "blocks",
- "dev_mode": True,
- "components": [
- {
- "id": 27,
- "type": "markdown",
- "props": {
- "value": "
diff --git a/spaces/bioriAsaeru/text-to-voice/Download The Tarzan A New York Full !FREE! Movie Italian Dubbed In Torrent.md b/spaces/bioriAsaeru/text-to-voice/Download The Tarzan A New York Full !FREE! Movie Italian Dubbed In Torrent.md
deleted file mode 100644
index 271c89f96843e13e914c08892737af47e36ef5fa..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Download The Tarzan A New York Full !FREE! Movie Italian Dubbed In Torrent.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
silent hill full movie in hindi dubbedgolkes From Hell (2001)doul Audio Eng-hindi 1 download film serangan umum 1 maretinstmank supply chain logistics management bowersox 4th pdf free spss 21 32 bit torrentbfdcm hauppauge wintv v7 2 28147 w extend iso.rar CRACK Rosetta Stone v3 Latin Speech Preinstalled.exe Ashes Remain What I Ve Become Lossless.rar Magic Partition Recovery 2.6 Portable KeyGen - Crackingpatch Serial Key Keygen Contaplus elite 2013 taringa
-
Download the Tarzan a New York full movie italian dubbed in torrent
jimmy eat world bleed american deluxe zip Sachin - A Billion Dreams movie hindi download mp4 tabliczka do wydruku pdf Asus dsl-n55u custom firmware Twinbridge Chinese Partner V65 Premium Edition 23 terjemah kitab syamsul maarif kubro zip osteopathy in the cranial field magoun pdf 16 storm front epub download dresden files 80 HD Online Player (Matrubhoomi Movie Download 720p) gears of war 3 pc download utorrent for 167
-
ReFX Nexus v2.2 VSTi RTAS DVDR Crack .rar guitar hero 3 psp cso download microcat daihatsu dongle crack free nfs most wanted copspeech big sound file rapidshare Adobe After Effects CC 2018 17.1.1.14 (x64) Patch crack 3ds max vray material library free download torrent hidraulica de tuberias y canales arturo rocha pdf solucionario non conventional energy resources book by hasan saeed free download Bijoy Ekattor 2012 KeygenaRnE] Sultan - The Warrior hindi dubbed movie download hd
-
Film Documentario Bob Marley 2012 Download Torrent Ita Devon.Erotique.XXX.DVDRip.XviD-LUST Saxy Mom Ki Jungle Me Chodai Hindi Story nadiya ke paar full movie mp4 download Timepass 2 Online Watch Dailymotion 720p Grade 7 Math Textbook Nelson.pdf Baahubali 2 - The Conclusion full mp4 movie download fifa manager 12 patch 1.0.0.1 crack Pasanga 2 Hd Tamil Movie Free 122 heat treatment by rajan and sharma pdf free 161
-
-
the Joker full movie mp4 download wbs schedule pro 5.1 crack download kanye west graduation zip file CRACK Waves - Complete v10 2018.08.07 (VST, VST3, AAX, STANDALONE) x64 descargar programa para hacer horarios escolares gratis Torrent ciel compta mac 2013 word power by dilip kushwaha pdf 27 enzai oav 1 vostfr non censure alludu seenu movie hd 720p dvdrip bacaan ratib al attas pdf download
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/breadlicker45/the-jam-machine-app/familizer.py b/spaces/breadlicker45/the-jam-machine-app/familizer.py
deleted file mode 100644
index a55b29c4612a4541162fbb1cd1d7e3d1795758dc..0000000000000000000000000000000000000000
--- a/spaces/breadlicker45/the-jam-machine-app/familizer.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import random
-from joblib import Parallel, delayed
-from pathlib import Path
-from constants import INSTRUMENT_CLASSES, INSTRUMENT_TRANSFER_CLASSES
-from utils import get_files, timeit, FileCompressor
-
-
-class Familizer:
- def __init__(self, n_jobs=-1, arbitrary=False):
- self.n_jobs = n_jobs
- self.reverse_family(arbitrary)
-
- def get_family_number(self, program_number):
- """
- Given a MIDI instrument number, return its associated instrument family number.
- """
- for instrument_class in INSTRUMENT_CLASSES:
- if program_number in instrument_class["program_range"]:
- return instrument_class["family_number"]
-
- def reverse_family(self, arbitrary):
- """
- Create a dictionary of family numbers to randomly assigned program numbers.
- This is used to reverse the family number tokens back to program number tokens.
- """
-
- if arbitrary is True:
- int_class = INSTRUMENT_TRANSFER_CLASSES
- else:
- int_class = INSTRUMENT_CLASSES
-
- self.reference_programs = {}
- for family in int_class:
- self.reference_programs[family["family_number"]] = random.choice(
- family["program_range"]
- )
-
- def get_program_number(self, family_number):
- """
-        Given a family number, return a random program number from the corresponding program_range.
- This is the reverse operation of get_family_number.
- """
- assert family_number in self.reference_programs
- return self.reference_programs[family_number]
-
- # Replace instruments in text files
- def replace_instrument_token(self, token):
- """
- Given a MIDI program number in a word token, replace it with the family or program
- number token depending on the operation.
- e.g. INST=86 -> INST=10
- """
- inst_number = int(token.split("=")[1])
- if self.operation == "family":
- return "INST=" + str(self.get_family_number(inst_number))
- elif self.operation == "program":
- return "INST=" + str(self.get_program_number(inst_number))
-
- def replace_instrument_in_text(self, text):
- """Given a text piece, replace all instrument tokens with family number tokens."""
- return " ".join(
- [
- self.replace_instrument_token(token)
- if token.startswith("INST=") and not token == "INST=DRUMS"
- else token
- for token in text.split(" ")
- ]
- )
-
- def replace_instruments_in_file(self, file):
- """Given a text file, replace all instrument tokens with family number tokens."""
- text = file.read_text()
- file.write_text(self.replace_instrument_in_text(text))
-
- @timeit
- def replace_instruments(self):
- """
- Given a directory of text files:
- Replace all instrument tokens with family number tokens.
- """
- files = get_files(self.output_directory, extension="txt")
- Parallel(n_jobs=self.n_jobs)(
- delayed(self.replace_instruments_in_file)(file) for file in files
- )
-
- def replace_tokens(self, input_directory, output_directory, operation):
- """
- Given a directory and an operation, perform the operation on all text files in the directory.
- operation can be either 'family' or 'program'.
- """
- self.input_directory = input_directory
- self.output_directory = output_directory
- self.operation = operation
-
- # Uncompress files, replace tokens, compress files
- fc = FileCompressor(self.input_directory, self.output_directory, self.n_jobs)
- fc.unzip()
- self.replace_instruments()
- fc.zip()
- print(self.operation + " complete.")
-
- def to_family(self, input_directory, output_directory):
- """
- Given a directory containing zip files, replace all instrument tokens with
- family number tokens. The output is a directory of zip files.
- """
- self.replace_tokens(input_directory, output_directory, "family")
-
- def to_program(self, input_directory, output_directory):
- """
- Given a directory containing zip files, replace all instrument tokens with
- program number tokens. The output is a directory of zip files.
- """
- self.replace_tokens(input_directory, output_directory, "program")
-
-
-if __name__ == "__main__":
-
- # Choose number of jobs for parallel processing
- n_jobs = -1
-
- # Instantiate Familizer
- familizer = Familizer(n_jobs)
-
- # Choose directory to process for program
- input_directory = Path("midi/dataset/first_selection/validate").resolve() # fmt: skip
- output_directory = input_directory / "family"
-
- # familize files
- familizer.to_family(input_directory, output_directory)
-
- # Choose directory to process for family
- # input_directory = Path("../data/music_picks/encoded_samples/validate/family").resolve() # fmt: skip
- # output_directory = input_directory.parent / "program"
-
- # # programize files
- # familizer.to_program(input_directory, output_directory)
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/inference.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/inference.py
deleted file mode 100644
index 81049649edddb23aeebeac4085514da838f1463b..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/inference.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from dataclasses import fields
-from typing import Any, List
-import torch
-
-from detectron2.structures import Instances
-
-
-def densepose_inference(densepose_predictor_output: Any, detections: List[Instances]) -> None:
- """
- Splits DensePose predictor outputs into chunks, each chunk corresponds to
- detections on one image. Predictor output chunks are stored in `pred_densepose`
- attribute of the corresponding `Instances` object.
-
- Args:
- densepose_predictor_output: a dataclass instance (can be of different types,
- depending on predictor used for inference). Each field can be `None`
- (if the corresponding output was not inferred) or a tensor of size
- [N, ...], where N = N_1 + N_2 + .. + N_k is a total number of
- detections on all images, N_1 is the number of detections on image 1,
- N_2 is the number of detections on image 2, etc.
- detections: a list of objects of type `Instance`, k-th object corresponds
- to detections on k-th image.
- """
- k = 0
- for detection_i in detections:
- if densepose_predictor_output is None:
- # don't add `pred_densepose` attribute
- continue
-        n_i = len(detection_i)
-
- PredictorOutput = type(densepose_predictor_output)
- output_i_dict = {}
- # we assume here that `densepose_predictor_output` is a dataclass object
- for field in fields(densepose_predictor_output):
- field_value = getattr(densepose_predictor_output, field.name)
- # slice tensors
- if isinstance(field_value, torch.Tensor):
- output_i_dict[field.name] = field_value[k : k + n_i]
- # leave others as is
- else:
- output_i_dict[field.name] = field_value
- detection_i.pred_densepose = PredictorOutput(**output_i_dict)
- k += n_i
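The slicing above only assumes the predictor output is a dataclass whose tensor fields are batched along dim 0, so it can be exercised with a toy output type. A minimal sketch (the dataclass, field names, and sizes are invented for illustration):

from dataclasses import dataclass, fields
import torch

@dataclass
class ToyOutput:
    coarse_segm: torch.Tensor  # [N, ...], one row per detection across all images
    fine_segm: torch.Tensor

out = ToyOutput(coarse_segm=torch.zeros(5, 2), fine_segm=torch.zeros(5, 7))
dets_per_image = [2, 0, 3]     # N = 2 + 0 + 3 detections over three images

k = 0
for n_i in dets_per_image:
    chunk = {f.name: getattr(out, f.name)[k : k + n_i] for f in fields(out)}
    per_image = ToyOutput(**chunk)        # the same per-image slicing densepose_inference performs
    print(per_image.coarse_segm.shape)    # torch.Size([2, 2]), then [0, 2], then [3, 2]
    k += n_i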
diff --git a/spaces/bubbletea98/Neo4J_Integration/app.py b/spaces/bubbletea98/Neo4J_Integration/app.py
deleted file mode 100644
index 68c13f6c979721057cf4f403613f8710c19fca97..0000000000000000000000000000000000000000
--- a/spaces/bubbletea98/Neo4J_Integration/app.py
+++ /dev/null
@@ -1,255 +0,0 @@
-drop_down_l=['Autoencoders',
- 'Dirichlet Processes',
- 'Gaussian graphical models',
- 'Manifold Learning',
- 'Markov Random Fields',
- 'Markov chains',
- 'Mean Field Approximation',
- 'Message Passing',
- 'Meta-Learning',
- 'Mixture Models',
- 'Naive Bayes',
- 'Principal Component Analysis',
- 'ResNet',
- 'Sampling',
- 'Sequence to sequence',
- 'State Space Models',
- 'Unsupervised learning',
- 'Variations of GANs',
- 'Visual QA',
- 'a* search',
- 'adversarial search',
- 'agent-based view of ai',
- 'automated essay scoring',
- 'autonomous cars',
- 'backpropagation',
- 'bag of words model',
- 'bayes theorem',
- 'bayesian network',
- 'beam search',
- 'bio text mining',
- 'character level language models',
- 'chinese nlp',
- 'chomsky hierarchy',
- 'citation networks',
- 'cky parsing',
- 'classic parsing methods',
- 'classification',
- 'clustering',
- 'computation theory',
- 'computational phonology',
- 'computer vision',
- 'context free grammar',
- 'context free grammars',
- 'context sensitive grammar',
- 'convolutional neural network',
- 'convolutional neural networks',
- 'course introduction',
- 'deep learning introduction',
- 'dependency parsing',
- 'dependency syntax',
- 'dialog systems',
- 'dimensionality reduction',
- 'discourse analysis',
- 'discourse parsing',
- 'document ranking',
- 'document representation',
- 'dual decomposition',
- 'dynamic programming',
- 'edit distance',
- 'entailment',
- 'evaluation of dependency parsing',
- 'evaluation of language modeling',
- 'evaluation of question answering',
- 'evaluation of text classification',
- 'event detection',
- 'expectation maximization algorithm',
- 'expert systems',
- 'feature learning',
- 'feature selection',
- 'finite state machines',
- 'finite state transducers',
- 'first order logic',
- 'first-order logic',
- 'game playing in ai',
- 'gated recurrent units',
- 'generative adversarial networks',
- 'generative and discriminative models',
- 'gibbs sampling',
- 'grammar checker',
- 'graph convolutional networks',
- 'graph theory',
- 'graph-based nlp',
- 'graphical models',
- 'harmonic functions',
- 'heuristic search',
- 'hidden markov models',
- 'image retrieval',
- 'information extraction',
- 'information retrieval',
- 'informed search',
- 'kernel function',
- 'kernels',
- 'knowledge representation',
- 'language identification',
- 'language modeling',
- 'latent dirichlet allocation',
- 'latent semantic indexing',
- 'latent variable models',
- 'lexical semantics',
- 'lexicalized parsing',
- 'lexicography',
- 'linear algebra',
- 'linear discriminant analysis',
- 'linear regression',
- 'linguistics basics',
- 'log-linear models',
- 'logic and logical agents',
- 'logic and reasoning',
- 'logistic regression',
- 'long short term memory networks',
- 'loss function',
- 'machine learning resources',
- 'machine translation',
- 'machine translation techniques',
- 'markov chain monte carlo',
- 'markov decision processes',
- 'mathematical models',
- 'matrix factorization',
- 'matrix multiplication',
- 'maximum likelihood estimation',
- 'memory networks',
- 'monte carlo methods',
- 'monte carlo tree search',
- 'morphology and lexicon',
- 'morphology and semantics in machine translation',
- 'multi-agent systems',
- 'multilingual word embedding',
- 'n-gram models',
- 'natural language processing intro',
- 'neural language modeling',
- 'neural machine translation',
- 'neural networks',
- 'neural parsing',
- 'neural question answering',
- 'neural summarization',
- 'neural turing machine',
- 'newton method',
- 'nlp and vision',
- 'nlp for biology',
- 'nlp for the humanities',
- 'noisy channel model',
- 'optimization',
- 'pagerank',
- 'parsing',
- 'parsing evaluation',
- 'part of speech tagging',
- 'particle filter',
- 'parts of speech',
- 'penn treebank',
- 'phonetics',
- 'planning',
- 'pointer networks',
- 'predicate logic',
- 'preprocessing',
- 'probabilistic context free grammars',
- 'probabilistic grammars',
- 'probabilities',
- 'problem solving and search',
- 'programming languages',
- 'propositional logic',
- 'prosody',
- 'python',
- 'question answering',
- 'random walks',
- 'random walks and harmonic functions',
- 'recommendation system',
- 'recurrent neural networks',
- 'recursive neural network',
- 'recursive neural networks',
- 'regular expressions',
- 'reinforcement learning',
- 'relation extraction',
- 'robotic locomotion',
- 'robotics',
- 'scientific article summarization',
- 'search',
- 'search engines',
- 'semantic parsing',
- 'semantic role labeling',
- 'semantic similarity',
- 'semi supervised learning',
- 'semi-supervised learning',
- 'sentence boundary recognition',
- 'sentence representations',
- 'sentence simplification',
- 'sentiment analysis',
- 'seq2seq',
- 'sequence classification and conditional random fields',
- 'shallow parsing',
- 'shift-reduce parsing',
- 'singular value decomposition',
- 'social media analysis',
- 'social network extraction',
- 'spectral clustering',
- 'spectral methods',
- 'speech processing',
- 'speech signal analysis',
- 'speech synthesis',
- 'spelling correction',
- 'stack lstm',
- 'statistical machine translation',
- 'statistical parsing',
- 'statistical part of speech tagging',
- 'structured learning',
- 'structured sparsity',
- 'summarization evaluation',
- 'syntax',
- 'syntax based machine translation',
- 'syntaxnet',
- 'text generation',
- 'text mining',
- 'text similarity',
- 'text summarization',
- 'text to speech generation',
- 'the ibm models',
- 'thesaurus-based similarity',
- 'tokenization',
- 'toolkits for information retrieval',
- 'tools for dl',
- 'topic modeling',
- 'training neural networks',
- 'transition based dependency parsing',
- 'tree adjoining grammar',
- 'uncertainty',
- 'variational bayes models',
- 'vector representations',
- 'vector semantics',
- 'weakly-supervised learning',
- 'word distributions',
- 'word embedding',
- 'word embedding variations',
- 'word segmentation',
- 'word sense disambiguation',
- 'wordnet']
-domain_list=['ml', 'nlp', 'dl', 'deep-rl', 'pr', 'ai']
-import gradio as gr
-from py2neo import Graph
-
-def greet(topic_name, lecture_name, URL_link,author,year,domain):
- graph = Graph("neo4j+s://e4991d17.databases.neo4j.io", auth=("neo4j", "ohM5LvdhcutqyRQTqk6PpuiDofvpA4o3pXLpC9kb4_g"),routing=True)
- qs="MATCH (T:topic)\
- Where T.id_topic=$topic\
- CREATE (test:lecture{topic:$topic,name:$lecture,id:$URL,year:$year,author:$author,domain:$domain})-[:lecture_of]->(T);"
- graph.run(qs,topic=topic_name,lecture=lecture_name,URL=URL_link,author=author,year=year,domain=domain)
- return 'Finished Adding'
-
-iface = gr.Interface(
- fn=greet,
- inputs=[
- gr.inputs.Dropdown(drop_down_l, label="Topic belonging"),
- 'text','text','text','text',
- gr.inputs.Radio(domain_list, label="Which Domain this lecture is belonging ?"),
- ],
- outputs=['text'])
-iface.launch()
\ No newline at end of file
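As a sanity check on the Cypher above, the lecture node created by greet can be read back with a matching query. This is a hypothetical verification snippet: it assumes a py2neo Graph handle constructed the same way as in greet, and "word embedding" is simply one topic from the dropdown list:

rows = graph.run(
    "MATCH (l:lecture)-[:lecture_of]->(T:topic {id_topic: $topic}) "
    "RETURN l.name AS name, l.author AS author, l.year AS year",
    topic="word embedding",
).data()                        # py2neo Cursor -> list of dicts
for r in rows:
    print(r["name"], r["author"], r["year"])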
diff --git a/spaces/cakewalk/splat/index.html b/spaces/cakewalk/splat/index.html
deleted file mode 100644
index 4be79e8cea9e02371af6f75fc516aa570c6f07e5..0000000000000000000000000000000000000000
--- a/spaces/cakewalk/splat/index.html
+++ /dev/null
@@ -1,262 +0,0 @@
-
-
-
- WebGL Gaussian Splat Viewer
-
-
-
-
-
-
-
-
-
movement (arrow keys)
-- left/right arrow keys to strafe side to side
-- up/down arrow keys to move forward/back
-- space to jump
-
-camera angle (wasd)
-- a/d to turn camera left/right
-- w/s to tilt camera up/down
-- q/e to roll camera counterclockwise/clockwise
-- i/k and j/l to orbit
-
-trackpad
-- scroll up/down/left/right to orbit
-- pinch to move forward/back
-- ctrl key + scroll to move forward/back
-- shift + scroll to move up/down or strafe
-
-mouse
-- click and drag to orbit
-- right click (or ctrl/cmd key) and drag up/down to move
-
-touch (mobile)
-- one finger to orbit
-- two finger pinch to move forward/back
-- two finger rotate to rotate camera clockwise/counterclockwise
-- two finger pan to move side-to-side and up-down
-
-other
-- press 0-9 to switch to one of the pre-loaded camera views
-- press p to resume default animation
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/modules.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/modules.py
deleted file mode 100644
index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/modules.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-  Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
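-  # WaveNet-style stack of dilated 1D convolutions with gated (tanh/sigmoid) activations
-  # and optional global conditioning via g.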
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels =hidden_channels
-    self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/__init__.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/__init__.py
deleted file mode 100644
index 2f93cab80ded8e7239bb96eb6e364c3fd4fb46d9..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .ldm import LatentDiffusion
-from .utils import seed_everything
-from .pipeline import *
\ No newline at end of file
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/README.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/README.md
deleted file mode 100644
index 1ca9c94d042ef838143a45490fe6b4556c19f3c9..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/docs/tutorials/README.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Read the docs:
-
-The latest documentation built from this directory is available at [detectron2.readthedocs.io](https://detectron2.readthedocs.io/).
-Documents in this directory are not meant to be read on GitHub.
diff --git a/spaces/cbr/swp/utils.py b/spaces/cbr/swp/utils.py
deleted file mode 100644
index 2a74e9e795af9f6e7f78e28520617753beee36ef..0000000000000000000000000000000000000000
--- a/spaces/cbr/swp/utils.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import os
-import cv2
-import numpy as np
-import time
-import glob
-import shutil
-import platform
-import datetime
-import subprocess
-from threading import Thread
-from moviepy.editor import VideoFileClip, ImageSequenceClip
-from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip
-
-
-def trim_video(video_path, output_path, start_frame, stop_frame):
- video_name, _ = os.path.splitext(os.path.basename(video_path))
- trimmed_video_filename = video_name + "_trimmed" + ".mp4"
- temp_path = os.path.join(output_path, "trim")
- os.makedirs(temp_path, exist_ok=True)
- trimmed_video_file_path = os.path.join(temp_path, trimmed_video_filename)
-
- video = VideoFileClip(video_path)
- fps = video.fps
- start_time = start_frame / fps
- duration = (stop_frame - start_frame) / fps
-
- trimmed_video = video.subclip(start_time, start_time + duration)
- trimmed_video.write_videofile(
- trimmed_video_file_path, codec="libx264", audio_codec="aac"
- )
- trimmed_video.close()
- video.close()
-
- return trimmed_video_file_path
-
-
-def open_directory(path=None):
- if path is None:
- return
- try:
- os.startfile(path)
-    except (AttributeError, OSError):  # os.startfile is Windows-only; fall back to xdg-open
- subprocess.Popen(["xdg-open", path])
-
-
-class StreamerThread(object):
- def __init__(self, src=0):
- self.capture = cv2.VideoCapture(src)
- self.capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)
- self.FPS = 1 / 30
- self.FPS_MS = int(self.FPS * 1000)
- self.thread = None
- self.stopped = False
- self.frame = None
-
- def start(self):
- self.thread = Thread(target=self.update, args=())
- self.thread.daemon = True
- self.thread.start()
-
- def stop(self):
- self.stopped = True
- self.thread.join()
- print("stopped")
-
- def update(self):
- while not self.stopped:
- if self.capture.isOpened():
- (self.status, self.frame) = self.capture.read()
- time.sleep(self.FPS)
-
-
-class ProcessBar:
- def __init__(self, bar_length, total, before="⬛", after="🟨"):
- self.bar_length = bar_length
- self.total = total
- self.before = before
- self.after = after
- self.bar = [self.before] * bar_length
- self.start_time = time.time()
-
- def get(self, index):
- total = self.total
- elapsed_time = time.time() - self.start_time
- average_time_per_iteration = elapsed_time / (index + 1)
- remaining_iterations = total - (index + 1)
- estimated_remaining_time = remaining_iterations * average_time_per_iteration
-
- self.bar[int(index / total * self.bar_length)] = self.after
- info_text = f"({index+1}/{total}) {''.join(self.bar)} "
- info_text += f"(ETR: {int(estimated_remaining_time // 60)} min {int(estimated_remaining_time % 60)} sec)"
- return info_text
-
-
-logo_image = cv2.imread("./assets/images/logo.png", cv2.IMREAD_UNCHANGED)
-
-
-def add_logo_to_image(img, logo=logo_image):
- logo_size = int(img.shape[1] * 0.1)
- logo = cv2.resize(logo, (logo_size, logo_size))
- if logo.shape[2] == 4:
- alpha = logo[:, :, 3]
- else:
- alpha = np.ones_like(logo[:, :, 0]) * 255
- padding = int(logo_size * 0.1)
- roi = img.shape[0] - logo_size - padding, img.shape[1] - logo_size - padding
- for c in range(0, 3):
- img[roi[0] : roi[0] + logo_size, roi[1] : roi[1] + logo_size, c] = (
- alpha / 255.0
- ) * logo[:, :, c] + (1 - alpha / 255.0) * img[
- roi[0] : roi[0] + logo_size, roi[1] : roi[1] + logo_size, c
- ]
- return img
diff --git a/spaces/ccolas/TastyPiano/src/cocktails/pipeline/get_affect2affective_cluster.py b/spaces/ccolas/TastyPiano/src/cocktails/pipeline/get_affect2affective_cluster.py
deleted file mode 100644
index 6b0cd8cc37195869643cd591b9cf4585d7ff3c4a..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/cocktails/pipeline/get_affect2affective_cluster.py
+++ /dev/null
@@ -1,23 +0,0 @@
-from src.music.config import CHECKPOINTS_PATH
-import pickle
-import numpy as np
-
-# can be computed from cocktail2affect
-cluster_model_path = CHECKPOINTS_PATH + "/music2cocktails/affects2affect_cluster/cluster_model.pickle"
-
-def get_affect2affective_cluster():
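-    # Load the pickled cluster model and dimension weights, and return a function that maps
-    # affective coordinates to the id of their affective cluster.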
- with open(cluster_model_path, 'rb') as f:
- data = pickle.load(f)
- model = data['cluster_model']
- dimensions_weights = data['dimensions_weights']
- def find_cluster(aff_coord):
- if aff_coord.ndim == 1:
- aff_coord = aff_coord.reshape(1, -1)
- return model.predict(aff_coord * np.array(dimensions_weights))
- return find_cluster
-
-def get_affective_cluster_centers():
- with open(cluster_model_path, 'rb') as f:
- data = pickle.load(f)
- return data['cluster_centers']
-
diff --git a/spaces/chansung/zero2story/interfaces/story_gen_ui.py b/spaces/chansung/zero2story/interfaces/story_gen_ui.py
deleted file mode 100644
index 83132505c3301a37ab8d4ab25872210f257afd81..0000000000000000000000000000000000000000
--- a/spaces/chansung/zero2story/interfaces/story_gen_ui.py
+++ /dev/null
@@ -1,451 +0,0 @@
-import re
-import copy
-import random
-import gradio as gr
-from gradio_client import Client
-from pathlib import Path
-
-from modules import (
- ImageMaker, MusicMaker, merge_video
-)
-from modules.llms import get_llm_factory
-from interfaces import utils
-
-from pingpong import PingPong
-from pingpong.context import CtxLastWindowStrategy
-
-img_maker = ImageMaker('https://huggingface.co/jphan32/Zero2Story/landscapeAnimePro_v20Inspiration.safetensors',
- vae="https://huggingface.co/jphan32/Zero2Story/cute20vae.safetensors")
-
-bgm_maker = MusicMaker(model_size='small', output_format='mp3')
-
-video_gen_client_url = None # e.g. "https://0447df3cf5f7c49c46.gradio.live"
-
-
-async def update_story_gen(
- cursors, cur_cursor_idx,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
-):
- if len(cursors) == 1:
- return await first_story_gen(
- cursors,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
- cur_cursor_idx=cur_cursor_idx
- )
- else:
- return await next_story_gen(
- cursors,
- None,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
- cur_cursor_idx=cur_cursor_idx
- )
-
-async def next_story_gen(
- cursors,
- action,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
- cur_cursor_idx=None,
- llm_type="PaLM"
-):
- factory = get_llm_factory(llm_type)
- prompts = factory.create_prompt_manager().prompts
- llm_service = factory.create_llm_service()
-
- stories = ""
- cur_side_chars = 1
-
- action = cursors[cur_cursor_idx]["action"] if cur_cursor_idx is not None else action
- end_idx = len(cursors) if cur_cursor_idx is None else len(cursors)-1
-
- for cursor in cursors[:end_idx]:
- stories = stories + cursor["story"]
-
- side_char_prompt = utils.add_side_character(
- [side_char_enable1, side_char_enable2, side_char_enable3],
- [side_char_name1, side_char_name2, side_char_name3],
- [side_char_job1, side_char_job2, side_char_job3],
- [side_char_age1, side_char_age2, side_char_age3],
- [side_char_personality1, side_char_personality2, side_char_personality3],
- )
-
- prompt = prompts['story_gen']['next_story_gen'].format(
- genre=genre, place=place, mood=mood,
- main_char_name=main_char_name,
- main_char_job=main_char_job,
- main_char_age=main_char_age,
- main_char_personality=main_char_personality,
- side_char_placeholder=side_char_prompt,
- stories=stories, action=action,
- )
-
- print(f"generated prompt:\n{prompt}")
- parameters = llm_service.make_params(mode="text", temperature=1.0, top_k=40, top_p=0.9, max_output_tokens=4096)
- try:
- response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
- except Exception as e:
- print(e)
- raise gr.Error(e)
-
- story = response_json["paragraphs"]
- if isinstance(story, list):
- story = "\n\n".join(story)
-
- if cur_cursor_idx is None:
- cursors.append({
- "title": "",
- "story": story,
- "action": action
- })
- else:
- cursors[cur_cursor_idx]["story"] = story
- cursors[cur_cursor_idx]["action"] = action
-
- return (
- cursors, len(cursors)-1,
- story,
- gr.update(
- maximum=len(cursors), value=len(cursors),
- label=f"{len(cursors)} out of {len(cursors)} stories",
- visible=True, interactive=True
- ),
- gr.update(interactive=True),
- gr.update(interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- )
-
-async def actions_gen(
- cursors,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
- cur_cursor_idx=None,
- llm_type="PaLM"
-):
- factory = get_llm_factory(llm_type)
- prompts = factory.create_prompt_manager().prompts
- llm_service = factory.create_llm_service()
-
- stories = ""
- cur_side_chars = 1
- end_idx = len(cursors) if cur_cursor_idx is None else len(cursors)-1
-
- for cursor in cursors[:end_idx]:
- stories = stories + cursor["story"]
-
- summary_prompt = prompts['story_gen']['summarize'].format(stories=stories)
-
- print(f"generated prompt:\n{summary_prompt}")
- parameters = llm_service.make_params(mode="text", temperature=1.0, top_k=40, top_p=1.0, max_output_tokens=4096)
-
- try:
- _, summary = await llm_service.gen_text(summary_prompt, mode="text", parameters=parameters)
- except Exception as e:
- print(e)
- raise gr.Error(e)
-
- side_char_prompt = utils.add_side_character(
- [side_char_enable1, side_char_enable2, side_char_enable3],
- [side_char_name1, side_char_name2, side_char_name3],
- [side_char_job1, side_char_job2, side_char_job3],
- [side_char_age1, side_char_age2, side_char_age3],
- [side_char_personality1, side_char_personality2, side_char_personality3],
- )
- prompt = prompts['story_gen']['actions_gen'].format(
- genre=genre, place=place, mood=mood,
- main_char_name=main_char_name,
- main_char_job=main_char_job,
- main_char_age=main_char_age,
- main_char_personality=main_char_personality,
- side_char_placeholder=side_char_prompt,
- summary=summary,
- )
-
- print(f"generated prompt:\n{prompt}")
- parameters = llm_service.make_params(mode="text", temperature=1.0, top_k=40, top_p=1.0, max_output_tokens=4096)
- try:
- response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
- except Exception as e:
- print(e)
- raise gr.Error(e)
- actions = response_json["options"]
-
- random_actions = random.sample(actions, 3)
-
- return (
- gr.update(value=random_actions[0], interactive=True),
- gr.update(value=random_actions[1], interactive=True),
- gr.update(value=random_actions[2], interactive=True),
- " "
- )
-
-async def first_story_gen(
- cursors,
- genre, place, mood,
- main_char_name, main_char_age, main_char_personality, main_char_job,
- side_char_enable1, side_char_name1, side_char_age1, side_char_personality1, side_char_job1,
- side_char_enable2, side_char_name2, side_char_age2, side_char_personality2, side_char_job2,
- side_char_enable3, side_char_name3, side_char_age3, side_char_personality3, side_char_job3,
- cur_cursor_idx=None,
- llm_type="PaLM"
-):
- factory = get_llm_factory(llm_type)
- prompts = factory.create_prompt_manager().prompts
- llm_service = factory.create_llm_service()
-
- cur_side_chars = 1
-
- side_char_prompt = utils.add_side_character(
- [side_char_enable1, side_char_enable2, side_char_enable3],
- [side_char_name1, side_char_name2, side_char_name3],
- [side_char_job1, side_char_job2, side_char_job3],
- [side_char_age1, side_char_age2, side_char_age3],
- [side_char_personality1, side_char_personality2, side_char_personality3],
- )
- prompt = prompts['story_gen']['first_story_gen'].format(
- genre=genre, place=place, mood=mood,
- main_char_name=main_char_name,
- main_char_job=main_char_job,
- main_char_age=main_char_age,
- main_char_personality=main_char_personality,
- side_char_placeholder=side_char_prompt,
- )
-
- print(f"generated prompt:\n{prompt}")
- parameters = llm_service.make_params(mode="text", temperature=1.0, top_k=40, top_p=1.0, max_output_tokens=4096)
- try:
- response_json = await utils.retry_until_valid_json(prompt, parameters=parameters)
- except Exception as e:
- print(e)
- raise gr.Error(e)
-
- story = response_json["paragraphs"]
- if isinstance(story, list):
- story = "\n\n".join(story)
-
- if cur_cursor_idx is None:
- cursors.append({
- "title": "",
- "story": story
- })
- else:
- cursors[cur_cursor_idx]["story"] = story
-
- return (
- cursors, len(cursors)-1,
- story,
- gr.update(
- maximum=len(cursors), value=len(cursors),
- label=f"{len(cursors)} out of {len(cursors)} stories",
- visible=False if len(cursors) == 1 else True, interactive=True
- ),
- gr.update(interactive=True),
- gr.update(interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- gr.update(value=None, visible=False, interactive=True),
- )
-
-def video_gen(
- image, audio, title, cursors, cur_cursor, use_ffmpeg=True
-):
- if use_ffmpeg:
- output_filename = merge_video(image, audio, story_title="")
-
- if not use_ffmpeg or not output_filename:
- client = Client(video_gen_client_url)
- result = client.predict(
- "",
- audio,
- image,
- f"{utils.id_generator()}.mp4",
- api_name="/predict"
- )
- output_filename = result[0]
-
- cursors[cur_cursor]["video"] = output_filename
-
- return (
- gr.update(visible=False),
- gr.update(visible=False),
- gr.update(visible=True, value=output_filename),
- cursors,
- " "
- )
-
-
-def image_gen(
- genre, place, mood, title, story_content, cursors, cur_cursor, llm_type="PaLM"
-):
- # generate prompts for background image with LLM
- for _ in range(3):
- try:
- prompt, neg_prompt = img_maker.generate_background_prompts(genre, place, mood, title, "", story_content, llm_type)
- print(f"Image Prompt: {prompt}")
- print(f"Negative Prompt: {neg_prompt}")
- break
- except Exception as e:
- print(e)
- raise gr.Error(e)
-
- if not prompt:
- raise ValueError("Failed to generate prompts for background image.")
-
- # generate image
- try:
- img_filename = img_maker.text2image(prompt, neg_prompt=neg_prompt, ratio='16:9', cfg=6.5)
- except ValueError as e:
- print(e)
- img_filename = str(Path('.') / 'assets' / 'nsfw_warning_wide.png')
-
- cursors[cur_cursor]["img"] = img_filename
-
- return (
- gr.update(visible=True, value=img_filename),
- cursors,
- " "
- )
-
-
-def audio_gen(
- genre, place, mood, title, story_content, cursors, cur_cursor, llm_type="PaLM"
-):
- # generate prompt for background music with LLM
- for _ in range(3):
- try:
- prompt = bgm_maker.generate_prompt(genre, place, mood, title, "", story_content, llm_type)
- print(f"Music Prompt: {prompt}")
- break
- except Exception as e:
- print(e)
- raise gr.Error(e)
-
- if not prompt:
- raise ValueError("Failed to generate prompt for background music.")
-
- # generate music
- bgm_filename = bgm_maker.text2music(prompt, length=60)
- cursors[cur_cursor]["audio"] = bgm_filename
-
- return (
- gr.update(visible=True, value=bgm_filename),
- cursors,
- " "
- )
-
-def move_story_cursor(moved_cursor, cursors):
- cursor_content = cursors[moved_cursor-1]
- max_cursor = len(cursors)
-
- action_btn = (
- gr.update(interactive=False),
- gr.update(interactive=False),
- gr.update(interactive=False)
- )
-
- if moved_cursor == max_cursor:
- action_btn = (
- gr.update(interactive=True),
- gr.update(interactive=True),
- gr.update(interactive=True)
- )
-
- if "video" in cursor_content:
- outputs = (
- moved_cursor-1,
- gr.update(label=f"{moved_cursor} out of {len(cursors)} chapters"),
- cursor_content["story"],
- gr.update(value=None, visible=False),
- gr.update(value=None, visible=False),
- gr.update(value=cursor_content["video"], visible=True),
- )
-
- else:
- image_container = gr.update(value=None, visible=False)
- audio_container = gr.update(value=None, visible=False)
-
- if "img" in cursor_content:
- image_container = gr.update(value=cursor_content["img"], visible=True)
-
- if "audio" in cursor_content:
- audio_container = gr.update(value=cursor_content["audio"], visible=True)
-
- outputs = (
- moved_cursor-1,
- gr.update(label=f"{moved_cursor} out of {len(cursors)} stories"),
- cursor_content["story"],
- image_container,
- audio_container,
- gr.update(value=None, visible=False),
- )
-
- return outputs + action_btn
-
-def update_story_content(story_content, cursors, cur_cursor):
- cursors[cur_cursor]["story"] = story_content
- return cursors
-
-def disable_btns():
- return (
- gr.update(interactive=False), # image_gen_btn
- gr.update(interactive=False), # audio_gen_btn
- gr.update(interactive=False), # img_audio_combine_btn
-
- gr.update(interactive=False), # regen_actions_btn
- gr.update(interactive=False), # regen_story_btn
- gr.update(interactive=False), # custom_prompt_txt
-
- gr.update(interactive=False), # action_btn1
- gr.update(interactive=False), # action_btn2
- gr.update(interactive=False), # action_btn3
-
- gr.update(interactive=False), # custom_action_txt
-
- gr.update(interactive=False), # restart_from_story_generation_btn
- gr.update(interactive=False), # story_writing_done_btn
- )
-
-def enable_btns(story_image, story_audio):
- video_gen_btn_state = gr.update(interactive=False)
-
- if story_image is not None and \
- story_audio is not None:
- video_gen_btn_state = gr.update(interactive=True)
-
- return (
- gr.update(interactive=True), # image_gen_btn
- gr.update(interactive=True), # audio_gen_btn
- video_gen_btn_state, # img_audio_combine_btn
-
- gr.update(interactive=True), # regen_actions_btn
- gr.update(interactive=True), # regen_story_btn
- gr.update(interactive=True), # custom_prompt_txt
-
- gr.update(interactive=True), # action_btn1
- gr.update(interactive=True), # action_btn2
- gr.update(interactive=True), # action_btn3
-
- gr.update(interactive=True), # custom_action_txt
-
- gr.update(interactive=True), # restart_from_story_generation_btn
- gr.update(interactive=True), # story_writing_done_btn
- )
\ No newline at end of file
diff --git a/spaces/chasemcdo/hf_localai/examples/query_data/store.py b/spaces/chasemcdo/hf_localai/examples/query_data/store.py
deleted file mode 100644
index 0d628c81967f2a7829f0597c1681c427d4914643..0000000000000000000000000000000000000000
--- a/spaces/chasemcdo/hf_localai/examples/query_data/store.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-
-# Uncomment to specify your OpenAI API key here (local testing only, not in production!), or add corresponding environment variable (recommended)
-# os.environ['OPENAI_API_KEY']= ""
-
-from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext
-from langchain.llms.openai import OpenAI
-from llama_index import StorageContext, load_index_from_storage
-
-base_path = os.environ.get('OPENAI_API_BASE', 'http://localhost:8080/v1')
-
-# This example uses gpt-3.5-turbo by default; feel free to change the model if desired
-llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo", openai_api_base=base_path))
-
-# Configure prompt parameters and initialise helper
-max_input_size = 400
-num_output = 400
-max_chunk_overlap = 30
-
-prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
-
-# Load documents from the 'data' directory
-documents = SimpleDirectoryReader('data').load_data()
-service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit = 400)
-index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
-index.storage_context.persist(persist_dir="./storage")
-
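The script above only builds and persists the index; the `StorageContext` and `load_index_from_storage` imports hint at the follow-up step of loading it back for querying. A minimal query sketch under the same assumptions (the legacy llama_index API used above, the index persisted to `./storage`, and reusing the `service_context` defined in the script; the question string is purely illustrative) might look like this:

```python
from llama_index import StorageContext, load_index_from_storage

# Reload the index persisted by the script above and query it through the local endpoint.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)

query_engine = index.as_query_engine()
response = query_engine.query("Summarize the documents in the data directory.")  # illustrative question
print(response)
```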
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/freeze_module.md b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/freeze_module.md
deleted file mode 100644
index 421d95cd96d0a876f17ad57af899b2e06f0addbd..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/freeze_module.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Freeze module
-
-This page guides users through freezing a module in YOLOX.
-Exp controls everything in YOLOX, so let's start by creating an Exp object.
-
-## 1. Create your own experiment object
-
-We take the YOLOX-S model on the COCO dataset as an example to give a clearer guide.
-
-Import the config you want (or write your own Exp object that inherits from `yolox.exp.BaseExp`).
-```python
-from yolox.exp.default.yolox_s import Exp as MyExp
-```
-
-## 2. Override `get_model` method
-
-Here is a simple example that freezes the backbone (FPN not included) of the model.
-```python
-class Exp(MyExp):
-
- def get_model(self):
- from yolox.utils import freeze_module
- model = super().get_model()
- freeze_module(model.backbone.backbone)
- return model
-```
-If you only want to freeze the FPN, `freeze_module(model.backbone)` might help; see the sketch below.
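For illustration, a minimal sketch of that variant, mirroring the doc's own example (class and import names are the same as above, so this is a sketch rather than a verified recipe):

```python
from yolox.exp.default.yolox_s import Exp as MyExp
from yolox.utils import freeze_module


class Exp(MyExp):

    def get_model(self):
        # Build the standard YOLOX-S model, then freeze model.backbone as suggested above,
        # so only the detection head keeps training.
        model = super().get_model()
        freeze_module(model.backbone)
        return model
```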
-
-## 3. Train
-Suppose the path of your Exp is `/path/to/my_exp.py`; use the following command to train your model.
-```bash
-python3 -m yolox.tools.train -f /path/to/my_exp.py
-```
-For more details of training, run the following command.
-```bash
-python3 -m yolox.tools.train --help
-```
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/tests/utils/test_model_utils.py b/spaces/chendl/compositional_test/multimodal/YOLOX/tests/utils/test_model_utils.py
deleted file mode 100644
index abfc3446f06974998c8ab25b5ded52e1327e2363..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/tests/utils/test_model_utils.py
+++ /dev/null
@@ -1,107 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import unittest
-
-import torch
-from torch import nn
-
-from yolox.utils import adjust_status, freeze_module
-from yolox.exp import get_exp
-
-
-class TestModelUtils(unittest.TestCase):
-
- def setUp(self):
- self.model: nn.Module = get_exp(exp_name="yolox-s").get_model()
-
- def test_model_state_adjust_status(self):
- data = torch.ones(1, 10, 10, 10)
- # use bn since bn changes state during train/val
- model = nn.BatchNorm2d(10)
- prev_state = model.state_dict()
-
- modes = [False, True]
- results = [True, False]
-
- # test under train/eval mode
- for mode, result in zip(modes, results):
- with adjust_status(model, training=mode):
- model(data)
- model_state = model.state_dict()
- self.assertTrue(len(model_state) == len(prev_state))
- self.assertEqual(
- result,
- all([torch.allclose(v, model_state[k]) for k, v in prev_state.items()])
- )
-
-        # test recursive context case
- prev_state = model.state_dict()
- with adjust_status(model, training=False):
- with adjust_status(model, training=False):
- model(data)
- model_state = model.state_dict()
- self.assertTrue(len(model_state) == len(prev_state))
- self.assertTrue(
- all([torch.allclose(v, model_state[k]) for k, v in prev_state.items()])
- )
-
- def test_model_effect_adjust_status(self):
- # test context effect
- self.model.train()
- with adjust_status(self.model, training=False):
- for module in self.model.modules():
- self.assertFalse(module.training)
- # all training after exit
- for module in self.model.modules():
- self.assertTrue(module.training)
-
- # only backbone set to eval
- self.model.backbone.eval()
- with adjust_status(self.model, training=False):
- for module in self.model.modules():
- self.assertFalse(module.training)
-
- for name, module in self.model.named_modules():
- if "backbone" in name:
- self.assertFalse(module.training)
- else:
- self.assertTrue(module.training)
-
- def test_freeze_module(self):
- model = nn.Sequential(
- nn.Conv2d(3, 10, 1),
- nn.BatchNorm2d(10),
- nn.ReLU(),
- )
- data = torch.rand(1, 3, 10, 10)
- model.train()
- assert isinstance(model[1], nn.BatchNorm2d)
- before_states = model[1].state_dict()
- freeze_module(model[1])
- model(data)
- after_states = model[1].state_dict()
- self.assertTrue(
- all([torch.allclose(v, after_states[k]) for k, v in before_states.items()])
- )
-
- # yolox test
- self.model.train()
- for module in self.model.modules():
- self.assertTrue(module.training)
-
- freeze_module(self.model, "backbone")
- for module in self.model.backbone.modules():
- self.assertFalse(module.training)
- for p in self.model.backbone.parameters():
- self.assertFalse(p.requires_grad)
-
- for module in self.model.head.modules():
- self.assertTrue(module.training)
- for p in self.model.head.parameters():
- self.assertTrue(p.requires_grad)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh b/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh
deleted file mode 100644
index fbe59566fdcdfd2e61d23288d8da6273003ff9ab..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/docker/transformers-pytorch-tpu/docker-entrypoint.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/bash
-source ~/.bashrc
-echo "running docker-entrypoint.sh"
-conda activate container
-echo $KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS
-echo "printed TPU info"
-export XRT_TPU_CONFIG="tpu_worker;0;${KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS:7}"
-exec "$@"#!/bin/bash
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/use_own_knowledge_dataset.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/use_own_knowledge_dataset.py
deleted file mode 100644
index e0aa86a3a65ba91089c9b363b226e3b5ca343631..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/use_own_knowledge_dataset.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import logging
-import os
-from dataclasses import dataclass, field
-from functools import partial
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from typing import List, Optional
-
-import faiss
-import torch
-from datasets import Features, Sequence, Value, load_dataset
-
-from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast, HfArgumentParser
-
-
-logger = logging.getLogger(__name__)
-torch.set_grad_enabled(False)
-device = "cuda" if torch.cuda.is_available() else "cpu"
-
-
-def split_text(text: str, n=100, character=" ") -> List[str]:
- """Split the text every ``n``-th occurrence of ``character``"""
- text = text.split(character)
- return [character.join(text[i : i + n]).strip() for i in range(0, len(text), n)]
-
-
-def split_documents(documents: dict) -> dict:
- """Split documents into passages"""
- titles, texts = [], []
- for title, text in zip(documents["title"], documents["text"]):
- if text is not None:
- for passage in split_text(text):
- titles.append(title if title is not None else "")
- texts.append(passage)
- return {"title": titles, "text": texts}
-
-
-def embed(documents: dict, ctx_encoder: DPRContextEncoder, ctx_tokenizer: DPRContextEncoderTokenizerFast) -> dict:
- """Compute the DPR embeddings of document passages"""
- input_ids = ctx_tokenizer(
- documents["title"], documents["text"], truncation=True, padding="longest", return_tensors="pt"
- )["input_ids"]
- embeddings = ctx_encoder(input_ids.to(device=device), return_dict=True).pooler_output
- return {"embeddings": embeddings.detach().cpu().numpy()}
-
-
-def main(
- rag_example_args: "RagExampleArguments",
- processing_args: "ProcessingArguments",
- index_hnsw_args: "IndexHnswArguments",
-):
- ######################################
- logger.info("Step 1 - Create the dataset")
- ######################################
-
- # The dataset needed for RAG must have three columns:
- # - title (string): title of the document
- # - text (string): text of a passage of the document
- # - embeddings (array of dimension d): DPR representation of the passage
- # Let's say you have documents in tab-separated csv files with columns "title" and "text"
- assert os.path.isfile(rag_example_args.csv_path), "Please provide a valid path to a csv file"
-
- # You can load a Dataset object this way
- dataset = load_dataset(
- "csv", data_files=[rag_example_args.csv_path], split="train", delimiter="\t", column_names=["title", "text"]
- )
-
- # More info about loading csv files in the documentation: https://huggingface.co/docs/datasets/loading_datasets.html?highlight=csv#csv-files
-
- # Then split the documents into passages of 100 words
- dataset = dataset.map(split_documents, batched=True, num_proc=processing_args.num_proc)
-
- # And compute the embeddings
- ctx_encoder = DPRContextEncoder.from_pretrained(rag_example_args.dpr_ctx_encoder_model_name).to(device=device)
- ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(rag_example_args.dpr_ctx_encoder_model_name)
- new_features = Features(
- {"text": Value("string"), "title": Value("string"), "embeddings": Sequence(Value("float32"))}
- ) # optional, save as float32 instead of float64 to save space
- dataset = dataset.map(
- partial(embed, ctx_encoder=ctx_encoder, ctx_tokenizer=ctx_tokenizer),
- batched=True,
- batch_size=processing_args.batch_size,
- features=new_features,
- )
-
- # And finally save your dataset
- passages_path = os.path.join(rag_example_args.output_dir, "my_knowledge_dataset")
- dataset.save_to_disk(passages_path)
- # from datasets import load_from_disk
- # dataset = load_from_disk(passages_path) # to reload the dataset
-
- ######################################
- logger.info("Step 2 - Index the dataset")
- ######################################
-
- # Let's use the Faiss implementation of HNSW for fast approximate nearest neighbor search
- index = faiss.IndexHNSWFlat(index_hnsw_args.d, index_hnsw_args.m, faiss.METRIC_INNER_PRODUCT)
- dataset.add_faiss_index("embeddings", custom_index=index)
-
- # And save the index
- index_path = os.path.join(rag_example_args.output_dir, "my_knowledge_dataset_hnsw_index.faiss")
- dataset.get_index("embeddings").save(index_path)
- # dataset.load_faiss_index("embeddings", index_path) # to reload the index
-
-
-@dataclass
-class RagExampleArguments:
- csv_path: str = field(
- default=str(Path(__file__).parent / "test_run" / "dummy-kb" / "my_knowledge_dataset.csv"),
- metadata={"help": "Path to a tab-separated csv file with columns 'title' and 'text'"},
- )
- question: Optional[str] = field(
- default=None,
- metadata={"help": "Question that is passed as input to RAG. Default is 'What does Moses' rod turn into ?'."},
- )
- rag_model_name: str = field(
- default="facebook/rag-sequence-nq",
- metadata={"help": "The RAG model to use. Either 'facebook/rag-sequence-nq' or 'facebook/rag-token-nq'"},
- )
- dpr_ctx_encoder_model_name: str = field(
- default="facebook/dpr-ctx_encoder-multiset-base",
- metadata={
- "help": (
- "The DPR context encoder model to use. Either 'facebook/dpr-ctx_encoder-single-nq-base' or"
- " 'facebook/dpr-ctx_encoder-multiset-base'"
- )
- },
- )
- output_dir: Optional[str] = field(
- default=str(Path(__file__).parent / "test_run" / "dummy-kb"),
- metadata={"help": "Path to a directory where the dataset passages and the index will be saved"},
- )
-
-
-@dataclass
-class ProcessingArguments:
- num_proc: Optional[int] = field(
- default=None,
- metadata={
- "help": "The number of processes to use to split the documents into passages. Default is single process."
- },
- )
- batch_size: int = field(
- default=16,
- metadata={
- "help": "The batch size to use when computing the passages embeddings using the DPR context encoder."
- },
- )
-
-
-@dataclass
-class IndexHnswArguments:
- d: int = field(
- default=768,
- metadata={"help": "The dimension of the embeddings to pass to the HNSW Faiss index."},
- )
- m: int = field(
- default=128,
- metadata={
- "help": (
- "The number of bi-directional links created for every new element during the HNSW index construction."
- )
- },
- )
-
-
-if __name__ == "__main__":
- logging.basicConfig(level=logging.WARNING)
- logger.setLevel(logging.INFO)
-
- parser = HfArgumentParser((RagExampleArguments, ProcessingArguments, IndexHnswArguments))
- rag_example_args, processing_args, index_hnsw_args = parser.parse_args_into_dataclasses()
- with TemporaryDirectory() as tmp_dir:
- rag_example_args.output_dir = rag_example_args.output_dir or tmp_dir
- main(rag_example_args, processing_args, index_hnsw_args)
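The script stops after indexing; a natural follow-up is to point a RAG model at the saved passages and Faiss index. A hedged sketch of that usage (assuming the files saved by `main()` above and the `facebook/rag-sequence-nq` checkpoint; the question string is only illustrative and matches the default mentioned in `RagExampleArguments`):

```python
import os

from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

output_dir = "test_run/dummy-kb"  # wherever main() above saved the passages and index

# Point the retriever at the custom passages and Faiss index built by main().
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq",
    index_name="custom",
    passages_path=os.path.join(output_dir, "my_knowledge_dataset"),
    index_path=os.path.join(output_dir, "my_knowledge_dataset_hnsw_index.faiss"),
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

question = "What does Moses' rod turn into ?"
input_ids = tokenizer.question_encoder(question, return_tensors="pt")["input_ids"]
generated = model.generate(input_ids)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```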
diff --git a/spaces/chilge/Fushimi/inference_main.py b/spaces/chilge/Fushimi/inference_main.py
deleted file mode 100644
index 825e791db86d37e955f42e8cb34323dbb248ed32..0000000000000000000000000000000000000000
--- a/spaces/chilge/Fushimi/inference_main.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import io
-import logging
-import time
-from pathlib import Path
-
-import librosa
-import numpy as np
-import soundfile
-
-from inference import infer_tool
-from inference import slicer
-from inference.infer_tool import Svc
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")
-
-model_path = "logs/48k/G_174000-Copy1.pth"
-config_path = "configs/config.json"
-svc_model = Svc(model_path, config_path)
-infer_tool.mkdir(["raw", "results"])
-
-# Multiple wav files are supported; place them in the "raw" folder
-clean_names = ["君の知らない物語-src"]
-trans = [-5]  # pitch shift in semitones; positive and negative values are supported
-spk_list = ['yunhao']  # speaker voices to synthesize simultaneously in each run
-slice_db = -40  # default -40; use -30 for noisy audio, -50 to keep breaths in clean vocals
-wav_format = 'flac'  # audio output format
-
-infer_tool.fill_a_to_b(trans, clean_names)
-for clean_name, tran in zip(clean_names, trans):
- raw_audio_path = f"raw/{clean_name}"
- if "." not in raw_audio_path:
- raw_audio_path += ".wav"
- infer_tool.format_wav(raw_audio_path)
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- audio, sr = librosa.load(wav_path, mono=True, sr=None)
- wav_hash = infer_tool.get_md5(audio)
- if wav_hash in chunks_dict.keys():
- print("load chunks from temp")
- chunks = chunks_dict[wav_hash]["chunks"]
- else:
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- print(chunks)
- chunks_dict[wav_hash] = {"chunks": chunks, "time": int(time.time())}
- infer_tool.write_temp("inference/chunks_temp.json", chunks_dict)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-
- for spk in spk_list:
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path)
- _audio = out_audio.cpu().numpy()
- audio.extend(list(_audio))
-
- res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}'
- soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageOps.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageOps.py
deleted file mode 100644
index 17702778c134abcb51d7632367fbbf1a2f3048fa..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/ImageOps.py
+++ /dev/null
@@ -1,628 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# standard image operations
-#
-# History:
-# 2001-10-20 fl Created
-# 2001-10-23 fl Added autocontrast operator
-# 2001-12-18 fl Added Kevin's fit operator
-# 2004-03-14 fl Fixed potential division by zero in equalize
-# 2005-05-05 fl Fixed equalize for low number of values
-#
-# Copyright (c) 2001-2004 by Secret Labs AB
-# Copyright (c) 2001-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import functools
-import operator
-import re
-
-from . import ExifTags, Image, ImagePalette
-
-#
-# helpers
-
-
-def _border(border):
- if isinstance(border, tuple):
- if len(border) == 2:
- left, top = right, bottom = border
- elif len(border) == 4:
- left, top, right, bottom = border
- else:
- left = top = right = bottom = border
- return left, top, right, bottom
-
-
-def _color(color, mode):
- if isinstance(color, str):
- from . import ImageColor
-
- color = ImageColor.getcolor(color, mode)
- return color
-
-
-def _lut(image, lut):
- if image.mode == "P":
- # FIXME: apply to lookup table, not image data
- msg = "mode P support coming soon"
- raise NotImplementedError(msg)
- elif image.mode in ("L", "RGB"):
- if image.mode == "RGB" and len(lut) == 256:
- lut = lut + lut + lut
- return image.point(lut)
- else:
- msg = "not supported for this image mode"
- raise OSError(msg)
-
-
-#
-# actions
-
-
-def autocontrast(image, cutoff=0, ignore=None, mask=None, preserve_tone=False):
- """
- Maximize (normalize) image contrast. This function calculates a
- histogram of the input image (or mask region), removes ``cutoff`` percent of the
- lightest and darkest pixels from the histogram, and remaps the image
- so that the darkest pixel becomes black (0), and the lightest
- becomes white (255).
-
- :param image: The image to process.
- :param cutoff: The percent to cut off from the histogram on the low and
- high ends. Either a tuple of (low, high), or a single
- number for both.
- :param ignore: The background pixel value (use None for no background).
- :param mask: Histogram used in contrast operation is computed using pixels
- within the mask. If no mask is given the entire image is used
- for histogram computation.
- :param preserve_tone: Preserve image tone in Photoshop-like style autocontrast.
-
- .. versionadded:: 8.2.0
-
- :return: An image.
- """
- if preserve_tone:
- histogram = image.convert("L").histogram(mask)
- else:
- histogram = image.histogram(mask)
-
- lut = []
- for layer in range(0, len(histogram), 256):
- h = histogram[layer : layer + 256]
- if ignore is not None:
- # get rid of outliers
- try:
- h[ignore] = 0
- except TypeError:
- # assume sequence
- for ix in ignore:
- h[ix] = 0
- if cutoff:
- # cut off pixels from both ends of the histogram
- if not isinstance(cutoff, tuple):
- cutoff = (cutoff, cutoff)
- # get number of pixels
- n = 0
- for ix in range(256):
- n = n + h[ix]
- # remove cutoff% pixels from the low end
- cut = n * cutoff[0] // 100
- for lo in range(256):
- if cut > h[lo]:
- cut = cut - h[lo]
- h[lo] = 0
- else:
- h[lo] -= cut
- cut = 0
- if cut <= 0:
- break
- # remove cutoff% samples from the high end
- cut = n * cutoff[1] // 100
- for hi in range(255, -1, -1):
- if cut > h[hi]:
- cut = cut - h[hi]
- h[hi] = 0
- else:
- h[hi] -= cut
- cut = 0
- if cut <= 0:
- break
- # find lowest/highest samples after preprocessing
- for lo in range(256):
- if h[lo]:
- break
- for hi in range(255, -1, -1):
- if h[hi]:
- break
- if hi <= lo:
- # don't bother
- lut.extend(list(range(256)))
- else:
- scale = 255.0 / (hi - lo)
- offset = -lo * scale
- for ix in range(256):
- ix = int(ix * scale + offset)
- if ix < 0:
- ix = 0
- elif ix > 255:
- ix = 255
- lut.append(ix)
- return _lut(image, lut)
-
-
-def colorize(image, black, white, mid=None, blackpoint=0, whitepoint=255, midpoint=127):
- """
- Colorize grayscale image.
- This function calculates a color wedge which maps all black pixels in
- the source image to the first color and all white pixels to the
- second color. If ``mid`` is specified, it uses three-color mapping.
- The ``black`` and ``white`` arguments should be RGB tuples or color names;
- optionally you can use three-color mapping by also specifying ``mid``.
- Mapping positions for any of the colors can be specified
- (e.g. ``blackpoint``), where these parameters are the integer
- value corresponding to where the corresponding color should be mapped.
- These parameters must have logical order, such that
- ``blackpoint <= midpoint <= whitepoint`` (if ``mid`` is specified).
-
- :param image: The image to colorize.
- :param black: The color to use for black input pixels.
- :param white: The color to use for white input pixels.
- :param mid: The color to use for midtone input pixels.
- :param blackpoint: an int value [0, 255] for the black mapping.
- :param whitepoint: an int value [0, 255] for the white mapping.
- :param midpoint: an int value [0, 255] for the midtone mapping.
- :return: An image.
- """
-
- # Initial asserts
- assert image.mode == "L"
- if mid is None:
- assert 0 <= blackpoint <= whitepoint <= 255
- else:
- assert 0 <= blackpoint <= midpoint <= whitepoint <= 255
-
- # Define colors from arguments
- black = _color(black, "RGB")
- white = _color(white, "RGB")
- if mid is not None:
- mid = _color(mid, "RGB")
-
- # Empty lists for the mapping
- red = []
- green = []
- blue = []
-
- # Create the low-end values
- for i in range(0, blackpoint):
- red.append(black[0])
- green.append(black[1])
- blue.append(black[2])
-
- # Create the mapping (2-color)
- if mid is None:
- range_map = range(0, whitepoint - blackpoint)
-
- for i in range_map:
- red.append(black[0] + i * (white[0] - black[0]) // len(range_map))
- green.append(black[1] + i * (white[1] - black[1]) // len(range_map))
- blue.append(black[2] + i * (white[2] - black[2]) // len(range_map))
-
- # Create the mapping (3-color)
- else:
- range_map1 = range(0, midpoint - blackpoint)
- range_map2 = range(0, whitepoint - midpoint)
-
- for i in range_map1:
- red.append(black[0] + i * (mid[0] - black[0]) // len(range_map1))
- green.append(black[1] + i * (mid[1] - black[1]) // len(range_map1))
- blue.append(black[2] + i * (mid[2] - black[2]) // len(range_map1))
- for i in range_map2:
- red.append(mid[0] + i * (white[0] - mid[0]) // len(range_map2))
- green.append(mid[1] + i * (white[1] - mid[1]) // len(range_map2))
- blue.append(mid[2] + i * (white[2] - mid[2]) // len(range_map2))
-
- # Create the high-end values
- for i in range(0, 256 - whitepoint):
- red.append(white[0])
- green.append(white[1])
- blue.append(white[2])
-
- # Return converted image
- image = image.convert("RGB")
- return _lut(image, red + green + blue)
-
-
-def contain(image, size, method=Image.Resampling.BICUBIC):
- """
- Returns a resized version of the image, set to the maximum width and height
- within the requested size, while maintaining the original aspect ratio.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :return: An image.
- """
-
- im_ratio = image.width / image.height
- dest_ratio = size[0] / size[1]
-
- if im_ratio != dest_ratio:
- if im_ratio > dest_ratio:
- new_height = round(image.height / image.width * size[0])
- if new_height != size[1]:
- size = (size[0], new_height)
- else:
- new_width = round(image.width / image.height * size[1])
- if new_width != size[0]:
- size = (new_width, size[1])
- return image.resize(size, resample=method)
-
-
-def pad(image, size, method=Image.Resampling.BICUBIC, color=None, centering=(0.5, 0.5)):
- """
- Returns a resized and padded version of the image, expanded to fill the
- requested aspect ratio and size.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :param color: The background color of the padded image.
- :param centering: Control the position of the original image within the
- padded version.
-
- (0.5, 0.5) will keep the image centered
- (0, 0) will keep the image aligned to the top left
- (1, 1) will keep the image aligned to the bottom
- right
- :return: An image.
- """
-
- resized = contain(image, size, method)
- if resized.size == size:
- out = resized
- else:
- out = Image.new(image.mode, size, color)
- if resized.palette:
- out.putpalette(resized.getpalette())
- if resized.width != size[0]:
- x = round((size[0] - resized.width) * max(0, min(centering[0], 1)))
- out.paste(resized, (x, 0))
- else:
- y = round((size[1] - resized.height) * max(0, min(centering[1], 1)))
- out.paste(resized, (0, y))
- return out
-
-
-def crop(image, border=0):
- """
-    Remove a border from the image. The same number of pixels is removed
- from all four sides. This function works on all image modes.
-
- .. seealso:: :py:meth:`~PIL.Image.Image.crop`
-
- :param image: The image to crop.
- :param border: The number of pixels to remove.
- :return: An image.
- """
- left, top, right, bottom = _border(border)
- return image.crop((left, top, image.size[0] - right, image.size[1] - bottom))
-
-
-def scale(image, factor, resample=Image.Resampling.BICUBIC):
- """
-    Returns a rescaled image by a specific factor given as a parameter.
-    A factor greater than 1 expands the image; a factor between 0 and 1
-    contracts the image.
-
- :param image: The image to rescale.
- :param factor: The expansion factor, as a float.
- :param resample: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :returns: An :py:class:`~PIL.Image.Image` object.
- """
- if factor == 1:
- return image.copy()
- elif factor <= 0:
- msg = "the factor must be greater than 0"
- raise ValueError(msg)
- else:
- size = (round(factor * image.width), round(factor * image.height))
- return image.resize(size, resample)
-
-
-def deform(image, deformer, resample=Image.Resampling.BILINEAR):
- """
- Deform the image.
-
- :param image: The image to deform.
- :param deformer: A deformer object. Any object that implements a
- ``getmesh`` method can be used.
- :param resample: An optional resampling filter. Same values possible as
- in the PIL.Image.transform function.
- :return: An image.
- """
- return image.transform(
- image.size, Image.Transform.MESH, deformer.getmesh(image), resample
- )
-
-
-def equalize(image, mask=None):
- """
- Equalize the image histogram. This function applies a non-linear
- mapping to the input image, in order to create a uniform
- distribution of grayscale values in the output image.
-
- :param image: The image to equalize.
- :param mask: An optional mask. If given, only the pixels selected by
- the mask are included in the analysis.
- :return: An image.
- """
- if image.mode == "P":
- image = image.convert("RGB")
- h = image.histogram(mask)
- lut = []
- for b in range(0, len(h), 256):
- histo = [_f for _f in h[b : b + 256] if _f]
- if len(histo) <= 1:
- lut.extend(list(range(256)))
- else:
- step = (functools.reduce(operator.add, histo) - histo[-1]) // 255
- if not step:
- lut.extend(list(range(256)))
- else:
- n = step // 2
- for i in range(256):
- lut.append(n // step)
- n = n + h[i + b]
- return _lut(image, lut)
-
-
-def expand(image, border=0, fill=0):
- """
-    Add a border to the image.
-
- :param image: The image to expand.
- :param border: Border width, in pixels.
- :param fill: Pixel fill value (a color value). Default is 0 (black).
- :return: An image.
- """
- left, top, right, bottom = _border(border)
- width = left + image.size[0] + right
- height = top + image.size[1] + bottom
- color = _color(fill, image.mode)
- if image.palette:
- palette = ImagePalette.ImagePalette(palette=image.getpalette())
- if isinstance(color, tuple):
- color = palette.getcolor(color)
- else:
- palette = None
- out = Image.new(image.mode, (width, height), color)
- if palette:
- out.putpalette(palette.palette)
- out.paste(image, (left, top))
- return out
-
-
-def fit(image, size, method=Image.Resampling.BICUBIC, bleed=0.0, centering=(0.5, 0.5)):
- """
- Returns a resized and cropped version of the image, cropped to the
- requested aspect ratio and size.
-
- This function was contributed by Kevin Cazabon.
-
- :param image: The image to resize and crop.
- :param size: The requested output size in pixels, given as a
- (width, height) tuple.
- :param method: Resampling method to use. Default is
- :py:attr:`~PIL.Image.Resampling.BICUBIC`.
- See :ref:`concept-filters`.
- :param bleed: Remove a border around the outside of the image from all
- four edges. The value is a decimal percentage (use 0.01 for
- one percent). The default value is 0 (no border).
- Cannot be greater than or equal to 0.5.
- :param centering: Control the cropping position. Use (0.5, 0.5) for
- center cropping (e.g. if cropping the width, take 50% off
- of the left side, and therefore 50% off the right side).
- (0.0, 0.0) will crop from the top left corner (i.e. if
- cropping the width, take all of the crop off of the right
- side, and if cropping the height, take all of it off the
- bottom). (1.0, 0.0) will crop from the bottom left
- corner, etc. (i.e. if cropping the width, take all of the
- crop off the left side, and if cropping the height take
- none from the top, and therefore all off the bottom).
- :return: An image.
- """
-
- # by Kevin Cazabon, Feb 17/2000
- # kevin@cazabon.com
- # https://www.cazabon.com
-
- # ensure centering is mutable
- centering = list(centering)
-
- if not 0.0 <= centering[0] <= 1.0:
- centering[0] = 0.5
- if not 0.0 <= centering[1] <= 1.0:
- centering[1] = 0.5
-
- if not 0.0 <= bleed < 0.5:
- bleed = 0.0
-
- # calculate the area to use for resizing and cropping, subtracting
- # the 'bleed' around the edges
-
- # number of pixels to trim off on Top and Bottom, Left and Right
- bleed_pixels = (bleed * image.size[0], bleed * image.size[1])
-
- live_size = (
- image.size[0] - bleed_pixels[0] * 2,
- image.size[1] - bleed_pixels[1] * 2,
- )
-
- # calculate the aspect ratio of the live_size
- live_size_ratio = live_size[0] / live_size[1]
-
- # calculate the aspect ratio of the output image
- output_ratio = size[0] / size[1]
-
- # figure out if the sides or top/bottom will be cropped off
- if live_size_ratio == output_ratio:
- # live_size is already the needed ratio
- crop_width = live_size[0]
- crop_height = live_size[1]
- elif live_size_ratio >= output_ratio:
- # live_size is wider than what's needed, crop the sides
- crop_width = output_ratio * live_size[1]
- crop_height = live_size[1]
- else:
- # live_size is taller than what's needed, crop the top and bottom
- crop_width = live_size[0]
- crop_height = live_size[0] / output_ratio
-
- # make the crop
- crop_left = bleed_pixels[0] + (live_size[0] - crop_width) * centering[0]
- crop_top = bleed_pixels[1] + (live_size[1] - crop_height) * centering[1]
-
- crop = (crop_left, crop_top, crop_left + crop_width, crop_top + crop_height)
-
- # resize the image and return it
- return image.resize(size, method, box=crop)
-
-
-def flip(image):
- """
- Flip the image vertically (top to bottom).
-
- :param image: The image to flip.
- :return: An image.
- """
- return image.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
-
-
-def grayscale(image):
- """
- Convert the image to grayscale.
-
- :param image: The image to convert.
- :return: An image.
- """
- return image.convert("L")
-
-
-def invert(image):
- """
- Invert (negate) the image.
-
- :param image: The image to invert.
- :return: An image.
- """
- lut = []
- for i in range(256):
- lut.append(255 - i)
- return image.point(lut) if image.mode == "1" else _lut(image, lut)
-
-
-def mirror(image):
- """
- Flip image horizontally (left to right).
-
- :param image: The image to mirror.
- :return: An image.
- """
- return image.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
-
-
-def posterize(image, bits):
- """
- Reduce the number of bits for each color channel.
-
- :param image: The image to posterize.
- :param bits: The number of bits to keep for each channel (1-8).
- :return: An image.
- """
- lut = []
- mask = ~(2 ** (8 - bits) - 1)
- for i in range(256):
- lut.append(i & mask)
- return _lut(image, lut)
-
-
-def solarize(image, threshold=128):
- """
- Invert all pixel values above a threshold.
-
- :param image: The image to solarize.
- :param threshold: All pixels above this greyscale level are inverted.
- :return: An image.
- """
- lut = []
- for i in range(256):
- if i < threshold:
- lut.append(i)
- else:
- lut.append(255 - i)
- return _lut(image, lut)
-
-
-def exif_transpose(image, *, in_place=False):
- """
- If an image has an EXIF Orientation tag, other than 1, transpose the image
- accordingly, and remove the orientation data.
-
- :param image: The image to transpose.
- :param in_place: Boolean. Keyword-only argument.
- If ``True``, the original image is modified in-place, and ``None`` is returned.
- If ``False`` (default), a new :py:class:`~PIL.Image.Image` object is returned
- with the transposition applied. If there is no transposition, a copy of the
- image will be returned.
- """
- image_exif = image.getexif()
- orientation = image_exif.get(ExifTags.Base.Orientation)
- method = {
- 2: Image.Transpose.FLIP_LEFT_RIGHT,
- 3: Image.Transpose.ROTATE_180,
- 4: Image.Transpose.FLIP_TOP_BOTTOM,
- 5: Image.Transpose.TRANSPOSE,
- 6: Image.Transpose.ROTATE_270,
- 7: Image.Transpose.TRANSVERSE,
- 8: Image.Transpose.ROTATE_90,
- }.get(orientation)
- if method is not None:
- transposed_image = image.transpose(method)
- if in_place:
- image.im = transposed_image.im
- image.pyaccess = None
- image._size = transposed_image._size
- exif_image = image if in_place else transposed_image
-
- exif = exif_image.getexif()
- if ExifTags.Base.Orientation in exif:
- del exif[ExifTags.Base.Orientation]
- if "exif" in exif_image.info:
- exif_image.info["exif"] = exif.tobytes()
- elif "Raw profile type exif" in exif_image.info:
- exif_image.info["Raw profile type exif"] = exif.tobytes().hex()
- elif "XML:com.adobe.xmp" in exif_image.info:
- for pattern in (
- r'tiff:Orientation="([0-9])"',
- r"([0-9])",
- ):
- exif_image.info["XML:com.adobe.xmp"] = re.sub(
- pattern, "", exif_image.info["XML:com.adobe.xmp"]
- )
- if not in_place:
- return transposed_image
- elif not in_place:
- return image.copy()
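The hunk above removes what appears to be a vendored copy of Pillow's ImageOps helpers. As a quick orientation, here is a minimal usage sketch of the public functions defined in it; it assumes a reasonably recent Pillow installation, and the input file name "photo.jpg" is purely illustrative.

# Minimal usage sketch for the ImageOps helpers deleted above (Pillow assumed
# to be installed; "photo.jpg" is an illustrative placeholder).
from PIL import Image, ImageOps

with Image.open("photo.jpg") as im:
    im = ImageOps.exif_transpose(im)                     # honour the EXIF Orientation tag
    thumb = ImageOps.fit(im, (256, 256))                 # resize + centre-crop to 256x256
    boxed = ImageOps.pad(im, (256, 256), color="black")  # letterbox instead of cropping
    negative = ImageOps.invert(im.convert("RGB"))        # per-channel negation
    thumb.save("thumb.jpg")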
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/dim/magic_trace.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/dim/magic_trace.py
deleted file mode 100644
index 8d4e5ec31ef897bacffae4371b18b441293a21f0..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/dim/magic_trace.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the BSD-style license found in the
-# LICENSE file in the root directory of this source tree.
-from contextlib import contextmanager
-import os
-import subprocess
-import signal
-
-@contextmanager
-def magic_trace(output='trace.fxt', magic_trace_cache='/tmp/magic-trace'):
- pid = os.getpid()
- if not os.path.exists(magic_trace_cache):
- print(f"Downloading magic_trace to: {magic_trace_cache}")
- subprocess.run(['wget', '-O', magic_trace_cache, '-q',
- 'https://github.com/janestreet/magic-trace/releases/download/v1.0.2/magic-trace'])
- subprocess.run(['chmod', '+x', magic_trace_cache])
- args = [magic_trace_cache, 'attach', '-pid', str(pid), '-o', output]
- p = subprocess.Popen(args, stderr=subprocess.PIPE, encoding='utf-8')
- while True:
- x = p.stderr.readline()
- print(x)
- if 'Attached' in x:
- break
- try:
- yield
- finally:
- p.send_signal(signal.SIGINT)
- r = p.wait()
- print(p.stderr.read())
- p.stderr.close()
- if r != 0:
- raise ValueError(f'magic_trace exited abnormally: {r}')
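The magic_trace helper deleted above is a context manager that fetches the magic-trace binary on first use and attaches it to the current process for the duration of a with-block. A hedged usage sketch follows: the import path is inferred from the deleted file's location, the workload function and output filename are invented for illustration, and magic-trace itself only works on Linux machines with Intel PT support.

# Hypothetical usage of the magic_trace context manager removed above.
from functorch.dim.magic_trace import magic_trace  # module path inferred from the deleted file

def busy_loop(n: int = 5_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

# Profile the workload; the resulting .fxt trace can be inspected with Perfetto.
with magic_trace(output="busy_loop.fxt"):
    busy_loop()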
diff --git a/spaces/cihyFjudo/fairness-paper-search/Disk Drill Pro 2 Activation Code Mac Recover Any File Type from Any Storage Device.md b/spaces/cihyFjudo/fairness-paper-search/Disk Drill Pro 2 Activation Code Mac Recover Any File Type from Any Storage Device.md
deleted file mode 100644
index 05f7f86cf09fc343e77a229c1a81ad8304efaaa5..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Disk Drill Pro 2 Activation Code Mac Recover Any File Type from Any Storage Device.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Once your order for Disk Drill PRO or Enterprise is processed, we immediately send you the activation code to enter into the free Basic edition of Disk Drill. If it's entered correctly, your copy of Disk Drill will be upgraded right away.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Stalker For Mac How to Play the Legendary Survival Horror Game on Your Apple Device.md b/spaces/cihyFjudo/fairness-paper-search/Stalker For Mac How to Play the Legendary Survival Horror Game on Your Apple Device.md
deleted file mode 100644
index 00e415c7e20d76e5ae145ddd0020c6519623290e..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Stalker For Mac How to Play the Legendary Survival Horror Game on Your Apple Device.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
When holding the PDA in the lowered position, stalkers can track their position on the map while walking around. This will make it easier to navigate towards map markers like hidden stashes or mission objectives.
-
And to interact with the PDA it can be held closer to your eyes so it will fill almost the whole screen and the text on it is more readable. If you prefer the old style 2D PDA we even added an option to switch back to it, but real stalkers would not do that.
Special characters and common stalkers offer more unique tasks, along with expanded dialogue. Stalkers can sell and buy items from traders, or even visit local mechanics to upgrade their favorite weapons. In addition: Loners, Mercs and a few others can offer escort services for a fee. Many special characters also have connections around the Zone: you can offer them a payment to bribe an enemy faction, temporarily preventing them from attacking on sight.
-
The disguise system has improved. Your equipment and behavior affects the way stalkers observe and remember you. And a new experimental feature will make stealth kills more viable to kill enemies sneakily. Your rank and reputation counts. Exceptionally high or low values respectively will attract attention, possibly even from your enemies...
-
i'd just like to thank you for the abundance of various items in this mod. my entire life i mostly just played the vanilla versions of stalker and only tried out around 5-10 mods briefly. most of them were mediocre and poorly-built. but anomaly is obviously high-quality. since everyone already knows how good anomaly generally is, in my comment i'd like to thank you for something else which i don't think most people mention. for instance when playing the vanilla version of shadow of chernobyl, it was so nice to have two versions of the ecologist suit - a regular one and an improved one. and i kept thinking to myself: WHY NOT ADD MORE VARIATIONS? and this mod is so awesome - there are like 8 different versions of the ecologist suit! each with its own unique properties! this makes me so happy. oh and also the abundance of armor types and weapons in general is so cool. and it's awesome to have ecologists also wear actual body armor protecting from bullets. there's so much variety in this game, factions are so different, character models vary greatly and npcs utilize tons and tons of armor and weapon types, this is amazingly refreshing and im so happy to see variety and versatility in this game. thank you for adding dozens of new armor suits, thank you for adding dozens of new guns and thank you for making npcs actually have different models and utilize those items! this is just awesome, i love the huge amount of various items you have in this mod, this is exactly what vanilla stalker lacks and it's like somebody heard my thoughts and implemented this. thank you! have a good day
-
See Also: 'The Thing About Heroes' Episode GuideSynopsis:With Mac in Chicago hunting the 333 caller, the rest of the team heads to the subway to investigate the death of train operator Kevin Carmichael, whose body was found slumped in the broken window of his train. The CSIs and Flack are in for a surprise when the doors slam shut and the train springs into action, taking them all for a ride save for Danny, who stepped briefly onto the platform to get his kit. Danny runs to a control box only to find an MP3 player strapped to it, controlling the train. Danny hurls a rock at it and disables it just before the train crashes into a sitting car. When the CSIs get off the train, Flack notices they're at 33rd Street--on the 3 train. Stella realizes it's the work of Mac's stalker. In Chicago, Mac searches the Tribune Building and finds a hanging decomposed body in one of the rooms, along with a hangman puzzle, with the letters of the alphabet above it--save for the letters that spell out "coward." He fills in Chicago PD Detective Brennan on the 333 caller and the clues that led him to Chicago, but she reminds him he has no jurisdiction in the Windy City. While the CSIs work the Carmichael case, Chief Sinclair sends Flack to Chicago to keep an eye on Mac. In the lab, Adam shows Danny how the MP3 player worked, and points out a site on the internet where the saboteur picked up the technical know-how to program the MP3 player to do hijack the train.In the morgue in Chicago, Brennan tells Mac the dead man was in his mid-twenties and died thirty years ago of a gunshot wound to the stomach. His body was buried and dug up. Mac calls an old friend, Jimmy and meets up with him. He recalls Bobby O'Toole, the man who beat Jimmy's brother Will to death. Mac accuses Jimmy of being the 333 stalker, but Jimmy denies it and tells Mac to stay away from him. After Jimmy leaves, Mac picks up his discarded cigarette butt and Flack arrives. Mac fills him on what happened thirty years ago: sixteen-year-old Will was making deliveries for a mobster, allowing his fourteen-year-old brother Jimmy and his friend--Mac--to tag along. But a delivery to Bobby O'Toole--who lived in apartment 333--went terribly wrong, and Jimmy and Mac witnessed Bobby beating Will to death. Jimmy pulled a gun out of Bobby's drawer but it got knocked out of his hand; Mac picked it up but was unable to shoot Bobby. Mac gets the DNA report on the cigarette from Stella: the blood is a filial match to the blood on the puzzle pieces and the DNA on the MP3 player. Mac recalls Will and Jimmy had a younger brother: Andy. Back at the lab, Stella studies the puzzles, disturbed. Suddenly, she recalls a puzzle piece she found at Drew Bedford's apartment after she brought the first puzzle to him matched not the first puzzle but the second. She puts it together: Drew is the 333 stalker. She confirms it when partial prints off the gifts he's given her match the prints on the MP3 player.Mac and Flack return to New York and, along with Stella and Danny, prepare to storm Drew's apartment. Sinclair joins them. While Mac scours the wine racks, Drew knocks him unconscious. Mac awakens in a chair surrounded by lasers. Drew tells him if he trips the lasers, a gun will fire at his head, and promises the same will happen to whoever walks through the door to save him. Drew calls him a coward, saying that he could have saved his brother if he'd fired the gun. He shows Mac a newspaper article on his heroics after the drug bust, questioning Mac's status as a "hero." 
The team works frantically to find Mac, consulting a playlist on the MP3 player, which leads them to a forgotten subway tunnel. Flack pulls an ace out of his sleeve: Jimmy. Jimmy calls Drew and bursts into the room. The shotgun fires into his chest and Drew runs for him. Mac sets off the laser as Drew crosses in front of the gun and it hits Drew. Mac takes him down with a shot to the arm. Jimmy is unharmed because he was wearing a bulletproof vest, and Drew's wound isn't fatal. The family has lost enough, Mac tells Flack.Analysis:After ten episodes worth of build up, "The Thing About Heroes" delivers an exciting conclusion to the 333 storyline. Mac journeys to Chicago and finally connects the 333 caller to an incident from his past, Stella puts it together that Drew Bedford is the stalker and Mac finally has his showdown with the guy. We also learn what apparently set Drew off: after Mac took down the Irish mob in "Snow Day", the newspapers picked up on the story and started referring to Mac as a "hero," an assessment Drew clearly doesn't agree with.It's a rather daring move to label the show's hero a coward, even if it's a brand the audience will ultimately dismiss. Mac's inaction is imminently forgivable because he was a child, but Drew, though deranged, doesn't come off as entirely unsympathetic, either in the eyes of the audience or Mac. Indeed, Mac makes the risky move of shooting to wound rather than kill Drew, saying that Drew and his brother have lost enough. Though Mac might disagree, the move is a heroic one. Gary Sinise acquits himself well during the episode, toning down Mac's persecution complex and showing the CSI finally putting all the pieces together and confronting his past head on.Drew Bedford may have been an obvious suspect early on--he did show up in Stella's life right around the time the 333 caller starting bothering Mac--but he doesn't disappoint in the final showdown, in large part due to Kerr Smith's impassioned performance. Smith, who up until this point has had little to do but unsuccessfully woo Stella, turns up the intensity and makes us wish we'd seen more of him. He makes Drew's anger both real and sympathetic, and even though he's misguided, it's hard not to feel sorry for the man whose life was shaped by watching his older brother get beaten to death and who has become obsessed with getting revenge on the family friend who, as a child, was unable to fire a gun at the man beating his brother.Thankfully, Stella is the one to figure out Drew is the 333 caller when she recalls a puzzle piece she found at his apartment matched a puzzle the CSIs recovered after her visit, not before. It's gratifying that Stella never let her guard down with Drew, given that he came on way too strong and given her previous bad experience with perfect-boyfriend-turned-psycho Frankie back in season two. I didn't think much of Stella going to confront Drew alone twice in the last episode, "One Wedding and a Funeral", so it's nice to see her exhibiting sharper thinking here.It's too bad that the CSIs pretty much walk into Drew's trap and hand him Mac on a platter. I know Drew needed to get a hold of Mac to advance the plot along, but really did it need to be so easy for him? Stella, Flack and Danny are all with him, but none of them see fit to stick by Mac's side. Flack, Mac's confidant throughout the entire 333 saga, follows Danny into some back room rather than sticking with Mac. 
Flack's protectiveness of Danny is well documented, but this may have been the one case where Flack should have worried more about Mac than Danny. Stella accompanies Mac into the wine cellar, but is (somewhat understandably) sidetracked when she comes across pictures of herself that Drew must have taken. Mac unwisely wanders into the rows of wine racks by himself and unsurprisingly falls into Drew's trap.I did like Drew's laser set up; it reminded me of the one Mac used in "Snow Day" when he captured one of the mobsters and tied him up in a chair behind lasers rigged to set off an explosive. I wonder if Drew perhaps read about Mac's laser set up in that newspaper article and decided to mimic it. It was cool, even if it was a tad elaborate. But, then, hasn't that been Drew's thing all along? His whole ploy with the calls and romancing Stella was long and involved. I'm still not sure we needed all of the build up; I can't help but wonder if the storyline would have unfolded in a more interesting way if Drew had just approached Stella and then Mac had been drawn in with the puzzles. The whole "calling at 3:33am" thing got a little out of hand, and I loved Lindsay's reaction to it when she said, "No woman would make anonymous calls at 3:33 in the morning."It was fun to see Mac revisit his old Chicago stomping grounds, and watching him go through the Tribune Building while the frustrated cop tried to halt his progress earned a laugh. I wish the Chicago detective he worked with hadn't been so downright insufferable; Brennan's every other word was essentially to remind Mac that he didn't have jurisdiction in Chicago. She was so irritatingly obnoxious that I was hoping she'd cop the attitude in front of Flack so he could put her in her place. Mac might have a commanding presence, but some situations require a sharp tongue, and Flack's got that in spades.The opening scene with the out-of-control train was quite exciting, allowing the episode to begin on a note as thrilling as the one it ends up closing with. Perennial "damsel in distress" Danny actually gets to save someone else for the second time this season (after he rescued Hawkes in "The Deep") when he steps off the train to grab his kit and ends up being the only one of the team not stuck on the runaway train. Danny stops the train by doing what he does best in stressful situations: throwing something. This time, it's not a liquid solution but a rock, which disables the MP3 player controlling the train. Just as in "The Deep," much is made of Danny's heroics: both Stella and Mac feel the need to make note of the fact that the train didn't crash "thanks to Danny." It's rather endearing to see both of Danny's superiors are still aware of his desire for affirmation, and furthers the idea that Danny is in serious need of self-esteem boosting.And so the 333 saga ends, with Drew Bedford alive and clearly unrepentant. Will Mac come to regret his act of mercy, or have we seen the last of the 333 stalker? Time will tell, but I hope the door is closed on the saga. I'm glad it wasn't stretched out past the midway point in the season; there's only so much ammunition that can be got out of creepy phone calls and elaborate cat-and-mouse games. That being said, the final chapter in the saga certainly did pack a punch.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Stronghold 3 Gold Trainer 1.10 27781 Boost Your Performance and Enjoyment.md b/spaces/cihyFjudo/fairness-paper-search/Stronghold 3 Gold Trainer 1.10 27781 Boost Your Performance and Enjoyment.md
deleted file mode 100644
index 3fb729986909fc2606dbeae0db99f2c9b1942315..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Stronghold 3 Gold Trainer 1.10 27781 Boost Your Performance and Enjoyment.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/The History and Trivia of Sierra Charriba the Italian Version of the American Western Film Major Dundee.md b/spaces/cihyFjudo/fairness-paper-search/The History and Trivia of Sierra Charriba the Italian Version of the American Western Film Major Dundee.md
deleted file mode 100644
index 34a80e3d4246a8bd08a47a8ac473e2606bfdc2d6..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/The History and Trivia of Sierra Charriba the Italian Version of the American Western Film Major Dundee.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
", unsafe_allow_html=True)
-st.markdown('##')
-
-col1, col2 = st.columns(2)
-
-col1.subheader("Edit code")
-code = col1.text_area(label="", value=default_code, height=220,).strip()
-inputs = tokenizer(code, return_tensors='pt')
-token_list = [tokenizer.decode(t) for t in inputs["input_ids"][0]]
-
-with torch.no_grad():
- logits = model(input_ids=inputs["input_ids"]).logits[0]
- probs = softmax(logits, dim=-1)
-
-loss = calculate_loss(logits, inputs["input_ids"][0])
-norm_probs, sorted_token_ids = calculate_scores(probs.numpy(), inputs["input_ids"][0].numpy())
-
-if len(inputs['input_ids'][0]) > 1024:
- st.warning("Your input is longer than the maximum 1024 tokens and will be truncated.")
-st.sidebar.title("Info:")
-st.sidebar.markdown("This demo uses CodeParrot to highlight the parts of code with low probability. Since CodeParrot is an autoregressive model the tokens at the beginning tend to have a lower probability. E.g. the model can't know what you want to import because it has no access to information later in the code. However, as you can see in the example on the right it still can highlight bugs or unconventional naming.\n\nAt the bottom of the page is an example of how a better solution might look like. Try to copy paste it and press **CMD + Enter** to update the highlighting.")
-st.sidebar.title("Settings:")
-if st.sidebar.radio("Highlight mode:", ["Probability heuristics", "Scaled loss per token"]) == "Probability heuristics":
- scores = norm_probs
-else:
- scores = loss
-
-suggestion_threshold = st.sidebar.slider("Suggestion threshold", 0.0, 1.0, 0.2)
-
-col2.subheader("Highlighted code")
-col2.markdown('##')
-html_string = highlight_token_scores(token_list, scores, sep="")
-col2.markdown(html_string, unsafe_allow_html=True)
-col2.markdown('##')
-
-st.subheader("Model suggestions")
-top_k = {}
-for i in range(5):
- top_k[f"top-{i+1}"] = ["No prediction for first token"] + [repr(tokenizer.decode(idx)) for idx in sorted_token_ids[:, i]]
-df = pd.DataFrame({"tokens": [repr(t) for t in token_list], "scores": scores, **top_k})
-df.index.name = "position"
-df_filter = df.loc[df["scores"]<=suggestion_threshold]
-df_filter.reset_index(inplace=True)
-df_filter = df_filter[["tokens", "scores", "position", "top-1", "top-2", "top-3", "top-4", "top-5",]]
-df_filter = df_filter.style.apply(color_dataframe, axis=1)
-st.dataframe(df_filter)
-
-st.markdown('##')
-
-st.subheader("Possible solution")
-st.code(solution_code)
\ No newline at end of file
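The Streamlit app deleted above colours each token by how unexpected the model finds it; the helpers it calls (calculate_loss, calculate_scores, highlight_token_scores, color_dataframe) and the tokenizer/model objects are defined earlier in the same file, outside this hunk. The following is a hedged sketch of what a per-token loss helper of this kind typically looks like; it illustrates the standard shifted cross-entropy approach and is not a reconstruction of the deleted implementation.

# Hedged sketch of a per-token "surprise" score, similar in spirit to the
# calculate_loss() helper used above (exact deleted implementation unknown).
import torch
from torch.nn.functional import cross_entropy

def per_token_scores(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # logits: (seq_len, vocab_size); input_ids: (seq_len,)
    # A causal LM predicts token i from positions < i, so shift by one.
    loss = cross_entropy(logits[:-1], input_ids[1:], reduction="none")  # (seq_len - 1,)
    loss = loss / loss.max().clamp(min=1e-8)  # normalise into [0, 1] for highlighting
    # The first token has no prediction, so give it a neutral score of 0.
    return torch.cat([loss.new_zeros(1), loss])

In this sketch, tokens with scores near 1 are the ones the model found most surprising.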
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrwbdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrwbdec.c
deleted file mode 100644
index 9d75b972fa796c35e60f6ee6d0f26c48be00027a..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/amrwbdec.c
+++ /dev/null
@@ -1,1309 +0,0 @@
-/*
- * AMR wideband decoder
- * Copyright (c) 2010 Marcelo Galvao Povoa
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AMR wideband decoder
- */
-
-#include "config.h"
-
-#include "libavutil/channel_layout.h"
-#include "libavutil/common.h"
-#include "libavutil/lfg.h"
-
-#include "avcodec.h"
-#include "lsp.h"
-#include "celp_filters.h"
-#include "celp_math.h"
-#include "acelp_filters.h"
-#include "acelp_vectors.h"
-#include "acelp_pitch_delay.h"
-#include "codec_internal.h"
-#include "decode.h"
-
-#define AMR_USE_16BIT_TABLES
-#include "amr.h"
-
-#include "amrwbdata.h"
-#if ARCH_MIPS
-#include "mips/amrwbdec_mips.h"
-#endif /* ARCH_MIPS */
-
-typedef struct AMRWBContext {
- AMRWBFrame frame; ///< AMRWB parameters decoded from bitstream
- enum Mode fr_cur_mode; ///< mode index of current frame
- uint8_t fr_quality; ///< frame quality index (FQI)
- float isf_cur[LP_ORDER]; ///< working ISF vector from current frame
- float isf_q_past[LP_ORDER]; ///< quantized ISF vector of the previous frame
- float isf_past_final[LP_ORDER]; ///< final processed ISF vector of the previous frame
- double isp[4][LP_ORDER]; ///< ISP vectors from current frame
- double isp_sub4_past[LP_ORDER]; ///< ISP vector for the 4th subframe of the previous frame
-
- float lp_coef[4][LP_ORDER]; ///< Linear Prediction Coefficients from ISP vector
-
- uint8_t base_pitch_lag; ///< integer part of pitch lag for the next relative subframe
- uint8_t pitch_lag_int; ///< integer part of pitch lag of the previous subframe
-
- float excitation_buf[AMRWB_P_DELAY_MAX + LP_ORDER + 2 + AMRWB_SFR_SIZE]; ///< current excitation and all necessary excitation history
- float *excitation; ///< points to current excitation in excitation_buf[]
-
- float pitch_vector[AMRWB_SFR_SIZE]; ///< adaptive codebook (pitch) vector for current subframe
- float fixed_vector[AMRWB_SFR_SIZE]; ///< algebraic codebook (fixed) vector for current subframe
-
- float prediction_error[4]; ///< quantified prediction errors {20log10(^gamma_gc)} for previous four subframes
- float pitch_gain[6]; ///< quantified pitch gains for the current and previous five subframes
- float fixed_gain[2]; ///< quantified fixed gains for the current and previous subframes
-
- float tilt_coef; ///< {beta_1} related to the voicing of the previous subframe
-
- float prev_sparse_fixed_gain; ///< previous fixed gain; used by anti-sparseness to determine "onset"
- uint8_t prev_ir_filter_nr; ///< previous impulse response filter "impNr": 0 - strong, 1 - medium, 2 - none
- float prev_tr_gain; ///< previous initial gain used by noise enhancer for threshold
-
- float samples_az[LP_ORDER + AMRWB_SFR_SIZE]; ///< low-band samples and memory from synthesis at 12.8kHz
- float samples_up[UPS_MEM_SIZE + AMRWB_SFR_SIZE]; ///< low-band samples and memory processed for upsampling
- float samples_hb[LP_ORDER_16k + AMRWB_SFR_SIZE_16k]; ///< high-band samples and memory from synthesis at 16kHz
-
- float hpf_31_mem[2], hpf_400_mem[2]; ///< previous values in the high pass filters
- float demph_mem[1]; ///< previous value in the de-emphasis filter
- float bpf_6_7_mem[HB_FIR_SIZE]; ///< previous values in the high-band band pass filter
- float lpf_7_mem[HB_FIR_SIZE]; ///< previous values in the high-band low pass filter
-
- AVLFG prng; ///< random number generator for white noise excitation
- uint8_t first_frame; ///< flag active during decoding of the first frame
- ACELPFContext acelpf_ctx; ///< context for filters for ACELP-based codecs
- ACELPVContext acelpv_ctx; ///< context for vector operations for ACELP-based codecs
- CELPFContext celpf_ctx; ///< context for filters for CELP-based codecs
- CELPMContext celpm_ctx; ///< context for fixed point math operations
-
-} AMRWBContext;
-
-typedef struct AMRWBChannelsContext {
- AMRWBContext ch[2];
-} AMRWBChannelsContext;
-
-static av_cold int amrwb_decode_init(AVCodecContext *avctx)
-{
- AMRWBChannelsContext *s = avctx->priv_data;
- int i;
-
- if (avctx->ch_layout.nb_channels > 2) {
- avpriv_report_missing_feature(avctx, ">2 channel AMR");
- return AVERROR_PATCHWELCOME;
- }
-
- if (!avctx->ch_layout.nb_channels) {
- av_channel_layout_uninit(&avctx->ch_layout);
- avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO;
- }
- if (!avctx->sample_rate)
- avctx->sample_rate = 16000;
- avctx->sample_fmt = AV_SAMPLE_FMT_FLTP;
-
- for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) {
- AMRWBContext *ctx = &s->ch[ch];
-
- av_lfg_init(&ctx->prng, 1);
-
- ctx->excitation = &ctx->excitation_buf[AMRWB_P_DELAY_MAX + LP_ORDER + 1];
- ctx->first_frame = 1;
-
- for (i = 0; i < LP_ORDER; i++)
- ctx->isf_past_final[i] = isf_init[i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 4; i++)
- ctx->prediction_error[i] = MIN_ENERGY;
-
- ff_acelp_filter_init(&ctx->acelpf_ctx);
- ff_acelp_vectors_init(&ctx->acelpv_ctx);
- ff_celp_filter_init(&ctx->celpf_ctx);
- ff_celp_math_init(&ctx->celpm_ctx);
- }
-
- return 0;
-}
-
-/**
- * Decode the frame header in the "MIME/storage" format. This format
- * is simpler and does not carry the auxiliary frame information.
- *
- * @param[in] ctx The Context
- * @param[in] buf Pointer to the input buffer
- *
- * @return The decoded header length in bytes
- */
-static int decode_mime_header(AMRWBContext *ctx, const uint8_t *buf)
-{
- /* Decode frame header (1st octet) */
- ctx->fr_cur_mode = buf[0] >> 3 & 0x0F;
- ctx->fr_quality = (buf[0] & 0x4) == 0x4;
-
- return 1;
-}
-
-/**
- * Decode quantized ISF vectors using 36-bit indexes (6K60 mode only).
- *
- * @param[in] ind Array of 5 indexes
- * @param[out] isf_q Buffer for isf_q[LP_ORDER]
- */
-static void decode_isf_indices_36b(uint16_t *ind, float *isf_q)
-{
- int i;
-
- for (i = 0; i < 9; i++)
- isf_q[i] = dico1_isf[ind[0]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 7; i++)
- isf_q[i + 9] = dico2_isf[ind[1]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 5; i++)
- isf_q[i] += dico21_isf_36b[ind[2]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 4; i++)
- isf_q[i + 5] += dico22_isf_36b[ind[3]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 7; i++)
- isf_q[i + 9] += dico23_isf_36b[ind[4]][i] * (1.0f / (1 << 15));
-}
-
-/**
- * Decode quantized ISF vectors using 46-bit indexes (except 6K60 mode).
- *
- * @param[in] ind Array of 7 indexes
- * @param[out] isf_q Buffer for isf_q[LP_ORDER]
- */
-static void decode_isf_indices_46b(uint16_t *ind, float *isf_q)
-{
- int i;
-
- for (i = 0; i < 9; i++)
- isf_q[i] = dico1_isf[ind[0]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 7; i++)
- isf_q[i + 9] = dico2_isf[ind[1]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 3; i++)
- isf_q[i] += dico21_isf[ind[2]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 3; i++)
- isf_q[i + 3] += dico22_isf[ind[3]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 3; i++)
- isf_q[i + 6] += dico23_isf[ind[4]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 3; i++)
- isf_q[i + 9] += dico24_isf[ind[5]][i] * (1.0f / (1 << 15));
-
- for (i = 0; i < 4; i++)
- isf_q[i + 12] += dico25_isf[ind[6]][i] * (1.0f / (1 << 15));
-}
-
-/**
- * Apply mean and past ISF values using the prediction factor.
- * Updates past ISF vector.
- *
- * @param[in,out] isf_q Current quantized ISF
- * @param[in,out] isf_past Past quantized ISF
- */
-static void isf_add_mean_and_past(float *isf_q, float *isf_past)
-{
- int i;
- float tmp;
-
- for (i = 0; i < LP_ORDER; i++) {
- tmp = isf_q[i];
- isf_q[i] += isf_mean[i] * (1.0f / (1 << 15));
- isf_q[i] += PRED_FACTOR * isf_past[i];
- isf_past[i] = tmp;
- }
-}
-
-/**
- * Interpolate the fourth ISP vector from current and past frames
- * to obtain an ISP vector for each subframe.
- *
- * @param[in,out] isp_q ISPs for each subframe
- * @param[in] isp4_past Past ISP for subframe 4
- */
-static void interpolate_isp(double isp_q[4][LP_ORDER], const double *isp4_past)
-{
- int i, k;
-
- for (k = 0; k < 3; k++) {
- float c = isfp_inter[k];
- for (i = 0; i < LP_ORDER; i++)
- isp_q[k][i] = (1.0 - c) * isp4_past[i] + c * isp_q[3][i];
- }
-}
-
-/**
- * Decode an adaptive codebook index into pitch lag (except 6k60, 8k85 modes).
- * Calculate integer lag and fractional lag always using 1/4 resolution.
- * In 1st and 3rd subframes the index is relative to last subframe integer lag.
- *
- * @param[out] lag_int Decoded integer pitch lag
- * @param[out] lag_frac Decoded fractional pitch lag
- * @param[in] pitch_index Adaptive codebook pitch index
- * @param[in,out] base_lag_int Base integer lag used in relative subframes
- * @param[in] subframe Current subframe index (0 to 3)
- */
-static void decode_pitch_lag_high(int *lag_int, int *lag_frac, int pitch_index,
- uint8_t *base_lag_int, int subframe)
-{
- if (subframe == 0 || subframe == 2) {
- if (pitch_index < 376) {
- *lag_int = (pitch_index + 137) >> 2;
- *lag_frac = pitch_index - (*lag_int << 2) + 136;
- } else if (pitch_index < 440) {
- *lag_int = (pitch_index + 257 - 376) >> 1;
- *lag_frac = (pitch_index - (*lag_int << 1) + 256 - 376) * 2;
- /* the actual resolution is 1/2 but expressed as 1/4 */
- } else {
- *lag_int = pitch_index - 280;
- *lag_frac = 0;
- }
- /* minimum lag for next subframe */
- *base_lag_int = av_clip(*lag_int - 8 - (*lag_frac < 0),
- AMRWB_P_DELAY_MIN, AMRWB_P_DELAY_MAX - 15);
- // XXX: the spec states clearly that *base_lag_int should be
- // the nearest integer to *lag_int (minus 8), but the ref code
- // actually always uses its floor, I'm following the latter
- } else {
- *lag_int = (pitch_index + 1) >> 2;
- *lag_frac = pitch_index - (*lag_int << 2);
- *lag_int += *base_lag_int;
- }
-}
-
-/**
- * Decode an adaptive codebook index into pitch lag for 8k85 and 6k60 modes.
- * The description is analogous to decode_pitch_lag_high, but in 6k60 the
- * relative index is used for all subframes except the first.
- */
-static void decode_pitch_lag_low(int *lag_int, int *lag_frac, int pitch_index,
- uint8_t *base_lag_int, int subframe, enum Mode mode)
-{
- if (subframe == 0 || (subframe == 2 && mode != MODE_6k60)) {
- if (pitch_index < 116) {
- *lag_int = (pitch_index + 69) >> 1;
- *lag_frac = (pitch_index - (*lag_int << 1) + 68) * 2;
- } else {
- *lag_int = pitch_index - 24;
- *lag_frac = 0;
- }
- // XXX: same problem as before
- *base_lag_int = av_clip(*lag_int - 8 - (*lag_frac < 0),
- AMRWB_P_DELAY_MIN, AMRWB_P_DELAY_MAX - 15);
- } else {
- *lag_int = (pitch_index + 1) >> 1;
- *lag_frac = (pitch_index - (*lag_int << 1)) * 2;
- *lag_int += *base_lag_int;
- }
-}
-
-/**
- * Find the pitch vector by interpolating the past excitation at the
- * pitch delay, which is obtained in this function.
- *
- * @param[in,out] ctx The context
- * @param[in] amr_subframe Current subframe data
- * @param[in] subframe Current subframe index (0 to 3)
- */
-static void decode_pitch_vector(AMRWBContext *ctx,
- const AMRWBSubFrame *amr_subframe,
- const int subframe)
-{
- int pitch_lag_int, pitch_lag_frac;
- int i;
- float *exc = ctx->excitation;
- enum Mode mode = ctx->fr_cur_mode;
-
- if (mode <= MODE_8k85) {
- decode_pitch_lag_low(&pitch_lag_int, &pitch_lag_frac, amr_subframe->adap,
- &ctx->base_pitch_lag, subframe, mode);
- } else
- decode_pitch_lag_high(&pitch_lag_int, &pitch_lag_frac, amr_subframe->adap,
- &ctx->base_pitch_lag, subframe);
-
- ctx->pitch_lag_int = pitch_lag_int;
- pitch_lag_int += pitch_lag_frac > 0;
-
- /* Calculate the pitch vector by interpolating the past excitation at the
- pitch lag using a hamming windowed sinc function */
- ctx->acelpf_ctx.acelp_interpolatef(exc,
- exc + 1 - pitch_lag_int,
- ac_inter, 4,
- pitch_lag_frac + (pitch_lag_frac > 0 ? 0 : 4),
- LP_ORDER, AMRWB_SFR_SIZE + 1);
-
- /* Check which pitch signal path should be used
- * 6k60 and 8k85 modes have the ltp flag set to 0 */
- if (amr_subframe->ltp) {
- memcpy(ctx->pitch_vector, exc, AMRWB_SFR_SIZE * sizeof(float));
- } else {
- for (i = 0; i < AMRWB_SFR_SIZE; i++)
- ctx->pitch_vector[i] = 0.18 * exc[i - 1] + 0.64 * exc[i] +
- 0.18 * exc[i + 1];
- memcpy(exc, ctx->pitch_vector, AMRWB_SFR_SIZE * sizeof(float));
- }
-}
-
-/** Get x bits in the index interval [lsb,lsb+len-1] inclusive */
-#define BIT_STR(x,lsb,len) av_mod_uintp2((x) >> (lsb), (len))
-
-/** Get the bit at specified position */
-#define BIT_POS(x, p) (((x) >> (p)) & 1)
-
-/**
- * The next six functions decode_[i]p_track decode the positions and
- * amplitudes (-1 or 1) of exactly i pulses in a subframe track using
- * an encoded pulse indexing (TS 26.190 section 5.8.2).
- *
- * The results are given in out[], in which a negative number means
- * amplitude -1 and vice versa (i.e., ampl(x) = x / abs(x) ).
- *
- * @param[out] out Output buffer (writes i elements)
- * @param[in] code Pulse index (no. of bits varies, see below)
- * @param[in] m (log2) Number of potential positions
- * @param[in] off Offset for decoded positions
- */
-static inline void decode_1p_track(int *out, int code, int m, int off)
-{
- int pos = BIT_STR(code, 0, m) + off; ///code: m+1 bits
-
- out[0] = BIT_POS(code, m) ? -pos : pos;
-}
-
-static inline void decode_2p_track(int *out, int code, int m, int off) ///code: 2m+1 bits
-{
- int pos0 = BIT_STR(code, m, m) + off;
- int pos1 = BIT_STR(code, 0, m) + off;
-
- out[0] = BIT_POS(code, 2*m) ? -pos0 : pos0;
- out[1] = BIT_POS(code, 2*m) ? -pos1 : pos1;
- out[1] = pos0 > pos1 ? -out[1] : out[1];
-}
-
-static void decode_3p_track(int *out, int code, int m, int off) ///code: 3m+1 bits
-{
- int half_2p = BIT_POS(code, 2*m - 1) << (m - 1);
-
- decode_2p_track(out, BIT_STR(code, 0, 2*m - 1),
- m - 1, off + half_2p);
- decode_1p_track(out + 2, BIT_STR(code, 2*m, m + 1), m, off);
-}
-
-static void decode_4p_track(int *out, int code, int m, int off) ///code: 4m bits
-{
- int half_4p, subhalf_2p;
- int b_offset = 1 << (m - 1);
-
- switch (BIT_STR(code, 4*m - 2, 2)) { /* case ID (2 bits) */
- case 0: /* 0 pulses in A, 4 pulses in B or vice versa */
- half_4p = BIT_POS(code, 4*m - 3) << (m - 1); // which has 4 pulses
- subhalf_2p = BIT_POS(code, 2*m - 3) << (m - 2);
-
- decode_2p_track(out, BIT_STR(code, 0, 2*m - 3),
- m - 2, off + half_4p + subhalf_2p);
- decode_2p_track(out + 2, BIT_STR(code, 2*m - 2, 2*m - 1),
- m - 1, off + half_4p);
- break;
- case 1: /* 1 pulse in A, 3 pulses in B */
- decode_1p_track(out, BIT_STR(code, 3*m - 2, m),
- m - 1, off);
- decode_3p_track(out + 1, BIT_STR(code, 0, 3*m - 2),
- m - 1, off + b_offset);
- break;
- case 2: /* 2 pulses in each half */
- decode_2p_track(out, BIT_STR(code, 2*m - 1, 2*m - 1),
- m - 1, off);
- decode_2p_track(out + 2, BIT_STR(code, 0, 2*m - 1),
- m - 1, off + b_offset);
- break;
- case 3: /* 3 pulses in A, 1 pulse in B */
- decode_3p_track(out, BIT_STR(code, m, 3*m - 2),
- m - 1, off);
- decode_1p_track(out + 3, BIT_STR(code, 0, m),
- m - 1, off + b_offset);
- break;
- }
-}
-
-static void decode_5p_track(int *out, int code, int m, int off) ///code: 5m bits
-{
- int half_3p = BIT_POS(code, 5*m - 1) << (m - 1);
-
- decode_3p_track(out, BIT_STR(code, 2*m + 1, 3*m - 2),
- m - 1, off + half_3p);
-
- decode_2p_track(out + 3, BIT_STR(code, 0, 2*m + 1), m, off);
-}
-
-static void decode_6p_track(int *out, int code, int m, int off) ///code: 6m-2 bits
-{
- int b_offset = 1 << (m - 1);
- /* which half has more pulses in cases 0 to 2 */
- int half_more = BIT_POS(code, 6*m - 5) << (m - 1);
- int half_other = b_offset - half_more;
-
- switch (BIT_STR(code, 6*m - 4, 2)) { /* case ID (2 bits) */
- case 0: /* 0 pulses in A, 6 pulses in B or vice versa */
- decode_1p_track(out, BIT_STR(code, 0, m),
- m - 1, off + half_more);
- decode_5p_track(out + 1, BIT_STR(code, m, 5*m - 5),
- m - 1, off + half_more);
- break;
- case 1: /* 1 pulse in A, 5 pulses in B or vice versa */
- decode_1p_track(out, BIT_STR(code, 0, m),
- m - 1, off + half_other);
- decode_5p_track(out + 1, BIT_STR(code, m, 5*m - 5),
- m - 1, off + half_more);
- break;
- case 2: /* 2 pulses in A, 4 pulses in B or vice versa */
- decode_2p_track(out, BIT_STR(code, 0, 2*m - 1),
- m - 1, off + half_other);
- decode_4p_track(out + 2, BIT_STR(code, 2*m - 1, 4*m - 4),
- m - 1, off + half_more);
- break;
- case 3: /* 3 pulses in A, 3 pulses in B */
- decode_3p_track(out, BIT_STR(code, 3*m - 2, 3*m - 2),
- m - 1, off);
- decode_3p_track(out + 3, BIT_STR(code, 0, 3*m - 2),
- m - 1, off + b_offset);
- break;
- }
-}
-
-/**
- * Decode the algebraic codebook index to pulse positions and signs,
- * then construct the algebraic codebook vector.
- *
- * @param[out] fixed_vector Buffer for the fixed codebook excitation
- * @param[in] pulse_hi MSBs part of the pulse index array (higher modes only)
- * @param[in] pulse_lo LSBs part of the pulse index array
- * @param[in] mode Mode of the current frame
- */
-static void decode_fixed_vector(float *fixed_vector, const uint16_t *pulse_hi,
- const uint16_t *pulse_lo, const enum Mode mode)
-{
- /* sig_pos stores for each track the decoded pulse position indexes
- * (1-based) multiplied by its corresponding amplitude (+1 or -1) */
- int sig_pos[4][6];
- int spacing = (mode == MODE_6k60) ? 2 : 4;
- int i, j;
-
- switch (mode) {
- case MODE_6k60:
- for (i = 0; i < 2; i++)
- decode_1p_track(sig_pos[i], pulse_lo[i], 5, 1);
- break;
- case MODE_8k85:
- for (i = 0; i < 4; i++)
- decode_1p_track(sig_pos[i], pulse_lo[i], 4, 1);
- break;
- case MODE_12k65:
- for (i = 0; i < 4; i++)
- decode_2p_track(sig_pos[i], pulse_lo[i], 4, 1);
- break;
- case MODE_14k25:
- for (i = 0; i < 2; i++)
- decode_3p_track(sig_pos[i], pulse_lo[i], 4, 1);
- for (i = 2; i < 4; i++)
- decode_2p_track(sig_pos[i], pulse_lo[i], 4, 1);
- break;
- case MODE_15k85:
- for (i = 0; i < 4; i++)
- decode_3p_track(sig_pos[i], pulse_lo[i], 4, 1);
- break;
- case MODE_18k25:
- for (i = 0; i < 4; i++)
- decode_4p_track(sig_pos[i], (int) pulse_lo[i] +
- ((int) pulse_hi[i] << 14), 4, 1);
- break;
- case MODE_19k85:
- for (i = 0; i < 2; i++)
- decode_5p_track(sig_pos[i], (int) pulse_lo[i] +
- ((int) pulse_hi[i] << 10), 4, 1);
- for (i = 2; i < 4; i++)
- decode_4p_track(sig_pos[i], (int) pulse_lo[i] +
- ((int) pulse_hi[i] << 14), 4, 1);
- break;
- case MODE_23k05:
- case MODE_23k85:
- for (i = 0; i < 4; i++)
- decode_6p_track(sig_pos[i], (int) pulse_lo[i] +
- ((int) pulse_hi[i] << 11), 4, 1);
- break;
- }
-
- memset(fixed_vector, 0, sizeof(float) * AMRWB_SFR_SIZE);
-
- for (i = 0; i < 4; i++)
- for (j = 0; j < pulses_nb_per_mode_tr[mode][i]; j++) {
- int pos = (FFABS(sig_pos[i][j]) - 1) * spacing + i;
-
- fixed_vector[pos] += sig_pos[i][j] < 0 ? -1.0 : 1.0;
- }
-}
-
-/**
- * Decode pitch gain and fixed gain correction factor.
- *
- * @param[in] vq_gain Vector-quantized index for gains
- * @param[in] mode Mode of the current frame
- * @param[out] fixed_gain_factor Decoded fixed gain correction factor
- * @param[out] pitch_gain Decoded pitch gain
- */
-static void decode_gains(const uint8_t vq_gain, const enum Mode mode,
- float *fixed_gain_factor, float *pitch_gain)
-{
- const int16_t *gains = (mode <= MODE_8k85 ? qua_gain_6b[vq_gain] :
- qua_gain_7b[vq_gain]);
-
- *pitch_gain = gains[0] * (1.0f / (1 << 14));
- *fixed_gain_factor = gains[1] * (1.0f / (1 << 11));
-}
-
-/**
- * Apply pitch sharpening filters to the fixed codebook vector.
- *
- * @param[in] ctx The context
- * @param[in,out] fixed_vector Fixed codebook excitation
- */
-// XXX: Spec states this procedure should be applied when the pitch
-// lag is less than 64, but this checking seems absent in reference and AMR-NB
-static void pitch_sharpening(AMRWBContext *ctx, float *fixed_vector)
-{
- int i;
-
- /* Tilt part */
- for (i = AMRWB_SFR_SIZE - 1; i != 0; i--)
- fixed_vector[i] -= fixed_vector[i - 1] * ctx->tilt_coef;
-
- /* Periodicity enhancement part */
- for (i = ctx->pitch_lag_int; i < AMRWB_SFR_SIZE; i++)
- fixed_vector[i] += fixed_vector[i - ctx->pitch_lag_int] * 0.85;
-}
-
-/**
- * Calculate the voicing factor (-1.0 = unvoiced to 1.0 = voiced).
- *
- * @param[in] p_vector, f_vector Pitch and fixed excitation vectors
- * @param[in] p_gain, f_gain Pitch and fixed gains
- * @param[in] ctx The context
- */
-// XXX: There is something wrong with the precision here! The magnitudes
-// of the energies are not correct. Please check the reference code carefully
-static float voice_factor(float *p_vector, float p_gain,
- float *f_vector, float f_gain,
- CELPMContext *ctx)
-{
- double p_ener = (double) ctx->dot_productf(p_vector, p_vector,
- AMRWB_SFR_SIZE) *
- p_gain * p_gain;
- double f_ener = (double) ctx->dot_productf(f_vector, f_vector,
- AMRWB_SFR_SIZE) *
- f_gain * f_gain;
-
- return (p_ener - f_ener) / (p_ener + f_ener + 0.01);
-}
-
-/**
- * Reduce fixed vector sparseness by smoothing with one of three IR filters,
- * also known as "adaptive phase dispersion".
- *
- * @param[in] ctx The context
- * @param[in,out] fixed_vector Unfiltered fixed vector
- * @param[out] buf Space for modified vector if necessary
- *
- * @return The potentially overwritten filtered fixed vector address
- */
-static float *anti_sparseness(AMRWBContext *ctx,
- float *fixed_vector, float *buf)
-{
- int ir_filter_nr;
-
- if (ctx->fr_cur_mode > MODE_8k85) // no filtering in higher modes
- return fixed_vector;
-
- if (ctx->pitch_gain[0] < 0.6) {
- ir_filter_nr = 0; // strong filtering
- } else if (ctx->pitch_gain[0] < 0.9) {
- ir_filter_nr = 1; // medium filtering
- } else
- ir_filter_nr = 2; // no filtering
-
- /* detect 'onset' */
- if (ctx->fixed_gain[0] > 3.0 * ctx->fixed_gain[1]) {
- if (ir_filter_nr < 2)
- ir_filter_nr++;
- } else {
- int i, count = 0;
-
- for (i = 0; i < 6; i++)
- if (ctx->pitch_gain[i] < 0.6)
- count++;
-
- if (count > 2)
- ir_filter_nr = 0;
-
- if (ir_filter_nr > ctx->prev_ir_filter_nr + 1)
- ir_filter_nr--;
- }
-
- /* update ir filter strength history */
- ctx->prev_ir_filter_nr = ir_filter_nr;
-
- ir_filter_nr += (ctx->fr_cur_mode == MODE_8k85);
-
- if (ir_filter_nr < 2) {
- int i;
- const float *coef = ir_filters_lookup[ir_filter_nr];
-
- /* Circular convolution code in the reference
- * decoder was modified to avoid using one
- * extra array. The filtered vector is given by:
- *
- * c2(n) = sum(i,0,len-1){ c(i) * coef( (n - i + len) % len ) }
- */
-
- memset(buf, 0, sizeof(float) * AMRWB_SFR_SIZE);
- for (i = 0; i < AMRWB_SFR_SIZE; i++)
- if (fixed_vector[i])
- ff_celp_circ_addf(buf, buf, coef, i, fixed_vector[i],
- AMRWB_SFR_SIZE);
- fixed_vector = buf;
- }
-
- return fixed_vector;
-}
-
-/**
- * Calculate a stability factor {teta} based on distance between
- * current and past isf. A value of 1 shows maximum signal stability.
- */
-static float stability_factor(const float *isf, const float *isf_past)
-{
- int i;
- float acc = 0.0;
-
- for (i = 0; i < LP_ORDER - 1; i++)
- acc += (isf[i] - isf_past[i]) * (isf[i] - isf_past[i]);
-
- // XXX: This part is not so clear from the reference code
- // the result is more accurate changing the "/ 256" to "* 512"
- return FFMAX(0.0, 1.25 - acc * 0.8 * 512);
-}
-
-/**
- * Apply a non-linear fixed gain smoothing in order to reduce
- * fluctuation in the energy of excitation.
- *
- * @param[in] fixed_gain Unsmoothed fixed gain
- * @param[in,out] prev_tr_gain Previous threshold gain (updated)
- * @param[in] voice_fac Frame voicing factor
- * @param[in] stab_fac Frame stability factor
- *
- * @return The smoothed gain
- */
-static float noise_enhancer(float fixed_gain, float *prev_tr_gain,
- float voice_fac, float stab_fac)
-{
- float sm_fac = 0.5 * (1 - voice_fac) * stab_fac;
- float g0;
-
- // XXX: the following fixed-point constants used to in(de)crement
- // gain by 1.5dB were taken from the reference code, maybe it could
- // be simpler
- if (fixed_gain < *prev_tr_gain) {
- g0 = FFMIN(*prev_tr_gain, fixed_gain + fixed_gain *
- (6226 * (1.0f / (1 << 15)))); // +1.5 dB
- } else
- g0 = FFMAX(*prev_tr_gain, fixed_gain *
- (27536 * (1.0f / (1 << 15)))); // -1.5 dB
-
- *prev_tr_gain = g0; // update next frame threshold
-
- return sm_fac * g0 + (1 - sm_fac) * fixed_gain;
-}
-
-/**
- * Filter the fixed_vector to emphasize the higher frequencies.
- *
- * @param[in,out] fixed_vector Fixed codebook vector
- * @param[in] voice_fac Frame voicing factor
- */
-static void pitch_enhancer(float *fixed_vector, float voice_fac)
-{
- int i;
- float cpe = 0.125 * (1 + voice_fac);
- float last = fixed_vector[0]; // holds c(i - 1)
-
- fixed_vector[0] -= cpe * fixed_vector[1];
-
- for (i = 1; i < AMRWB_SFR_SIZE - 1; i++) {
- float cur = fixed_vector[i];
-
- fixed_vector[i] -= cpe * (last + fixed_vector[i + 1]);
- last = cur;
- }
-
- fixed_vector[AMRWB_SFR_SIZE - 1] -= cpe * last;
-}
-
-/**
- * Conduct 16th order linear predictive coding synthesis from excitation.
- *
- * @param[in] ctx Pointer to the AMRWBContext
- * @param[in] lpc Pointer to the LPC coefficients
- * @param[out] excitation Buffer for synthesis final excitation
- * @param[in] fixed_gain Fixed codebook gain for synthesis
- * @param[in] fixed_vector Algebraic codebook vector
- * @param[in,out] samples Pointer to the output samples and memory
- */
-static void synthesis(AMRWBContext *ctx, float *lpc, float *excitation,
- float fixed_gain, const float *fixed_vector,
- float *samples)
-{
- ctx->acelpv_ctx.weighted_vector_sumf(excitation, ctx->pitch_vector, fixed_vector,
- ctx->pitch_gain[0], fixed_gain, AMRWB_SFR_SIZE);
-
- /* emphasize pitch vector contribution in low bitrate modes */
- if (ctx->pitch_gain[0] > 0.5 && ctx->fr_cur_mode <= MODE_8k85) {
- int i;
- float energy = ctx->celpm_ctx.dot_productf(excitation, excitation,
- AMRWB_SFR_SIZE);
-
-        // XXX: Weird part in both ref code and spec. An unknown parameter
- // {beta} seems to be identical to the current pitch gain
- float pitch_factor = 0.25 * ctx->pitch_gain[0] * ctx->pitch_gain[0];
-
- for (i = 0; i < AMRWB_SFR_SIZE; i++)
- excitation[i] += pitch_factor * ctx->pitch_vector[i];
-
- ff_scale_vector_to_given_sum_of_squares(excitation, excitation,
- energy, AMRWB_SFR_SIZE);
- }
-
- ctx->celpf_ctx.celp_lp_synthesis_filterf(samples, lpc, excitation,
- AMRWB_SFR_SIZE, LP_ORDER);
-}
-
-/**
- * Apply to synthesis a de-emphasis filter of the form:
- * H(z) = 1 / (1 - m * z^-1)
- *
- * @param[out] out Output buffer
- * @param[in] in Input samples array with in[-1]
- * @param[in] m Filter coefficient
- * @param[in,out] mem State from last filtering
- */
-static void de_emphasis(float *out, float *in, float m, float mem[1])
-{
- int i;
-
- out[0] = in[0] + m * mem[0];
-
- for (i = 1; i < AMRWB_SFR_SIZE; i++)
- out[i] = in[i] + out[i - 1] * m;
-
- mem[0] = out[AMRWB_SFR_SIZE - 1];
-}
-
-/**
- * Upsample a signal by 5/4 ratio (from 12.8kHz to 16kHz) using
- * a FIR interpolation filter. Uses past data from before *in address.
- *
- * @param[out] out Buffer for interpolated signal
- * @param[in] in Current signal data (length 0.8*o_size)
- * @param[in] o_size Output signal length
- * @param[in] ctx The context
- */
-static void upsample_5_4(float *out, const float *in, int o_size, CELPMContext *ctx)
-{
- const float *in0 = in - UPS_FIR_SIZE + 1;
- int i, j, k;
- int int_part = 0, frac_part;
-
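-    /* Every 4 input samples produce 5 output samples: one is copied
-     * directly, the other 4 are interpolated with the FIR phases */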
- i = 0;
- for (j = 0; j < o_size / 5; j++) {
- out[i] = in[int_part];
- frac_part = 4;
- i++;
-
- for (k = 1; k < 5; k++) {
- out[i] = ctx->dot_productf(in0 + int_part,
- upsample_fir[4 - frac_part],
- UPS_MEM_SIZE);
- int_part++;
- frac_part--;
- i++;
- }
- }
-}
-
-/**
- * Calculate the high-band gain based on encoded index (23k85 mode) or
- * on the low-band speech signal and the Voice Activity Detection flag.
- *
- * @param[in] ctx The context
- * @param[in] synth LB speech synthesis at 12.8k
- * @param[in] hb_idx Gain index for mode 23k85 only
- * @param[in] vad VAD flag for the frame
- */
-static float find_hb_gain(AMRWBContext *ctx, const float *synth,
- uint16_t hb_idx, uint8_t vad)
-{
- int wsp = (vad > 0);
- float tilt;
- float tmp;
-
- if (ctx->fr_cur_mode == MODE_23k85)
- return qua_hb_gain[hb_idx] * (1.0f / (1 << 14));
-
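-    /* Estimate the spectral tilt as the lag-1 autocorrelation of the
-     * low-band synthesis, normalized by its energy */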
- tmp = ctx->celpm_ctx.dot_productf(synth, synth + 1, AMRWB_SFR_SIZE - 1);
-
- if (tmp > 0) {
- tilt = tmp / ctx->celpm_ctx.dot_productf(synth, synth, AMRWB_SFR_SIZE);
- } else
- tilt = 0;
-
- /* return gain bounded by [0.1, 1.0] */
- return av_clipf((1.0 - tilt) * (1.25 - 0.25 * wsp), 0.1, 1.0);
-}
-
-/**
- * Generate the high-band excitation with the same energy from the lower
- * one and scaled by the given gain.
- *
- * @param[in] ctx The context
- * @param[out] hb_exc Buffer for the excitation
- * @param[in] synth_exc Low-band excitation used for synthesis
- * @param[in] hb_gain Wanted excitation gain
- */
-static void scaled_hb_excitation(AMRWBContext *ctx, float *hb_exc,
- const float *synth_exc, float hb_gain)
-{
- int i;
- float energy = ctx->celpm_ctx.dot_productf(synth_exc, synth_exc,
- AMRWB_SFR_SIZE);
-
- /* Generate a white-noise excitation */
- for (i = 0; i < AMRWB_SFR_SIZE_16k; i++)
- hb_exc[i] = 32768.0 - (uint16_t) av_lfg_get(&ctx->prng);
-
- ff_scale_vector_to_given_sum_of_squares(hb_exc, hb_exc,
- energy * hb_gain * hb_gain,
- AMRWB_SFR_SIZE_16k);
-}
-
-/**
- * Calculate the auto-correlation for the ISF difference vector.
- */
-static float auto_correlation(float *diff_isf, float mean, int lag)
-{
- int i;
- float sum = 0.0;
-
- for (i = 7; i < LP_ORDER - 2; i++) {
- float prod = (diff_isf[i] - mean) * (diff_isf[i - lag] - mean);
- sum += prod * prod;
- }
- return sum;
-}
-
-/**
- * Extrapolate an ISF vector to the 16kHz range (20th order LP)
- * used at mode 6k60 LP filter for the high frequency band.
- *
- * @param[out] isf Buffer for extrapolated isf; contains LP_ORDER
- * values on input
- */
-static void extrapolate_isf(float isf[LP_ORDER_16k])
-{
- float diff_isf[LP_ORDER - 2], diff_mean;
- float corr_lag[3];
- float est, scale;
- int i, j, i_max_corr;
-
- isf[LP_ORDER_16k - 1] = isf[LP_ORDER - 1];
-
- /* Calculate the difference vector */
- for (i = 0; i < LP_ORDER - 2; i++)
- diff_isf[i] = isf[i + 1] - isf[i];
-
- diff_mean = 0.0;
- for (i = 2; i < LP_ORDER - 2; i++)
- diff_mean += diff_isf[i] * (1.0f / (LP_ORDER - 4));
-
- /* Find which is the maximum autocorrelation */
- i_max_corr = 0;
- for (i = 0; i < 3; i++) {
- corr_lag[i] = auto_correlation(diff_isf, diff_mean, i + 2);
-
- if (corr_lag[i] > corr_lag[i_max_corr])
- i_max_corr = i;
- }
- i_max_corr++;
-
- for (i = LP_ORDER - 1; i < LP_ORDER_16k - 1; i++)
- isf[i] = isf[i - 1] + isf[i - 1 - i_max_corr]
- - isf[i - 2 - i_max_corr];
-
- /* Calculate an estimate for ISF(18) and scale ISF based on the error */
- est = 7965 + (isf[2] - isf[3] - isf[4]) / 6.0;
- scale = 0.5 * (FFMIN(est, 7600) - isf[LP_ORDER - 2]) /
- (isf[LP_ORDER_16k - 2] - isf[LP_ORDER - 2]);
-
- for (i = LP_ORDER - 1, j = 0; i < LP_ORDER_16k - 1; i++, j++)
- diff_isf[j] = scale * (isf[i] - isf[i - 1]);
-
-    /* Ensure stability of the extrapolated ISF values */
- for (i = 1; i < LP_ORDER_16k - LP_ORDER; i++)
- if (diff_isf[i] + diff_isf[i - 1] < 5.0) {
- if (diff_isf[i] > diff_isf[i - 1]) {
- diff_isf[i - 1] = 5.0 - diff_isf[i];
- } else
- diff_isf[i] = 5.0 - diff_isf[i - 1];
- }
-
- for (i = LP_ORDER - 1, j = 0; i < LP_ORDER_16k - 1; i++, j++)
- isf[i] = isf[i - 1] + diff_isf[j] * (1.0f / (1 << 15));
-
- /* Scale the ISF vector for 16000 Hz */
- for (i = 0; i < LP_ORDER_16k - 1; i++)
- isf[i] *= 0.8;
-}
-
-/**
- * Spectral expand the LP coefficients using the equation:
- * y[i] = x[i] * (gamma ** i)
- *
- * @param[out] out Output buffer (may use input array)
- * @param[in] lpc LP coefficients array
- * @param[in] gamma Weighting factor
- * @param[in] size LP array size
- */
-static void lpc_weighting(float *out, const float *lpc, float gamma, int size)
-{
- int i;
- float fac = gamma;
-
- for (i = 0; i < size; i++) {
- out[i] = lpc[i] * fac;
- fac *= gamma;
- }
-}
-
-/**
- * Conduct 20th order linear predictive coding synthesis for the high
- * frequency band excitation at 16kHz.
- *
- * @param[in] ctx The context
- * @param[in] subframe Current subframe index (0 to 3)
- * @param[in,out] samples Pointer to the output speech samples
- * @param[in] exc Generated white-noise scaled excitation
- * @param[in] isf Current frame isf vector
- * @param[in] isf_past Past frame final isf vector
- */
-static void hb_synthesis(AMRWBContext *ctx, int subframe, float *samples,
- const float *exc, const float *isf, const float *isf_past)
-{
- float hb_lpc[LP_ORDER_16k];
- enum Mode mode = ctx->fr_cur_mode;
-
- if (mode == MODE_6k60) {
- float e_isf[LP_ORDER_16k]; // ISF vector for extrapolation
- double e_isp[LP_ORDER_16k];
-
- ctx->acelpv_ctx.weighted_vector_sumf(e_isf, isf_past, isf, isfp_inter[subframe],
- 1.0 - isfp_inter[subframe], LP_ORDER);
-
- extrapolate_isf(e_isf);
-
- e_isf[LP_ORDER_16k - 1] *= 2.0;
- ff_acelp_lsf2lspd(e_isp, e_isf, LP_ORDER_16k);
- ff_amrwb_lsp2lpc(e_isp, hb_lpc, LP_ORDER_16k);
-
- lpc_weighting(hb_lpc, hb_lpc, 0.9, LP_ORDER_16k);
- } else {
- lpc_weighting(hb_lpc, ctx->lp_coef[subframe], 0.6, LP_ORDER);
- }
-
- ctx->celpf_ctx.celp_lp_synthesis_filterf(samples, hb_lpc, exc, AMRWB_SFR_SIZE_16k,
- (mode == MODE_6k60) ? LP_ORDER_16k : LP_ORDER);
-}
-
-/**
- * Apply a 15th order filter to high-band samples.
- * The filter characteristic depends on the given coefficients.
- *
- * @param[out] out Buffer for filtered output
- * @param[in] fir_coef Filter coefficients
- * @param[in,out] mem State from last filtering (updated)
- * @param[in] in Input speech data (high-band)
- *
- * @remark It is safe to pass the same array in in and out parameters
- */
-
-#ifndef hb_fir_filter
-static void hb_fir_filter(float *out, const float fir_coef[HB_FIR_SIZE + 1],
- float mem[HB_FIR_SIZE], const float *in)
-{
- int i, j;
- float data[AMRWB_SFR_SIZE_16k + HB_FIR_SIZE]; // past and current samples
-
- memcpy(data, mem, HB_FIR_SIZE * sizeof(float));
- memcpy(data + HB_FIR_SIZE, in, AMRWB_SFR_SIZE_16k * sizeof(float));
-
- for (i = 0; i < AMRWB_SFR_SIZE_16k; i++) {
- out[i] = 0.0;
- for (j = 0; j <= HB_FIR_SIZE; j++)
- out[i] += data[i + j] * fir_coef[j];
- }
-
- memcpy(mem, data + AMRWB_SFR_SIZE_16k, HB_FIR_SIZE * sizeof(float));
-}
-#endif /* hb_fir_filter */
-
-/**
- * Update context state before the next subframe.
- */
-static void update_sub_state(AMRWBContext *ctx)
-{
- memmove(&ctx->excitation_buf[0], &ctx->excitation_buf[AMRWB_SFR_SIZE],
- (AMRWB_P_DELAY_MAX + LP_ORDER + 1) * sizeof(float));
-
- memmove(&ctx->pitch_gain[1], &ctx->pitch_gain[0], 5 * sizeof(float));
- memmove(&ctx->fixed_gain[1], &ctx->fixed_gain[0], 1 * sizeof(float));
-
- memmove(&ctx->samples_az[0], &ctx->samples_az[AMRWB_SFR_SIZE],
- LP_ORDER * sizeof(float));
- memmove(&ctx->samples_up[0], &ctx->samples_up[AMRWB_SFR_SIZE],
- UPS_MEM_SIZE * sizeof(float));
- memmove(&ctx->samples_hb[0], &ctx->samples_hb[AMRWB_SFR_SIZE_16k],
- LP_ORDER_16k * sizeof(float));
-}
-
-static int amrwb_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame_ptr, AVPacket *avpkt)
-{
- AMRWBChannelsContext *s = avctx->priv_data;
- const uint8_t *buf = avpkt->data;
- int buf_size = avpkt->size;
- int sub, i, ret;
-
- /* get output buffer */
- frame->nb_samples = 4 * AMRWB_SFR_SIZE_16k;
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
-
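-    /* Multi-channel packets carry one complete AMR-WB frame per channel,
-     * stored back to back; decode each into its own output plane */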
- for (int ch = 0; ch < avctx->ch_layout.nb_channels; ch++) {
- AMRWBContext *ctx = &s->ch[ch];
- AMRWBFrame *cf = &ctx->frame;
- int expected_fr_size, header_size;
- float spare_vector[AMRWB_SFR_SIZE]; // extra stack space to hold result from anti-sparseness processing
- float fixed_gain_factor; // fixed gain correction factor (gamma)
- float *synth_fixed_vector; // pointer to the fixed vector that synthesis should use
- float synth_fixed_gain; // the fixed gain that synthesis should use
- float voice_fac, stab_fac; // parameters used for gain smoothing
- float synth_exc[AMRWB_SFR_SIZE]; // post-processed excitation for synthesis
- float hb_exc[AMRWB_SFR_SIZE_16k]; // excitation for the high frequency band
- float hb_samples[AMRWB_SFR_SIZE_16k]; // filtered high-band samples from synthesis
- float hb_gain;
- float *buf_out = (float *)frame->extended_data[ch];
-
- header_size = decode_mime_header(ctx, buf);
- expected_fr_size = ((cf_sizes_wb[ctx->fr_cur_mode] + 7) >> 3) + 1;
-
- if (!ctx->fr_quality)
- av_log(avctx, AV_LOG_ERROR, "Encountered a bad or corrupted frame\n");
-
- if (ctx->fr_cur_mode == NO_DATA || !ctx->fr_quality) {
- /* The specification suggests a "random signal" and
- "a muting technique" to "gradually decrease the output level". */
- av_samples_set_silence(&frame->extended_data[ch], 0, frame->nb_samples, 1, AV_SAMPLE_FMT_FLT);
- buf += expected_fr_size;
- buf_size -= expected_fr_size;
- continue;
- }
- if (ctx->fr_cur_mode > MODE_SID) {
- av_log(avctx, AV_LOG_ERROR,
- "Invalid mode %d\n", ctx->fr_cur_mode);
- return AVERROR_INVALIDDATA;
- }
-
- if (buf_size < expected_fr_size) {
- av_log(avctx, AV_LOG_ERROR,
- "Frame too small (%d bytes). Truncated file?\n", buf_size);
- *got_frame_ptr = 0;
- return AVERROR_INVALIDDATA;
- }
-
- if (ctx->fr_cur_mode == MODE_SID) { /* Comfort noise frame */
- avpriv_request_sample(avctx, "SID mode");
- return AVERROR_PATCHWELCOME;
- }
-
- ff_amr_bit_reorder((uint16_t *) &ctx->frame, sizeof(AMRWBFrame),
- buf + header_size, amr_bit_orderings_by_mode[ctx->fr_cur_mode]);
-
- /* Decode the quantized ISF vector */
- if (ctx->fr_cur_mode == MODE_6k60) {
- decode_isf_indices_36b(cf->isp_id, ctx->isf_cur);
- } else {
- decode_isf_indices_46b(cf->isp_id, ctx->isf_cur);
- }
-
- isf_add_mean_and_past(ctx->isf_cur, ctx->isf_q_past);
- ff_set_min_dist_lsf(ctx->isf_cur, MIN_ISF_SPACING, LP_ORDER - 1);
-
- stab_fac = stability_factor(ctx->isf_cur, ctx->isf_past_final);
-
- ctx->isf_cur[LP_ORDER - 1] *= 2.0;
- ff_acelp_lsf2lspd(ctx->isp[3], ctx->isf_cur, LP_ORDER);
-
- /* Generate a ISP vector for each subframe */
- if (ctx->first_frame) {
- ctx->first_frame = 0;
- memcpy(ctx->isp_sub4_past, ctx->isp[3], LP_ORDER * sizeof(double));
- }
- interpolate_isp(ctx->isp, ctx->isp_sub4_past);
-
- for (sub = 0; sub < 4; sub++)
- ff_amrwb_lsp2lpc(ctx->isp[sub], ctx->lp_coef[sub], LP_ORDER);
-
- for (sub = 0; sub < 4; sub++) {
- const AMRWBSubFrame *cur_subframe = &cf->subframe[sub];
- float *sub_buf = buf_out + sub * AMRWB_SFR_SIZE_16k;
-
- /* Decode adaptive codebook (pitch vector) */
- decode_pitch_vector(ctx, cur_subframe, sub);
- /* Decode innovative codebook (fixed vector) */
- decode_fixed_vector(ctx->fixed_vector, cur_subframe->pul_ih,
- cur_subframe->pul_il, ctx->fr_cur_mode);
-
- pitch_sharpening(ctx, ctx->fixed_vector);
-
- decode_gains(cur_subframe->vq_gain, ctx->fr_cur_mode,
- &fixed_gain_factor, &ctx->pitch_gain[0]);
-
- ctx->fixed_gain[0] =
- ff_amr_set_fixed_gain(fixed_gain_factor,
- ctx->celpm_ctx.dot_productf(ctx->fixed_vector,
- ctx->fixed_vector,
- AMRWB_SFR_SIZE) /
- AMRWB_SFR_SIZE,
- ctx->prediction_error,
- ENERGY_MEAN, energy_pred_fac);
-
- /* Calculate voice factor and store tilt for next subframe */
- voice_fac = voice_factor(ctx->pitch_vector, ctx->pitch_gain[0],
- ctx->fixed_vector, ctx->fixed_gain[0],
- &ctx->celpm_ctx);
- ctx->tilt_coef = voice_fac * 0.25 + 0.25;
-
- /* Construct current excitation */
- for (i = 0; i < AMRWB_SFR_SIZE; i++) {
- ctx->excitation[i] *= ctx->pitch_gain[0];
- ctx->excitation[i] += ctx->fixed_gain[0] * ctx->fixed_vector[i];
- ctx->excitation[i] = truncf(ctx->excitation[i]);
- }
-
- /* Post-processing of excitation elements */
- synth_fixed_gain = noise_enhancer(ctx->fixed_gain[0], &ctx->prev_tr_gain,
- voice_fac, stab_fac);
-
- synth_fixed_vector = anti_sparseness(ctx, ctx->fixed_vector,
- spare_vector);
-
- pitch_enhancer(synth_fixed_vector, voice_fac);
-
- synthesis(ctx, ctx->lp_coef[sub], synth_exc, synth_fixed_gain,
- synth_fixed_vector, &ctx->samples_az[LP_ORDER]);
-
- /* Synthesis speech post-processing */
- de_emphasis(&ctx->samples_up[UPS_MEM_SIZE],
- &ctx->samples_az[LP_ORDER], PREEMPH_FAC, ctx->demph_mem);
-
- ctx->acelpf_ctx.acelp_apply_order_2_transfer_function(&ctx->samples_up[UPS_MEM_SIZE],
- &ctx->samples_up[UPS_MEM_SIZE], hpf_zeros, hpf_31_poles,
- hpf_31_gain, ctx->hpf_31_mem, AMRWB_SFR_SIZE);
-
- upsample_5_4(sub_buf, &ctx->samples_up[UPS_FIR_SIZE],
- AMRWB_SFR_SIZE_16k, &ctx->celpm_ctx);
-
- /* High frequency band (6.4 - 7.0 kHz) generation part */
- ctx->acelpf_ctx.acelp_apply_order_2_transfer_function(hb_samples,
- &ctx->samples_up[UPS_MEM_SIZE], hpf_zeros, hpf_400_poles,
- hpf_400_gain, ctx->hpf_400_mem, AMRWB_SFR_SIZE);
-
- hb_gain = find_hb_gain(ctx, hb_samples,
- cur_subframe->hb_gain, cf->vad);
-
- scaled_hb_excitation(ctx, hb_exc, synth_exc, hb_gain);
-
- hb_synthesis(ctx, sub, &ctx->samples_hb[LP_ORDER_16k],
- hb_exc, ctx->isf_cur, ctx->isf_past_final);
-
- /* High-band post-processing filters */
- hb_fir_filter(hb_samples, bpf_6_7_coef, ctx->bpf_6_7_mem,
- &ctx->samples_hb[LP_ORDER_16k]);
-
- if (ctx->fr_cur_mode == MODE_23k85)
- hb_fir_filter(hb_samples, lpf_7_coef, ctx->lpf_7_mem,
- hb_samples);
-
- /* Add the low and high frequency bands */
- for (i = 0; i < AMRWB_SFR_SIZE_16k; i++)
- sub_buf[i] = (sub_buf[i] + hb_samples[i]) * (1.0f / (1 << 15));
-
- /* Update buffers and history */
- update_sub_state(ctx);
- }
-
- /* update state for next frame */
- memcpy(ctx->isp_sub4_past, ctx->isp[3], LP_ORDER * sizeof(ctx->isp[3][0]));
- memcpy(ctx->isf_past_final, ctx->isf_cur, LP_ORDER * sizeof(float));
-
- buf += expected_fr_size;
- buf_size -= expected_fr_size;
- }
-
- *got_frame_ptr = 1;
-
- return buf - avpkt->data;
-}
-
-const FFCodec ff_amrwb_decoder = {
- .p.name = "amrwb",
- CODEC_LONG_NAME("AMR-WB (Adaptive Multi-Rate WideBand)"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_AMR_WB,
- .priv_data_size = sizeof(AMRWBChannelsContext),
- .init = amrwb_decode_init,
- FF_CODEC_DECODE_CB(amrwb_decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_CHANNEL_CONF,
- .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_FLTP,
- AV_SAMPLE_FMT_NONE },
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_data.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_data.c
deleted file mode 100644
index 1633a41c136f0403175e8c65d88d4228a6a272af..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_data.c
+++ /dev/null
@@ -1,75 +0,0 @@
-/*
- * HEVC shared tables
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-
-#include "hevc_data.h"
-
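-/* x and y coordinates of the i-th coefficient in the up-right diagonal
- * scan order, as derived in the HEVC specification */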
-const uint8_t ff_hevc_diag_scan4x4_x[16] = {
- 0, 0, 1, 0,
- 1, 2, 0, 1,
- 2, 3, 1, 2,
- 3, 2, 3, 3,
-};
-
-const uint8_t ff_hevc_diag_scan4x4_y[16] = {
- 0, 1, 0, 2,
- 1, 0, 3, 2,
- 1, 0, 3, 2,
- 1, 3, 2, 3,
-};
-
-const uint8_t ff_hevc_diag_scan8x8_x[64] = {
- 0, 0, 1, 0,
- 1, 2, 0, 1,
- 2, 3, 0, 1,
- 2, 3, 4, 0,
- 1, 2, 3, 4,
- 5, 0, 1, 2,
- 3, 4, 5, 6,
- 0, 1, 2, 3,
- 4, 5, 6, 7,
- 1, 2, 3, 4,
- 5, 6, 7, 2,
- 3, 4, 5, 6,
- 7, 3, 4, 5,
- 6, 7, 4, 5,
- 6, 7, 5, 6,
- 7, 6, 7, 7,
-};
-
-const uint8_t ff_hevc_diag_scan8x8_y[64] = {
- 0, 1, 0, 2,
- 1, 0, 3, 2,
- 1, 0, 4, 3,
- 2, 1, 0, 5,
- 4, 3, 2, 1,
- 0, 6, 5, 4,
- 3, 2, 1, 0,
- 7, 6, 5, 4,
- 3, 2, 1, 0,
- 7, 6, 5, 4,
- 3, 2, 1, 7,
- 6, 5, 4, 3,
- 2, 7, 6, 5,
- 4, 3, 7, 6,
- 5, 4, 7, 6,
- 5, 7, 6, 7,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/idctdsp_lasx.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/idctdsp_lasx.c
deleted file mode 100644
index 1cfab0e028216f0f86fb63dbc228a05bcb8c5379..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/loongarch/idctdsp_lasx.c
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * Copyright (c) 2021 Loongson Technology Corporation Limited
- * Contributed by Hao Chen
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "idctdsp_loongarch.h"
-#include "libavutil/loongarch/loongson_intrinsics.h"
-
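-/* Store an 8x8 block of 16-bit coefficients as bytes, clamped to [0, 255]. */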
-void ff_put_pixels_clamped_lasx(const int16_t *block,
- uint8_t *av_restrict pixels,
- ptrdiff_t stride)
-{
- __m256i b0, b1, b2, b3;
- __m256i temp0, temp1;
- ptrdiff_t stride_2x = stride << 1;
- ptrdiff_t stride_4x = stride << 2;
- ptrdiff_t stride_3x = stride_2x + stride;
-
- DUP4_ARG2(__lasx_xvld, block, 0, block, 32, block, 64, block, 96,
- b0, b1, b2, b3);
- DUP4_ARG1(__lasx_xvclip255_h, b0, b1, b2, b3, b0, b1, b2, b3);
- DUP2_ARG2(__lasx_xvpickev_b, b1, b0, b3, b2, temp0, temp1);
- __lasx_xvstelm_d(temp0, pixels, 0, 0);
- __lasx_xvstelm_d(temp0, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp0, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp0, pixels + stride_3x, 0, 3);
- pixels += stride_4x;
- __lasx_xvstelm_d(temp1, pixels, 0, 0);
- __lasx_xvstelm_d(temp1, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp1, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp1, pixels + stride_3x, 0, 3);
-}
-
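-/* As above, but for signed coefficients: bias by 128 before clamping. */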
-void ff_put_signed_pixels_clamped_lasx(const int16_t *block,
- uint8_t *av_restrict pixels,
- ptrdiff_t stride)
-{
- __m256i b0, b1, b2, b3;
- __m256i temp0, temp1;
- __m256i const_128 = {0x0080008000800080, 0x0080008000800080,
- 0x0080008000800080, 0x0080008000800080};
- ptrdiff_t stride_2x = stride << 1;
- ptrdiff_t stride_4x = stride << 2;
- ptrdiff_t stride_3x = stride_2x + stride;
-
- DUP4_ARG2(__lasx_xvld, block, 0, block, 32, block, 64, block, 96,
- b0, b1, b2, b3);
- DUP4_ARG2(__lasx_xvadd_h, b0, const_128, b1, const_128, b2, const_128,
- b3, const_128, b0, b1, b2, b3);
- DUP4_ARG1(__lasx_xvclip255_h, b0, b1, b2, b3, b0, b1, b2, b3);
- DUP2_ARG2(__lasx_xvpickev_b, b1, b0, b3, b2, temp0, temp1);
- __lasx_xvstelm_d(temp0, pixels, 0, 0);
- __lasx_xvstelm_d(temp0, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp0, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp0, pixels + stride_3x, 0, 3);
- pixels += stride_4x;
- __lasx_xvstelm_d(temp1, pixels, 0, 0);
- __lasx_xvstelm_d(temp1, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp1, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp1, pixels + stride_3x, 0, 3);
-}
-
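-/* Add an 8x8 block of 16-bit coefficients to the existing pixels and clamp to [0, 255]. */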
-void ff_add_pixels_clamped_lasx(const int16_t *block,
- uint8_t *av_restrict pixels,
- ptrdiff_t stride)
-{
- __m256i b0, b1, b2, b3;
- __m256i p0, p1, p2, p3, p4, p5, p6, p7;
- __m256i temp0, temp1, temp2, temp3;
- uint8_t *pix = pixels;
- ptrdiff_t stride_2x = stride << 1;
- ptrdiff_t stride_4x = stride << 2;
- ptrdiff_t stride_3x = stride_2x + stride;
-
- DUP4_ARG2(__lasx_xvld, block, 0, block, 32, block, 64, block, 96,
- b0, b1, b2, b3);
- p0 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p1 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p2 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p3 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p4 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p5 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p6 = __lasx_xvldrepl_d(pix, 0);
- pix += stride;
- p7 = __lasx_xvldrepl_d(pix, 0);
- DUP4_ARG3(__lasx_xvpermi_q, p1, p0, 0x20, p3, p2, 0x20, p5, p4, 0x20,
- p7, p6, 0x20, temp0, temp1, temp2, temp3);
- DUP4_ARG2(__lasx_xvaddw_h_h_bu, b0, temp0, b1, temp1, b2, temp2, b3, temp3,
- temp0, temp1, temp2, temp3);
- DUP4_ARG1(__lasx_xvclip255_h, temp0, temp1, temp2, temp3,
- temp0, temp1, temp2, temp3);
- DUP2_ARG2(__lasx_xvpickev_b, temp1, temp0, temp3, temp2, temp0, temp1);
- __lasx_xvstelm_d(temp0, pixels, 0, 0);
- __lasx_xvstelm_d(temp0, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp0, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp0, pixels + stride_3x, 0, 3);
- pixels += stride_4x;
- __lasx_xvstelm_d(temp1, pixels, 0, 0);
- __lasx_xvstelm_d(temp1, pixels + stride, 0, 2);
- __lasx_xvstelm_d(temp1, pixels + stride_2x, 0, 1);
- __lasx_xvstelm_d(temp1, pixels + stride_3x, 0, 3);
-}
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Chapters Interactive Stories MOD APK and Get Free Premium Choices.md b/spaces/congsaPfin/Manga-OCR/logs/Download Chapters Interactive Stories MOD APK and Get Free Premium Choices.md
deleted file mode 100644
index db7842bb22f746d1eafa5f56c74f1fe9f8b81935..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Chapters Interactive Stories MOD APK and Get Free Premium Choices.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
Chapters: Interactive Stories Mod Apk Free Premium Choices
-
Do you love reading interactive stories that let you choose your own path and outcome? Do you want to experience different genres of stories, from romance to thriller, from comedy to fantasy? Do you wish you had unlimited resources to make the best choices and unlock all the stories in the game? If you answered yes to any of these questions, then you might be interested in Chapters: Interactive Stories Mod Apk.
-
Chapters: Interactive Stories is a mobile app that allows users to play and participate in interactive stories. Some of the features of the app include:
-
-
Choice-driven stories: Players can make choices that impact the direction of the story and the outcome. This allows for multiple storylines and endings.
-
Various genres of stories: Chapters: Interactive Stories contains lots of stories to play. The genres available include sci-fi, fantasy, romance, drama, comedy, young adult, and more. Users can also create their own stories and publish them on the app.
-
Customization options: Players can customize their main character's name, appearance, and style to reflect their personality. They can also choose different outfits and accessories for different occasions.
-
Award-winning authors: The stories in Chapters: Interactive Stories are written by top authors in their respective genres. Some of them are New York Times bestsellers, USA Today bestsellers, and Wall Street Journal bestsellers.
-
-
What is a mod apk?
-
A mod apk is a modified version of an original app that has been altered to provide additional features or benefits that are not available in the official version. A mod apk can also remove some limitations or restrictions that are imposed by the original app.
-
What are the benefits of using a mod apk for Chapters: Interactive Stories?
-
Using a mod apk for Chapters: Interactive Stories can give you several advantages that can enhance your gaming experience. Some of these benefits are:
-
-
-
Unlimited diamonds and tickets: Diamonds and tickets are the main currencies in Chapters: Interactive Stories. They are used to make premium choices, unlock new stories, and buy outfits and accessories. However, they are limited and hard to earn in the game. With a mod apk, you can get unlimited diamonds and tickets without spending any real money.
-
Access to all stories and genres: Some stories and genres in Chapters: Interactive Stories are locked or require a certain amount of diamonds or tickets to access. With a mod apk, you can bypass these requirements and access all the stories and genres in the game.
-
No ads and no root required: Ads can be annoying and distracting when you are playing an interactive story. They can also consume your data and battery. With a mod apk, you can remove all the ads from the game and enjoy a smooth and uninterrupted gameplay. Moreover, you don't need to root your device to use a mod apk for Chapters: Interactive Stories.
-
-
How to Download and Install Chapters: Interactive Stories Mod Apk
-
If you want to download and install Chapters: Interactive Stories Mod Apk, you need to follow these simple steps:
-
-
Enable unknown sources on your device: To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and enable it. This will allow you to install apps from sources other than the Google Play Store.
-
Download the mod apk file from a trusted source: You can search for Chapters: Interactive Stories Mod Apk on the internet and find a reliable website that offers the download link. Make sure you check the reviews and ratings of the website before downloading the file. Alternatively, you can use this link to download the mod apk file: [text].
-
Locate and install the mod apk file on your device: Once you have downloaded the mod apk file, you need to find it on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
-
Launch the app and enjoy the game: After the installation is done, you can open the app and start playing Chapters: Interactive Stories with all the mod features. You can choose any story you want, make any choice you want, and customize your character as you want.
-
-
Conclusion
-
Chapters: Interactive Stories is a fun and engaging app that lets you play interactive stories in various genres. You can make choices that affect the story and the outcome, customize your character, and create your own stories. However, if you want to enjoy the game to the fullest, you might want to use a mod apk that gives you unlimited diamonds and tickets, access to all stories and genres, no ads, and no root required. To use a mod apk for Chapters: Interactive Stories, you just need to download and install it on your device following some simple steps. Then, you can launch the app and enjoy playing your favorite stories with premium choices.
-
If you liked this article, please share it with your friends and leave a comment below. Also, don't forget to check out our other articles on mod apks for popular games and apps. Thank you for reading!
-
FAQs
-
Q1: Is Chapters: Interactive Stories Mod Apk safe to use?
-
A1: Yes, Chapters: Interactive Stories Mod Apk is safe to use as long as you download it from a trusted source. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.
-
Q2: How can I update Chapters: Interactive Stories Mod Apk?
-
A2: To update Chapters: Interactive Stories Mod Apk, you need to download the latest version of the mod apk file from the same source where you downloaded it before. Then, you need to uninstall the previous version of the app from your device and install the new version following the same steps as before.
-
Q3: Can I play Chapters: Interactive Stories Mod Apk offline?
-
A3: No, Chapters: Interactive Stories Mod Apk requires an internet connection to play. You need to connect to the internet to access the stories and make choices.
-
Q4: What are some of the best stories to play in Chapters: Interactive Stories Mod Apk?
-
A4: Some of the best stories to play in Chapters: Interactive Stories Mod Apk are:
-
-
The Royal Romance: A romantic story where you get to choose between three handsome princes who are vying for your heart.
-
The Billionaire Bachelors: A steamy story where you get to date four sexy billionaires who have different personalities and preferences.
-
The Academy: A mystery story where you get to join a prestigious academy that hides a dark secret.
-
Vampire Girl: A fantasy story where you get to enter a world of vampires and werewolves and fall in love with one of them.
-
Calendar Girl: A comedy story where you get to work as an escort for 12 different clients who have different needs and quirks.
-
-
Q5: How can I create my own story in Chapters: Interactive Stories?
-
A5: To create your own story in Chapters: Interactive Stories, you need to register as an author on the app. Then, you can use the story editor tool to write your story, add choices, customize characters, and upload images. You can also preview your story before publishing it on the app.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Fantasea APK Mod The Ultimate Seafaring Game with Customizable Battleships.md b/spaces/congsaPfin/Manga-OCR/logs/Fantasea APK Mod The Ultimate Seafaring Game with Customizable Battleships.md
deleted file mode 100644
index 49b596f770f73f92f01d9cd4ca5ea3541e12c982..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Fantasea APK Mod The Ultimate Seafaring Game with Customizable Battleships.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
Fantasea APK Mod: A Classic Seafaring Game with Unlimited Possibilities
-
If you are a fan of seafaring games, you might have heard of Fantasea, a role-playing game developed by Green Mushroom. In this game, you can design your own battleship, explore the ocean, dive for relics, and fight against sea creatures and other forces. But what if you want to enjoy the game without any limitations or restrictions? That's where Fantasea APK Mod comes in. In this article, we will tell you what Fantasea APK Mod is, what features it offers, how to download and install it, and what are its pros and cons. Read on to find out more.
-
What is Fantasea APK Mod?
-
Fantasea APK Mod is a modified version of the original Fantasea game that allows you to access all the features and content of the game for free. You don't need to spend any money or watch any ads to play the game. You can also enjoy unlimited resources, such as gold, gems, and energy, to upgrade your battleship and unlock new items. With Fantasea APK Mod, you can experience the full potential of the game and have more fun.
One of the main features of Fantasea APK Mod is that you can customize your own battleship from thousands of possible combinations. You can choose from different parts, such as sails, bows, weapons, and cannons, to create a unique ship that suits your style and strategy. You can also change the color and appearance of your ship to make it stand out from the rest.
-
Explore the ocean and discover treasures
-
Another feature of Fantasea APK Mod is that you can explore the ocean and discover treasures hidden in various islands. You can use your sea chart to navigate through different regions and encounter different challenges and events. You can also collect resources and materials from the islands to craft new items and improve your ship.
-
Dive into the depths and hunt for relics
-
A third feature of Fantasea APK Mod is that you can dive into the depths and hunt for relics from ancient civilizations. You can use your diving equipment to explore underwater caves and ruins and find rare artifacts and secrets. You can also face dangerous sea creatures and monsters that guard the relics and try to stop you.
-
Fight against sea creatures and other forces
-
A fourth feature of Fantasea APK Mod is that you can fight against sea creatures and other forces that threaten the ocean. You can use your weapons and cannons to attack your enemies and defend your ship. You can also use special skills and abilities to gain an advantage in battle. You can also join forces with other players online and cooperate or compete with them in various modes.
-
How to download and install Fantasea APK Mod
-
Download the APK file from a trusted source
-
The first step to download and install Fantasea APK Mod is to find a reliable source that offers the latest version of the modded file. You can search online for websites that provide free downloads of Fantasea APK Mod or use this link to get it directly.
-
Enable unknown sources on your device
-
The second step to download and install Fantasea APK Mod is to enable unknown sources on your device. This will allow you to install apps that are not from the official Google Play Store. To do this, follow these steps:
-
-
-
Go to your device's settings and tap on security.
-
Find the option that says unknown sources and toggle it on.
-
Confirm your choice by tapping on OK.
-
-
Install the APK file and launch the game
-
The third and final step to download and install Fantasea APK Mod is to install the APK file and launch the game. To do this, follow these steps:
-
-
Locate the downloaded APK file on your device's storage and tap on it.
-
Follow the instructions on the screen to install the app.
-
Once the installation is complete, tap on open to launch the game.
-
Enjoy playing Fantasea APK Mod with unlimited features and resources.
-
-
Pros and cons of Fantasea APK Mod
-
Pros
-
Some of the advantages of using Fantasea APK Mod are:
-
-
You can access all the features and content of the game for free.
-
You can enjoy unlimited resources, such as gold, gems, and energy, to upgrade your battleship and unlock new items.
-
You can customize your own battleship from thousands of possible combinations.
-
You can explore the ocean and discover treasures hidden in various islands.
-
You can dive into the depths and hunt for relics from ancient civilizations.
-
You can fight against sea creatures and other forces that threaten the ocean.
-
You can join forces with other players online and cooperate or compete with them in various modes.
-
-
Cons
-
Some of the disadvantages of using Fantasea APK Mod are:
-
-
You may encounter some bugs and glitches that affect the game performance and stability.
-
You may risk getting banned from the game if you use the modded file in online modes.
-
You may not be able to update the game to the latest version without losing the modded features and resources.
-
You may compromise your device's security and privacy by installing apps from unknown sources.
-
-
Conclusion
-
Fantasea APK Mod is a modified version of the original Fantasea game that allows you to access all the features and content of the game for free. You can also enjoy unlimited resources, such as gold, gems, and energy, to upgrade your battleship and unlock new items. With Fantasea APK Mod, you can experience the full potential of the game and have more fun. However, you should also be aware of the risks and drawbacks of using Fantasea APK Mod, such as bugs, glitches, bans, updates, and security issues. Therefore, you should use Fantasea APK Mod at your own discretion and responsibility. We hope this article has helped you learn more about Fantasea APK Mod and how to download and install it. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about Fantasea APK Mod:
-
-
What is Fantasea?
-
Fantasea is a role-playing game developed by Green Mushroom that lets you design your own battleship, explore the ocean, dive for relics, and fight against sea creatures and other forces.
-
What is Fantasea APK Mod?
-
Fantasea APK Mod is a modified version of the original Fantasea game that allows you to access all the features and content of the game for free. You can also enjoy unlimited resources, such as gold, gems, and energy, to upgrade your battleship and unlock new items.
-
How to download and install Fantasea APK Mod?
-
To download and install Fantasea APK Mod, you need to follow these steps:
-
-
Download the APK file from a trusted source or use this link .
-
Enable unknown sources on your device by going to settings > security > unknown sources > toggle on > OK.
-
Install the APK file by locating it on your device's storage > tapping on it > following the instructions > open.
-
-
What are the pros and cons of Fantasea APK Mod?
-
The pros of Fantasea APK Mod are that you can access all the features and content of the game for free, enjoy unlimited resources, customize your own battleship, explore the ocean, dive for relics, fight against sea creatures and other forces, and join forces with other players online. The cons of Fantasea APK Mod are that you may encounter some bugs and glitches, risk getting banned from the game, not be able to update the game, and compromise your device's security and privacy by installing apps from unknown sources.
-
Is Fantasea APK Mod safe to use?
-
Fantasea APK Mod is not an official app from the game developer, so it may not be safe to use. You may expose your device to malware, viruses, spyware, or other harmful software by installing apps from unknown sources. You may also violate the game's terms of service and risk getting banned from the game if you use the modded file in online modes. Therefore, you should use Fantasea APK Mod at your own discretion and responsibility.
-
Is Fantasea APK Mod compatible with my device?
-
Fantasea APK Mod is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the modded file or may experience some issues with the game performance and stability. You can check the compatibility of your device by reading the app's description and reviews before downloading and installing it.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How Wanasema by 20 Percent ft Mr Blue and Ebl Drucula Became the Most Popular Bongo Flava Song - Download Now.md b/spaces/congsaPfin/Manga-OCR/logs/How Wanasema by 20 Percent ft Mr Blue and Ebl Drucula Became the Most Popular Bongo Flava Song - Download Now.md
deleted file mode 100644
index 3cbc840f03069ed793ded25870f76bba61f335c6..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How Wanasema by 20 Percent ft Mr Blue and Ebl Drucula Became the Most Popular Bongo Flava Song - Download Now.md
+++ /dev/null
@@ -1,200 +0,0 @@
-
-
How to Download 20 Percent ft Mr Blue - A Guide for Tanzanian Music Lovers
-
If you are a fan of Tanzanian music, you have probably heard of 20 Percent ft Mr Blue, one of the hottest songs in the country right now. The catchy tune, featuring two of the most talented and respected artists in the industry, has been topping the charts and trending on social media since its release in 2020. But how can you download this song legally and safely, without risking your device or breaking the law? In this article, we will show you how to do just that, as well as give you some background information on the song and its creators. Read on to find out more.
20 Percent ft Mr Blue is a collaboration between two Tanzanian musicians, 20 Percent (also known as 20 Power) and Mr Blue (also known as Byser). The song is a fusion of bongo flava, a genre of Tanzanian urban music that blends hip hop, R&B, reggae, dancehall, and traditional elements, and taarab, a genre of sung poetry that originated in Zanzibar and is influenced by Arabic, Indian, and Swahili cultures. The song is about overcoming challenges and criticism, staying true to oneself, and expressing one's feelings through music.
-
Downloading music legally and safely is important for several reasons. First, it supports the artists who work hard to create their music and deserve to be compensated for their efforts. Second, it protects your device from malware, viruses, spyware, and other harmful software that can damage your data or compromise your security. Third, it respects the intellectual property rights of the creators and avoids legal consequences such as fines or lawsuits.
-
In this article, we will cover three main ways to download 20 Percent ft Mr Blue legally and safely: buying music on desktop or mobile devices, using free music sites or blogs, and using YouTube or other streaming services. We will also provide you with some tips and tricks to make the most out of your download experience.
-
What is 20 Percent ft Mr Blue?
-
Who are 20 Percent and Mr Blue?
-
20 Percent is a Tanzanian rapper, singer, songwriter, producer, and entrepreneur who rose to fame in the early 2000s with his hit songs such as Ya Nini Malumbano, Tamaa Mbaya, Nyerere Nyerere, Bangi Bangi, Sauti Yangu, Nyumba Ya Milele,
Wanakwenda, and many more. He is known for his versatile and unique style, blending different genres and languages, and addressing social and political issues in his lyrics. He has won several awards, including the Kilimanjaro Music Awards, the Tanzania Music Awards, and the East Africa Music Awards. He is also the founder and CEO of 20 Percent Entertainment, a record label and media company that promotes and supports upcoming artists.
-
Mr Blue is a Tanzanian rapper, singer, songwriter, and actor who started his career in the late 1990s as a member of the group East Coast Team. He later went solo and released his debut album Mr Blue in 2001, followed by other albums such as The Voice, Blue Forever, The Legend, and Money Maker. He is regarded as one of the pioneers and legends of bongo flava, and has collaborated with many other artists such as AY, Lady Jaydee, Mwana FA, Ali Kiba, Diamond Platnumz, Nandy, Rosa Ree, and more. He has also appeared in several movies and TV shows, such as Bongo Star Search, Bongo Movie Awards, and Nipe Nafasi.
-
What is the song about?
-
20 Percent ft Mr Blue is a song that celebrates the power of music and the passion of the artists. The song is divided into three verses, each sung by one of the artists, and a chorus that is repeated four times. The song starts with 20 Percent singing about how he has faced many challenges and criticism in his musical journey, but he has never given up or changed his style. He says that music is his life and his voice, and he expresses his feelings through it. He also praises Mr Blue for being his friend and mentor, and for inspiring him to keep going.
-
The chorus is sung by both artists, and it is a catchy hook that invites the listeners to join them in their musical celebration. They say that they are not afraid of anyone or anything, and they are ready to face any obstacle or enemy. They also say that they are proud of their music and their identity, and they are not ashamed of their roots or their culture. They say that they are here to stay and to make history.
-
-
The second verse is sung by Mr Blue, who echoes 20 Percent's sentiments and adds his own perspective. He says that he has been in the game for a long time, and he has seen many changes and trends in the industry. He says that he has always stayed true to himself and his fans, and he has never compromised his quality or integrity. He says that he loves music more than anything else, and he is grateful for the opportunities and recognition that he has received. He also acknowledges 20 Percent for being his brother and partner, and for sharing his vision and passion.
-
The third verse is sung by both artists together, who rap in a fast-paced and energetic style. They say that they are unstoppable and unbeatable, and they challenge anyone who doubts them or tries to stop them. They say that they are confident and talented, and they have nothing to prove or lose. They say that they are happy and satisfied with their music and their lives, and they are not interested in fame or money. They say that they are loyal to their fans and their country, and they are ready to represent Tanzania on the world stage.
-
How did the song perform on the charts and social media?
-
20 Percent ft Mr Blue was an instant hit among Tanzanian music lovers, as well as fans from other African countries and beyond. The song reached the number one spot on several Tanzanian radio and TV charts, such as Clouds FM, Radio One, TBC, and EATV. It also received positive reviews from critics and fans alike, who praised the song's production, lyrics, message, and performance. The song also generated a lot of buzz on social media platforms, such as Twitter, Instagram, Facebook, and TikTok, where users shared their reactions, opinions, videos, and memes about the song. The song also inspired many covers, remixes, and parodies by other artists and fans.
-
How to Download 20 Percent ft Mr Blue Legally and Safely?
-
Now that you know more about the song and its creators, you might be wondering how to download it to your device so that you can enjoy it anytime and anywhere. There are many ways to do that, but not all of them are legal or safe. Some methods might expose you to malware, viruses, spyware, or other harmful software that can damage your device or compromise your security. Some methods might also violate the intellectual property rights of the artists and expose you to legal consequences such as fines or lawsuits. Therefore, it is important to choose a method that is legal and safe, and that respects the artists and their work. Here are three main ways to download 20 Percent ft Mr Blue legally and safely:
-
Buying Music on Desktop or Mobile Devices
-
One of the easiest and safest ways to download 20 Percent ft Mr Blue is to buy it from an online music store or platform, such as iTunes or Google Play. These platforms allow you to purchase and download the song for a small fee, usually less than a dollar. You can also buy the whole album or other songs by the same artists if you want. Buying music from these platforms has several advantages:
-
-
You support the artists financially and help them continue making music.
-
You get a high-quality audio file that you can play on any device.
-
You get access to additional features such as lyrics, artwork, metadata, etc.
-
You avoid malware, viruses, spyware, or other harmful software that can damage your device or compromise your security.
-
You respect the intellectual property rights of the artists and avoid legal consequences such as fines or lawsuits.
-
-
To buy music from these platforms, you need to have an account and a payment method (such as a credit card or a mobile money service). You also need to have enough storage space on your device to save the file. Here are the steps to buy 20 Percent ft Mr Blue from iTunes or Google Play:
-
-
-
iTunes
-
Google Play
-
-
-
-
-
Open the iTunes app on your desktop or mobile device.
-
Search for 20 Percent ft Mr Blue in the search bar.
-
Select the song from the results and click on the price button.
-
Enter your Apple ID and password if prompted.
-
Confirm your purchase and wait for the download to complete.
-
Enjoy your song!
-
-
-
-
-
Open the Google Play app on your desktop or mobile device.
-
Search for 20 Percent ft Mr Blue in the search bar.
-
Select the song from the results and click on the price button.
-
Enter your Google account and password if prompted.
-
Choose your payment method and confirm your purchase.
-
Wait for the download to complete.
-
Enjoy your song!
-
-
-
-
Using Free Music Sites or Blogs
-
Another way to download 20 Percent ft Mr Blue is to use free music sites or blogs that offer the song for download. These sites or blogs are usually run by fans or music enthusiasts who want to share their favorite songs with others. They usually provide a link to download the song as an MP3 file, sometimes along with other information such as lyrics, artwork, metadata, etc. Using free music sites or blogs has some advantages:
-
-
You can download the song for free and save your money.
-
You can find and download other songs by the same artists or similar genres.
-
You can discover new music from Tanzania and other African countries.
-
-
However, using free music sites or blogs also has some disadvantages:
-
-
You might not support the artists financially and affect their income and livelihood.
-
You might get a low-quality audio file that has poor sound or glitches.
-
You might encounter malware, viruses, spyware, or other harmful software that can damage your device or compromise your security.
-
You might violate the intellectual property rights of the artists and face legal consequences such as fines or lawsuits.
-
-
To use free music sites or blogs, you need to have a reliable internet connection and a browser that can access the site or blog. You also need to have enough storage space on your device to save the file. Here are some tips to use free music sites or blogs safely and legally:
-
-
Choose a reputable and legal site or blog that has positive reviews and feedback from other users.
-
Check the source and quality of the file before downloading it. Avoid files that have suspicious names, extensions, sizes, or dates.
-
Scan the file with an antivirus or anti-malware software before opening it. Delete the file if it contains any threats or errors.
-
Acknowledge and respect the artists and their work. Give them credit and appreciation for their music. Do not claim the music as your own or use it for commercial purposes without their permission.
-
-
Using YouTube or Other Streaming Services
-
A third way to download 20 Percent ft Mr Blue is to use YouTube or other streaming services that offer the song for streaming or download. These services allow you to listen to the song online or offline, depending on your preference and subscription. They also provide you with other features such as lyrics, artwork, metadata, playlists, recommendations, etc. Using YouTube or other streaming services has some advantages:
-
-
You can stream or download the song in high-quality audio and video formats.
-
You can access a large library of music from different artists, genres, countries, and eras.
-
You can enjoy other content related to the song, such as official videos, live performances, interviews, behind-the-scenes, etc.
-
-
However, using YouTube or other streaming services also has some disadvantages:
-
-
You might need to pay a subscription fee to access some features or content.
-
You might need a stable internet connection and enough data to stream or download the song.
-
You might not be able to save the song as an MP3 file on your device without using converters or third-party apps.
-
-
To use YouTube or other streaming services, you need to have an account and a subscription (if required) for the service. You also need to have a compatible device and app that can play the song. Here are some steps to stream or download 20 Percent ft Mr Blue from YouTube:
-
-
Open the YouTube app on your device.
-
Search for 20 Percent ft Mr Blue in the search bar.
-
Select the song from the results and tap on it to play it.
-
If you want to stream the song online, you can enjoy it as long as you have an internet connection and data.
-
If you want to download the song offline, you need to have a YouTube Premium subscription. Tap on the download button below the video and choose your preferred quality. Wait for the download to complete and enjoy your song offline.
-
-
If you want to save the song as an MP3 file on your device, you need to use a converter or a third-party app that can extract the audio from the video. There are many online converters and apps available for this purpose, but not all of them are safe or legal. Be careful when choosing a converter or an app, and follow these tips (an example sketch follows the list):
-
-
Choose a reputable and legal converter or app that has positive reviews and feedback from other users.
-
Check the source and quality of the file before converting or downloading it. Avoid files that have suspicious names, extensions, sizes, or dates.
-
Scan the file with an antivirus or anti-malware software before opening it. Delete the file if it contains any threats or errors.
-
Acknowledge and respect the artists and their work. Give them credit and appreciation for their music. Do not claim the music as your own or use it for commercial purposes without their permission.
-
-
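As one concrete illustration of such a tool, the sketch below uses the open-source yt-dlp Python package together with FFmpeg to save the audio track of a video as an MP3 file. This is only a sketch under the assumption that yt-dlp and FFmpeg are installed; the URL is a placeholder, and you should only run it on videos you have the right to download, in keeping with the tips above.

```python
from yt_dlp import YoutubeDL  # pip install yt-dlp; FFmpeg must also be installed

options = {
    "format": "bestaudio/best",          # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",      # name the output file after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # convert the downloaded audio to MP3
        "preferredcodec": "mp3",
        "preferredquality": "192",
    }],
}

with YoutubeDL(options) as ydl:
    # Placeholder URL: replace it with a video you are allowed to download.
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```

A local command-line tool like this keeps everything on your own device, which avoids the pop-ups and redirects that many online converter websites show.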
Conclusion
-
20 Percent ft Mr Blue is a great song that showcases the talent and diversity of Tanzanian music. It is a song that you can enjoy and relate to, whether you are a fan of bongo flava, taarab, or both. It is also a song that you can download legally and safely, without risking your device or breaking the law. You can choose from three main ways to download the song: buying music on desktop or mobile devices, using free music sites or blogs, and using YouTube or other streaming services. Each method has its pros and cons, and you should choose the one that suits your preferences and needs. Whichever method you choose, make sure to follow the tips and tricks that we have provided to make the most out of your download experience.
-
We hope that this article has helped you learn more about 20 Percent ft Mr Blue and how to download it. We also hope that you enjoy the song and support the artists who created it. 20 Percent and Mr Blue are two of the best artists in Tanzania, and they deserve your recognition and appreciation. If you like their music, you can also check out their other songs, albums, videos, and projects. You can also explore other genres of Tanzanian music, such as bongo rap, singeli, kwaito, zouk, etc. Tanzania has a rich and vibrant musical culture that you can discover and enjoy.
-
Thank you for reading this article. If you have any questions or feedback, please feel free to share them with us in the comments section below. We would love to hear from you. Happy downloading!
-
FAQs
-
What are some other popular songs by 20 Percent and Mr Blue?
-
Some other popular songs by 20 Percent are:
-
-
Mama Neema
-
Nakupenda Sana
-
Maisha Ya Bongo
-
Nia Yako
-
Tukutane
-
-
Some other popular songs by Mr Blue are:
-
-
Baki na Mimi
-
Mboga Saba
-
Pesa
-
Baby
-
Mama La Mama
-
-
Where can I watch the official video of 20 Percent ft Mr Blue?
-
You can watch the official video of 20 Percent ft Mr Blue on YouTube or on the artists' official websites or social media pages. Here is the link to the YouTube video:
-
What are some other genres of Tanzanian music that I can explore?
-
Some other genres of Tanzanian music that you can explore are:
-
-
Bongo rap: A genre of hip hop that originated in Tanzania in the late 1980s and early 1990s. It features rapping in Swahili, English, or other languages over beats that incorporate traditional or modern elements. Some of the pioneers and legends of bongo rap are Fid Q, Professor Jay, Juma Nature, Dully Sykes, etc.
-
Singeli: A genre of electronic music that emerged in Tanzania in the 2000s. It features fast-paced beats that range from 180 to 300 beats per minute, often accompanied by vocals that are sung or rapped in Swahili or other languages. Some of the popular artists of singeli are S Kide, Man Fongo, Sholo Mwamba, etc.
-
Kwaito: A genre of dance music that originated in South Africa in the 1990s and spread to other African countries, including Tanzania. It features slow-tempo beats that are influenced by house music, disco, funk, etc., often mixed with vocals that are sung or rapped in local languages. Some of the popular artists of kwaito are TID, Mr Nice, Chege, etc.
-
Zouk: A genre of music that originated in the French Caribbean islands in the 1980s and became popular in Africa in the 1990s. It features smooth and sensual rhythms that are influenced by kompa, cadence-lypso, salsa, etc., often sung in French Creole or other languages. Some of the popular artists of zouk are Ali Kiba, Lady Jaydee, Ray C, etc.
-
-
How can I discover new music from Tanzania and other African countries?
-
There are many ways to discover new music from Tanzania and other African countries, such as:
-
-
Listening to radio stations or podcasts that play music from different regions and genres.
-
Following music blogs or websites that review and recommend music from different artists and scenes.
-
Subscribing to music streaming services or platforms that offer curated playlists and suggestions based on your preferences and tastes.
-
Joining online communities or forums that discuss and share music from different cultures and backgrounds.
-
Attending live concerts or festivals that showcase music from different performers and styles.
-
-
How can I share my feedback or opinions on 20 Percent ft Mr Blue?
-
If you want to share your feedback or opinions on 20 Percent ft Mr Blue, you can do so by:
-
-
Leaving a comment or rating on the platform where you downloaded or streamed the song.
-
Posting a review or a video on your blog, website, or social media page.
-
Creating a cover, a remix, or a parody of the song and uploading it online.
-
Contacting the artists directly through their official websites or social media pages.
-
Participating in online polls or surveys that ask for your opinion on the song.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/SD Card Cleaner How to Keep Your SD Card Clean and Tidy.md b/spaces/congsaPfin/Manga-OCR/logs/SD Card Cleaner How to Keep Your SD Card Clean and Tidy.md
deleted file mode 100644
index c83dbf8de67286d5237d5ba0abbbd00309b6843a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/SD Card Cleaner How to Keep Your SD Card Clean and Tidy.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
How to Use SD Card Cleaner APK to Free Up Space on Your Android Device
-
Do you have an Android device with an SD card that is full of junk and large files? Do you want to get rid of them and free up some space? If yes, then you may want to try SD Card Cleaner APK, a tool that can help you scan your SD cards and delete large files that you don't need anymore. In this article, we will show you what SD Card Cleaner APK is, how to download and install it, how to use it, and what its benefits and drawbacks are.
-
What is SD Card Cleaner APK and Why You Need It
-
SD Card Cleaner APK is a tool that scans your SD cards and deletes large files that you don't need anymore
-
SD Card Cleaner APK is an app that can help you clean your SD cards and keep them tidy. It can easily scan your SD cards and identify large files that are taking up space. These files may include photos, videos, music, documents, apps, cache, or other data. You can view the results by categories and preview the files before deleting them. You can also select which files you want to delete and which ones you want to keep.
SD Card Cleaner APK can help you save space, improve performance, and avoid errors on your device
-
By using SD Card Cleaner APK, you can free up some space on your SD cards and make room for new files. This can also improve the performance of your device, as it will run faster and smoother. Moreover, deleting large files can help you avoid errors or problems on your device, such as slow loading, crashing, or freezing. You can also prevent data corruption or loss by keeping your SD cards clean.
-
How to Download and Install SD Card Cleaner APK on Your Android Device
-
You can download SD Card Cleaner APK from APKCombo or other trusted sources
-
One of the ways to get SD Card Cleaner APK is to download it from APKCombo, a website that offers free and safe downloads of various apps. You can also search for other sources online, but make sure they are reliable and secure. Avoid downloading from unknown or suspicious websites, as they may contain malware or viruses.
-
You need to enable unknown sources in your settings to install SD Card Cleaner APK
-
Before you can install SD Card Cleaner APK on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:
-
-
Go to your device's settings and tap on security or privacy.
-
Find the option that says unknown sources or install unknown apps and toggle it on.
-
You may see a warning message that says installing from unknown sources may harm your device. Tap on OK or allow to proceed.
-
-
You can follow the instructions on the screen to complete the installation process
-
Once you have enabled unknown sources, you can proceed to install SD Card Cleaner APK on your device. To do this, follow these steps (an alternative computer-based sketch follows the list):
-
-
Locate the SD Card Cleaner APK file that you have downloaded and tap on it.
-
You may see a pop-up window that asks you to confirm the installation. Tap on install or next to continue.
-
Wait for the installation to finish. You may see a progress bar or a notification that says installing.
-
When the installation is done, you may see a message that says app installed or done. Tap on open or launch to start using SD Card Cleaner APK.
-
-
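If you prefer to install the file from a computer instead of tapping through the dialogs on the phone, a common alternative is Android's adb tool. The snippet below is only a sketch: it assumes the Android platform-tools are installed on your computer, USB debugging is enabled on the phone, and the APK file name is a placeholder for the file you actually downloaded.

```python
import subprocess

APK_PATH = "SDCardCleaner.apk"  # placeholder: path of the APK saved on your computer

# "adb install -r" installs the package, replacing an older version if one is present.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```

The end result is the same as installing on the device itself: the app appears in your app drawer once the command finishes.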
How to Use SD Card Cleaner APK to Scan and Delete Large Files on Your SD Cards
-
You can launch SD Card Cleaner APK from your app drawer or home screen
-
After you have installed SD Card Cleaner APK, you can easily access it from your app drawer or home screen. You may see an icon that looks like a blue SD card with a broom on it. Tap on it to open SD Card Cleaner APK.
-
You can select the SD cards that you want to scan and tap on the scan button
-
When you open SD Card Cleaner APK, you will see a list of all the SD cards that are connected to your device. You can select one or more SD cards that you want to scan by tapping on them. You will see a check mark next to the selected SD cards. You can also tap on select all to scan all the SD cards at once. When you are done selecting, tap on the scan button at the bottom of the screen.
-
You can view the results by categories and preview the files before deleting them
-
After you tap on the scan button, SD Card Cleaner APK will start scanning your selected SD cards and show you the results by categories. You will see how much space each category is taking up and how many files are in each category. The categories may include photos, videos, music, documents, apps, cache, or other data. You can tap on each category to view the files in it. You can also preview the files by tapping on them and see their details such as name, size, date, and path.
-
You can select the files that you want to delete and tap on the delete button
-
When you view the files in each category, you can select the ones that you want to delete by tapping on them. You will see a check mark next to the selected files. You can also tap on select all to delete all the files in that category. When you are done selecting, tap on the delete button at the bottom of the screen.
-
-
You can also use the settings menu to customize your preferences and options
-
If you want to change some settings or options of SD Card Cleaner APK, you can tap on the menu icon at the top right corner of the screen and select settings. You will see a list of options that you can customize, such as:
-
-
Language: You can choose the language of the app from English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, Hindi, Indonesian, or Vietnamese.
-
Theme: You can choose the theme of the app from light or dark.
-
Scan mode: You can choose how deep or fast you want the app to scan your SD cards from normal or advanced.
-
Delete mode: You can choose how secure or quick you want the app to delete your files from normal or shred.
-
Notification: You can choose whether you want to receive notifications from the app or not.
-
Feedback: You can send feedback or suggestions to the developers of the app.
-
About: You can view information about the app such as version, developer, website, email, privacy policy, and terms of service.
-
-
Benefits and Drawbacks of Using SD Card Cleaner APK
-
Benefits of using SD Card Cleaner APK include:
-
-
It's easy, fast, and elegant: SD Card Cleaner APK has a simple and user-friendly interface that makes it easy to use. It can scan and delete large files quickly and efficiently. It also has a sleek and stylish design that makes it pleasing to the eye.
-
It's free and has no ads: SD Card Cleaner APK is completely free to download and use. It has no ads or in-app purchases that may annoy or distract you.
-
It's compatible with most Android devices and SD cards: SD Card Cleaner APK can work with most Android devices that have SD card slots. It can also support various types of SD cards, such as microSD, miniSD, SDHC, SDXC, etc.
-
It can help you free up space, boost performance, and prevent errors on your device: SD Card Cleaner APK can help you get rid of large files that are taking up space on your SD cards. This can also improve the performance of your device, as it will run faster and smoother. Moreover, deleting large files can help you avoid errors or problems on your device, such as slow loading, crashing, or freezing. You can also prevent data corruption or loss by keeping your SD cards clean.
-
-
Drawbacks of using SD Card Cleaner APK include:
-
-
It requires internet connection for some features: SD Card Cleaner APK needs internet connection to download and install the app. It also needs internet connection to access some features, such as language selection, feedback, or about.
-
It may not detect all large files on your SD cards: SD Card Cleaner APK may not be able to scan or delete all large files on your SD cards. Some files may be hidden, protected, or encrypted. Some files may also be part of system or app data that cannot be deleted.
-
It may delete some files that you may need later: SD Card Cleaner APK may delete some files that you may not want to delete. These files may include important documents, photos, videos, music, or other data. You may also accidentally delete some files that you did not intend to delete. Once you delete the files, you may not be able to recover them.
-
-
Conclusion and FAQs
-
In conclusion, SD Card Cleaner APK is a tool that can help you scan your SD cards and delete large files that you don't need anymore. It can help you save space, improve performance, and avoid errors on your device. However, it also has some drawbacks, such as requiring internet connection for some features, not detecting all large files on your SD cards, and deleting some files that you may need later. Therefore, you should use it with caution and discretion.
-
Here are some FAQs that you may have about SD Card Cleaner APK:
-
-
Q: Is SD Card Cleaner APK safe to use?
-
A: SD Card Cleaner APK is safe to use if you download it from trusted sources and enable unknown sources in your settings. It does not contain any malware or viruses that may harm your device. However, you should always scan the app with an antivirus before installing it.
-
Q: How much space can I save by using SD Card Cleaner APK?
-
A: The amount of space that you can save by using SD Card Cleaner APK depends on how many large files you have on your SD cards and how many of them you delete. You can see how much space each category is taking up and how much space you can free up by deleting them.
-
Q: Can I undo the deletion of the files by using SD Card Cleaner APK?
-
A: No, you cannot undo the deletion of the files by using SD Card Cleaner APK. Once you delete the files, they are gone forever. You may not be able to recover them by using any recovery tools or methods. Therefore, you should be careful and selective when deleting the files.
-
Q: Can I use SD Card Cleaner APK to scan and delete large files on my internal storage?
-
A: No, you cannot use SD Card Cleaner APK to scan and delete large files on your internal storage. The app only works with external SD cards that are connected to your device. If you want to clean your internal storage, you may need to use other apps or methods.
-
Q: How can I contact the developers of SD Card Cleaner APK?
-
A: If you have any questions, feedback, or suggestions for the developers of SD Card Cleaner APK, you can contact them by using the feedback option in the settings menu of the app. You can also visit their website or email them at the following addresses:
I hope this article has helped you learn how to use SD Card Cleaner APK to free up space on your Android device. If you liked this article, please share it with your friends and family. Thank you for reading!
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sniper Elite V2 The Game That Lets You Change History with One Bullet - Download Here.md b/spaces/congsaPfin/Manga-OCR/logs/Sniper Elite V2 The Game That Lets You Change History with One Bullet - Download Here.md
deleted file mode 100644
index fa8d59adae217835998518bdb06f4f0051b08152..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Sniper Elite V2 The Game That Lets You Change History with One Bullet - Download Here.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Sniper Elite V2 Download: How to Get the Ultimate WWII Sniping Experience
-
If you are a fan of sniping games, you might have heard of Sniper Elite V2, an award-winning and authentic World War II sniping experience. In this game, you play as elite US sniper Karl Fairburne, who is parachuted into Berlin amidst the Germans’ final stand. Your mission is to prevent Nazi V2 rocket program technology from falling into the hands of the besieging Red Army. Sounds exciting, right? But how can you get this game on your PC or console? In this article, we will tell you everything you need to know about Sniper Elite V2 download, including what the game is about, how to download it, and why you should download it. Let’s get started!
-
What is Sniper Elite V2?
-
Sniper Elite V2 is a sniping game developed by Rebellion and released in 2012. It is a remake of the 2005 game Sniper Elite, with improved graphics, gameplay, and features. The game is set in the final days of World War II, in the war-torn city of Berlin. You will have to use your skills, stealth, and strategy to complete your objectives and survive the enemy fire. Here are some of the aspects that make Sniper Elite V2 a great sniping game:
A realistic sniping simulation

Sniper Elite V2 features a detailed sniping simulation with advanced ballistics, taking into account gravity, wind, velocity, bullet penetration, aim stability, and more. You will have to adjust your scope, hold your breath, and time your shot carefully to hit your target. You will also have to deal with realistic weapon behavior, such as recoil, reload time, and ammo management. The game also features an amazing 'kill cam' technology that showcases what really happens when a bullet enters an enemy's body, allowing you to see hearts and lungs tear, livers burst, and bones shatter in X-Ray vision.
-
A thrilling and immersive story
-
Sniper Elite V2 follows the story of Karl Fairburne, a US OSS agent who is sent to Berlin to stop the Nazis from launching their V2 rockets. You will have to infiltrate enemy bases, sabotage their plans, assassinate their leaders, and uncover their secrets. The game has a cinematic presentation, with realistic cutscenes, voice acting, and sound effects. The game is also historically accurate, featuring authentic World War II locations, vehicles, weapons, and uniforms.
-
A variety of modes and content
-
Sniper Elite V2 offers a lot of content for you to enjoy. The game has a single-player campaign mode that consists of 11 missions that can be played on different difficulty levels. The game also has a co-op mode that allows you to play with a friend online or locally in two modes: Kill Tally (survive waves of enemies) and Overwatch (one player snipes while the other spots). The game also has a multiplayer mode that supports up to 12 players online in seven modes: Team Deathmatch, Deathmatch, Team Distance King, Distance King, Team Dogtag Harvest, Dogtag Harvest, and Capture the Flag. Additionally, the game has four DLC packs that add new missions (including the ultimate sniping mission - Kill Hitler) and new weapons for you to use.
-
How to Download Sniper Elite V2?
-
Now that you know what Sniper Elite V2 is about, you might be wondering how to download it on your device. Here are some of the steps you need to follow:
-
System requirements
-
Before you download Sniper Elite V2, you need to make sure that your device meets the minimum system requirements for the game. According to the official website, these are:

- A dual-core CPU with SSE3 support (Intel Pentium D 3 GHz / AMD Athlon 64 X2 4200 / Intel Core i3-2100 / AMD A8-5600K) or better
- At least 2 GB of RAM (4 GB for the remastered version)
- A video card with at least 2 GB of VRAM and DirectX 11 support (NVIDIA GeForce 8800 / GTX 650 / GT 720 or ATI Radeon HD 3870 / 5570) or better
- Windows Vista SP2, Windows 7 SP1, or Windows 10 (64-bit)
- 15 GB of free disk space

If your device meets these requirements, you can proceed to the next step.
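Most of these requirements can be read off your PC's system information, but the 15 GB of free disk space is easy to check with a short script. The following is a minimal Python sketch; the drive letter is an assumption, so point it at whichever drive you plan to install the game on.

```python
import shutil

REQUIRED_GB = 15  # free space the game asks for

total, used, free = shutil.disk_usage("C:\\")  # use "/" instead on Linux or macOS
free_gb = free / (1024 ** 3)

print(f"Free space: {free_gb:.1f} GB")
print("Enough room for the game." if free_gb >= REQUIRED_GB else "Not enough free space.")
```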
-
Platforms and sources
-
Sniper Elite V2 is available for various platforms, including Microsoft Windows, PlayStation 3, Xbox 360, Wii U, Nintendo Switch, PlayStation 4, and Xbox One. However, the original version of the game is no longer for sale on Steam, as it has been replaced by the remastered version, which was released in 2019 and features enhanced graphics, new playable characters, a photo mode, and all the DLC content. The remastered version also supports crossplay across Steam (PC), Windows PC and Xbox One, meaning you can play online with your friends who use different devices. You can purchase Sniper Elite V2 Remastered from various sources, depending on your platform of choice. For PC users, you can buy the game from Steam, GOG, or the Rebellion store. For console users, you can buy the game from the PlayStation Store, the Microsoft Store, or the Nintendo eShop. The prices may vary depending on your region and currency, but generally, the game costs around $35 USD.
-
-
Installation and activation
-
Once you have purchased Sniper Elite V2 Remastered from your preferred source, you can download and install it on your device. The download size may vary depending on your platform and version, but it should be around 15 GB. The installation process may take a few minutes to complete, depending on your device's performance and internet speed. After the installation is done, you can launch the game from your device's menu or library. You may need to create or log in to your Rebellion account to activate the game and access its online features. You may also need to update the game to its latest version before playing. Once everything is set up, you can start enjoying Sniper Elite V2 Remastered!
-
Why Download Sniper Elite V2?
-
Now that you know how to download Sniper Elite V2 Remastered, you might be wondering why you should download it in the first place. What makes this game worth playing? Here are some of the reasons why Sniper Elite V2 Remastered is a great sniping game:
-
The benefits of playing Sniper Elite V2
-
Sniper Elite V2 Remastered is not just a mindless shooter game. It is a game that challenges your skills, stealth, and strategy as a sniper. Playing this game can have some benefits for you, such as:

- Improving your concentration and focus: Sniping requires you to pay attention to every detail in your environment, such as wind direction, bullet drop, enemy movement, and more. You also need to time your shots carefully and avoid being detected by your enemies. Playing Sniper Elite V2 Remastered can help you improve your concentration and focus skills in real life.
- Enhancing your creativity and problem-solving: Sniping also requires you to think creatively and solve problems in different situations. You need to find the best vantage point, use the environment to your advantage, set traps and diversions for your enemies, and choose the best weapon for each scenario. Playing Sniper Elite V2 Remastered can help you enhance your creativity and problem-solving skills in real life.
- Having fun and relaxing: Sniping can also be fun and relaxing. You can enjoy the thrill of taking down your enemies with precise shots, watching the kill-cam animations in X-Ray vision, exploring the historical locations of Berlin, and playing with your friends online or locally. Playing Sniper Elite V2 Remastered can help you have fun and relax in real life.
-
The features that make Sniper Elite V2 stand out
-
Sniper Elite V2 Remastered is not just another sniping game. It is a game that has some unique features that make it stand out from other games in the genre. Some of these features are:

- The realistic sniping simulation: Sniper Elite V2 Remastered features a detailed sniping simulation with advanced ballistics, taking into account gravity, wind, velocity, bullet penetration, aim stability, and more. You will have to adjust your scope, hold your breath, and time your shot carefully to hit your target. You will also have to deal with realistic weapon behavior, such as recoil, reload time, and ammo management. The game also features an amazing 'kill cam' technology that showcases what really happens when a bullet enters an enemy's body, allowing you to see hearts and lungs tear, livers burst, and bones shatter in X-Ray vision.
- The thrilling and immersive story: Sniper Elite V2 Remastered follows the story of Karl Fairburne, a US OSS agent who is sent to Berlin to stop the Nazis from launching their V2 rockets. You will have to infiltrate enemy bases, sabotage their plans, assassinate their leaders, and uncover their secrets. The game has a cinematic presentation, with realistic cutscenes, voice acting, and sound effects. The game is also historically accurate, featuring authentic World War II locations, vehicles, weapons, and uniforms.
- The variety of modes and content: Sniper Elite V2 Remastered offers a lot of content for you to enjoy. The game has a single-player campaign mode that consists of 11 missions that can be played on different difficulty levels. The game also has a co-op mode that allows you to play with a friend online or locally in two modes: Kill Tally (survive waves of enemies) and Overwatch (one player snipes while the other spots). The game also has a multiplayer mode that supports up to 12 players online in seven modes: Team Deathmatch, Deathmatch, Team Distance King, Distance King, Team Dogtag Harvest, Dogtag Harvest, and Capture the Flag. Additionally, the game has four DLC packs that add new missions (including the ultimate sniping mission - Kill Hitler) and new weapons for you to use.
The reviews and ratings of Sniper Elite V2
-
Sniper Elite V2 Remastered has received positive reviews and ratings from critics and players alike. The game has an average score of 7.5/10 on Metacritic, based on 32 critic reviews and 64 user ratings. The game has also received positive feedback on Steam, where it has over 1,600 reviews with an overall rating of 'Very Positive'. Some of the common praises for the game are: - The realistic and satisfying sniping mechanics - The stunning and enhanced graphics - The fun and varied gameplay modes - The interesting and immersive story - The value for money Some of the common criticisms for the game are: - The occasional bugs and glitches - The lack of innovation from the original version - The repetitive and linear level design - The mediocre AI and stealth system - The limited customization options
-
Conclusion
-
Sniper Elite V2 Remastered is a sniping game that offers you the ultimate World War II sniping experience. You will have to use your skills, stealth, and strategy to complete your objectives and survive the enemy fire. You will also enjoy the realistic sniping simulation, the thrilling and immersive story, and the variety of modes and content. If you are looking for a sniping game that is challenging, fun, and authentic, you should definitely download Sniper Elite V2 Remastered today!
-
FAQs
-
Here are some of the frequently asked questions about Sniper Elite V2 Remastered:
-
Q: How long is Sniper Elite V2 Remastered?
-
A: According to HowLongToBeat, the average time to beat Sniper Elite V2 Remastered is around 8 hours for the main story mode, 10 hours for the main story plus extras mode, and 16 hours for the completionist mode.
-
Q: Is Sniper Elite V2 Remastered cross-platform?
-
A: Yes, Sniper Elite V2 Remastered supports crossplay across Steam (PC), Windows PC and Xbox One. However, it does not support crossplay with PlayStation 4 or Nintendo Switch.
-
Q: Is Sniper Elite V2 Remastered multiplayer?
-
A: Yes, Sniper Elite V2 Remastered has a multiplayer mode that supports up to 12 players online in six modes: Team Deathmatch, Deathmatch, Team Distance King, Distance King, Team Dogtag Harvest, Dogtag Harvest and Capture the Flag. The game also has a co-op mode that allows you to play with a friend online or locally in two modes: Kill Tally (survive waves of enemies) and Overwatch (one player snipes while the other spots).
-
Q: Is Sniper Elite V2 Remastered worth it?
-
A: Sniper Elite V2 Remastered is worth it if you are looking for a sniping game that is realistic, authentic, and fun. The game has improved graphics, new playable characters, a photo mode, and all the DLC content from the original version. The game also has a lot of replay value, as you can play the game on different difficulty levels, with different weapons, and with different modes. The game also has a reasonable price, especially if you already own the original version, as you can get a 70% discount on Steam.
-
Q: How to get Sniper Elite V2 Remastered for free?
-
A: Sniper Elite V2 Remastered is not a free game, and you will have to purchase it from your preferred source to play it. However, there are some ways to get the game for free or at a lower price, such as:

- Waiting for a sale or a discount: Sniper Elite V2 Remastered may go on sale or have a discount on various platforms and sources from time to time. You can check the official website or the social media accounts of Rebellion for any news or updates on this.
- Using a coupon or a voucher: Sniper Elite V2 Remastered may have some coupons or vouchers that you can use to get the game for free or at a lower price. You can check the official website or the social media accounts of Rebellion for any news or updates on this.
- Participating in a giveaway or a contest: Sniper Elite V2 Remastered may have some giveaways or contests that you can enter to win the game for free. You can check the official website or the social media accounts of Rebellion for any news or updates on this.
- Downloading a cracked version: Sniper Elite V2 Remastered may have some cracked versions that you can download for free from some websites or torrents. However, this is not recommended, as it is illegal, unethical, and risky. You may face legal consequences, lose your account, damage your device, or expose your personal information by doing this.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Source for Downloading 3D Objects for Windows 10 - Free and Premium Models Available.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Source for Downloading 3D Objects for Windows 10 - Free and Premium Models Available.md
deleted file mode 100644
index 087df576189bd330a256f414634d190a4a0aa46c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Source for Downloading 3D Objects for Windows 10 - Free and Premium Models Available.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
How to Download 3D Objects for Windows 10
-
Have you ever wanted to create, view, or use 3D objects on your computer? If you have Windows 10, you can do that easily with some built-in apps and online resources. In this article, we will show you how to download 3D objects for Windows 10 and what you can do with them.
-
What are 3D Objects and Why They Are Useful
-
3D objects are digital models that represent physical things in three dimensions. They can have different shapes, colors, textures, and animations. You can use them for various purposes, such as design, 3D printing, mixed reality, and sharing your creations online.
What are the Benefits of Using 3D Objects in Windows 10
-
Windows 10 is a great platform for working with 3D objects. Here are some of the benefits of using them in Windows 10:
-
-
You can access a variety of free and paid apps from the Microsoft Store that let you create, view, edit, and share 3D objects.
-
You can use the built-in Paint 3D app to draw, paint, or sculpt your own 3D objects or import them from other sources.
-
You can use the built-in Mixed Reality Viewer app to place your 3D objects in your real environment and take mixed reality photos.
-
You can use the built-in Remix 3D community to browse, download, remix, and upload thousands of 3D objects created by other users.
-
You can use the built-in File Explorer to manage your 3D objects in a dedicated folder called "3D Objects".
-
-
How to Download 3D Objects for Windows 10
-
There are two main ways to download 3D objects for Windows 10: from the Microsoft Store or from other websites. Let's see how to do both.
-
From the Microsoft Store
-
The Microsoft Store offers several apps that let you download or create your own 3D objects. Here are some of the most popular ones:
-
-
How to Use 3D Builder App
-
The 3D Builder app is a free app that lets you view, create, edit, and print your own or downloaded models. You can also scan real objects with your webcam and turn them into digital models. To use it, follow these steps:
-
-
Open the Microsoft Store app on your PC and search for "3D Builder".
-
Click on "Get" or "Install" to download and install the app.
-
Open the app from the Start menu or the taskbar.
-
Select "New scene" to start a new project or "Load model" to open an existing one.
-
Use the toolbar on the left to add shapes, text, stickers, or custom models.
-
Use the toolbar on the top to modify, rotate, scale, group, or split your models.
-
Select "Save as" from the menu bar to save your project as a 3MF, STL, OBJ, PLY, or VRML file.
-
Select "Print" from the menu bar to send your model to a 3D printer or an online printing service.
-
-
How to Use Paint 3D App
-
The Paint 3D app is a free app that lets you draw, paint, or sculpt your own 3D objects or import them from other sources. You can also add stickers, effects, text, or backgrounds to your creations. To use it, follow these steps:
-
-
Open the Paint 3D app from the Start menu or the taskbar.
-
Select "New" to start a new project or "Open" to open an existing one.
-
Use the toolbar on the top to switch between 2D and 3D modes.
-
Use the tools on the right to draw, paint, or sculpt your models.
-
Use the tools on the left to add stickers, effects, text, or backgrounds.
-
Select "Menu" from the top left corner to save your project as a 3MF, PNG, JPEG, GIF, or BMP file.
-
Select "Share" from the top right corner to share your project via email, social media, or Remix 3D.
-
-
How to Use Mixed Reality Viewer App
-
The Mixed Reality Viewer app is a free app that lets you view your 3D objects in your real environment using your device's camera. You can also take mixed reality photos or videos and share them with others. To use it, follow these steps:
-
-
Open the Mixed Reality Viewer app from the Start menu or the taskbar.
-
Select "Browse" to choose a 3D object from your PC or Remix 3D.
-
Select "Mixed reality" to turn on your device's camera and place your 3D object in your real environment.
-
Use the buttons on the bottom to move, rotate, scale, or animate your 3D object.
-
Select "Capture" to take a mixed reality photo or video and save it to your PC.
-
Select "Share" to share your mixed reality photo or video via email, social media, or Remix 3D.
-
-
From Other Websites
-
If you want more options for downloading 3D objects for Windows 10, you can also visit some of the websites that offer free or paid models. Here are some of the most popular ones:
-
How to Use Sketchfab Website
-
The Sketchfab website is a platform that hosts over 4 million 3D models created by artists and professionals. You can browse, download, remix, and upload models in various formats and categories. To use it, follow these steps:
-
-
Visit the Sketchfab website and sign up for a free account.
-
Browse the models by categories, tags, formats, licenses, or collections.
-
Click on a model that you like and view it in 3D on your browser.
-
Select "Download" from the bottom right corner if the model is available for free download.
-
Select "Buy" from the bottom right corner if the model is available for purchase.
-
Select "Add to collection" from the bottom right corner if you want to save the model for later use.
-
Select "Share" from the bottom right corner if you want to share the model via email, social media, or embed code.
-
-
How to Use Tinkercad Website
-
The Tinkercad website is a platform that lets you create your own 3D models using simple shapes and tools. You can also browse, download, remix, and upload models created by other users. To use it, follow these steps:
-
-
Visit the Tinkercad website and sign up for a free account.
-
Select "Create new design" to start a new project or "Learn" to take some tutorials.
-
Use the toolbar on the right to add shapes, text, numbers, symbols, or imported models.
-
Use the toolbar on the top to modify, rotate, scale, group, align, or duplicate your models.
-
Select "Export" from the top right corner to save your project as a STL, stories.
-
You can also share or view 3D content on other platforms, such as:

YouTube: A video-sharing platform that lets you upload and view 3D videos in various formats and resolutions.
-
WordPress: A website-building platform that lets you embed 3D models in your posts or pages using plugins or shortcodes.
-
Google Poly: A platform that lets you browse, download, remix, and upload 3D models for virtual and augmented reality.
-
-
How can I print 3D objects with a 3D printer?
-
If you want to print your downloaded or created 3D objects with a 3D printer, you need to make sure that your models are compatible with your printer and that you have the right software and settings. Here are some general steps to follow:
-
-
Save your 3D model as a STL, OBJ, or 3MF file, which are the most common formats for 3D printing (a small sketch of the STL format follows this list).
-
Open your 3D model in a slicing software, such as Cura, Slic3r, or PrusaSlicer, which will convert your model into layers and instructions for your printer.
-
Adjust the settings for your printer, such as the nozzle size, layer height, infill density, print speed, and temperature.
-
Preview the sliced model and check for any errors or issues.
-
Save the sliced model as a G-code file, which is the language that your printer understands.
-
Transfer the G-code file to your printer via USB, SD card, or Wi-Fi.
-
Start the printing process and monitor the progress.
-
Remove the printed object from the printer and clean it up if needed.
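To give a feel for what the STL format mentioned in the first step actually contains, here is a minimal sketch that writes a one-triangle ASCII STL file in plain Python. The coordinates and the file name are arbitrary placeholders; a real model exported from the apps above will contain thousands of such facets.

```python
# Write a single-triangle ASCII STL file (units are arbitrary; slicers usually assume millimetres).
triangle = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]  # three vertices of one facet

lines = ["solid demo", "  facet normal 0 0 1", "    outer loop"]
lines += [f"      vertex {x} {y} {z}" for x, y, z in triangle]
lines += ["    endloop", "  endfacet", "endsolid demo"]

with open("demo.stl", "w") as f:
    f.write("\n".join(lines) + "\n")
```

Opening the resulting demo.stl in a slicer such as Cura shows the same import-and-slice workflow described above, just with a trivially small model.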
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Treasure Of Nadia Final Apk Download.md b/spaces/congsaPfin/Manga-OCR/logs/Treasure Of Nadia Final Apk Download.md
deleted file mode 100644
index 0c8e44178baf83440aac37d2ea7f4ecf002cc584..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Treasure Of Nadia Final Apk Download.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
Treasure of Nadia: A Guide to Downloading the Final APK Version
-
Treasure of Nadia is a popular adventure game for Android devices that follows the story of a young treasure hunter who inherits his father's legacy and meets 12 beautiful women along the way. The game is the sequel to Lust Epidemic, and it features improved graphics, animations, and gameplay. If you are looking for a way to download the final APK version of Treasure of Nadia for free, you have come to the right place. In this article, I will show you how to find, install, and enjoy this game on your Android device.
What is an APK file?

An APK file is a package file format that contains all the files and data needed to run an Android application. APK stands for Android Package Kit, and it is similar to an EXE file on Windows or a DMG file on Mac. APK files can be downloaded from various sources online, such as official websites, app stores, or third-party platforms. However, not all APK files are safe and reliable, so you need to be careful when downloading them.
-
Why download an APK file?
-
There are several reasons why you might want to download an APK file instead of using the Google Play Store or other app stores. Some of them are:
-
-
You want to access an app that is not available in your region or country.
-
You want to update an app that is not yet updated on the app store.
-
You want to try a beta or modded version of an app that has extra features or content.
-
You want to save bandwidth or storage space by downloading a smaller or compressed file.
-
You want to backup or share an app with someone else.
-
-
How to download an APK file?
-
To download an APK file, you need to find a trustworthy source that offers the file you are looking for. You can use a web browser or a file manager app on your Android device to search for and download the APK file. Some of the most popular sources for APK files are:
-
-
Malavida: A website that offers free downloads of Android apps and games, including Treasure of Nadia.

APKCombo: A website that provides fast and easy downloads of APK files for various Android devices and versions.

[APKPure](https://apkpure.com/): A website that hosts safe and verified APK files for thousands of Android apps and games.

[Aptoide](https://www.aptoide.com/): An alternative app store that allows users to download and install apps without any restrictions.

[Uptodown](https://www.uptodown.com/android): A website that offers downloads of Android apps and games in different languages and regions.
-
-
How to install an APK file?
-
To install an APK file on your Android device, you need to follow these steps:
-
-
Enable the installation of apps from unknown sources on your device settings. This option is usually found under Security or Privacy settings.
-
Locate the downloaded APK file on your device storage using a file manager app or your web browser.
-
Tap on the APK file and follow the instructions on the screen to install it.
-
Wait for the installation process to finish and launch the app from your app drawer or home screen.
-
-
How to enjoy Treasure of Nadia?
-
To enjoy Treasure of Nadia on your Android device, you need to have at least 4 GB of free storage space and Android 4.0.3 or higher. The game has a rating of 18+ and contains explicit scenes and content that may not be suitable for everyone. The game also requires an internet connection to play and update. Here are some tips on how to enjoy Treasure of Nadia:
-
-
Explore the island of Newhaven and discover its secrets and treasures.
-
Interact with the characters and build relationships with them.
-
Collect items and craft tools to help you in your quest.
-
Complete puzzles and mini-games to unlock new scenes and rewards.
-
Customize your appearance and wardrobe to suit your style.
-
-
Conclusion
-
Treasure of Nadia is a fun and exciting game that will keep you entertained for hours. It has a captivating story, stunning graphics, and engaging gameplay. If you want to download the final APK version of Treasure of Nadia for free, you can use one of the sources mentioned above. Just make sure to follow the steps on how to install and enjoy the game on your Android device. I hope you found this article helpful and informative. Happy treasure hunting!
-
FAQs
-
What is the difference between the final APK version and the previous versions of Treasure of Nadia?
-
The final APK version of Treasure of Nadia is the latest and most complete version of the game. It contains all the updates, bug fixes, and content that the developers have released so far. It also has better performance and compatibility with different Android devices and versions.
-
Is Treasure of Nadia safe to download and play?
-
Treasure of Nadia is safe to download and play as long as you use a reliable source for the APK file. However, you should always be careful when downloading any file from the internet, as some sources may contain malware or viruses that can harm your device or steal your data. You should also scan the APK file with an antivirus app before installing it.
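Besides scanning with an antivirus, you can also check that the file was not corrupted or tampered with on the way to your device by comparing its SHA-256 checksum with the one published by the download page, when such a checksum is available. The snippet below is a minimal Python sketch; the file name is a placeholder for whatever the downloaded APK is actually called.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 checksum of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed value with the checksum the download page lists, if it provides one.
print(sha256_of("treasure-of-nadia-final.apk"))
```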
-
How can I update Treasure of Nadia?
-
Treasure of Nadia requires an internet connection to play and update. The game will automatically check for updates when you launch it and prompt you to download them if available. You can also manually check for updates by going to the settings menu in the game and tapping on the update button.
-
How can I save my progress in Treasure of Nadia?
-
Treasure of Nadia has an auto-save feature that saves your progress every time you exit the game or change locations. You can also manually save your progress by going to the settings menu in the game and tapping on the save button. You can load your saved progress by tapping on the load button in the same menu.
-
How can I contact the developers of Treasure of Nadia?
-
If you have any questions, feedback, or issues with Treasure of Nadia, you can contact the developers by visiting their official website at [NLT Media](https://nlt-media.com/). You can also follow them on their social media accounts on [Twitter](https://twitter.com/nltmedia), [Patreon](https://www.patreon.com/nlt), or [Discord](https://discord.gg/nlt).
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlock Max Level and Unlimited Coins in Plant vs Zombie 1 with MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/Unlock Max Level and Unlimited Coins in Plant vs Zombie 1 with MOD APK.md
deleted file mode 100644
index b230677ea206624a1c89f49c2f334512b846d2bf..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Unlock Max Level and Unlimited Coins in Plant vs Zombie 1 with MOD APK.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
Plant vs Zombie 1 Mod APK Max Level: A Guide for Beginners
-
Do you love playing Plant vs Zombie 1, the classic tower defense game where you have to plant various plants to protect your lawn from waves of zombies? If yes, then you might want to try playing it with mod apk max level, a modified version of the game that gives you unlimited coins, suns, and max level for all plants and zombies. Sounds awesome, right?
-
In this article, we will show you how to download and install Plant vs Zombie 1 Mod APK Max Level, how to play it, what its features and benefits are, what its drawbacks and risks are, and whether it is worth it. By the end of this article, you will have a clear idea of whether you should play Plant vs Zombie 1 with mod apk max level or not. Let's get started!
How to download and install Plant vs Zombie 1 Mod APK Max Level
-
Before you can play Plant vs Zombie 1 with mod apk max level, you need to download and install the mod apk file on your device. Here are the steps you need to follow:
-
Step 1: Find a reliable source for the mod apk file
-
There are many websites that offer mod apk files for various games, but not all of them are trustworthy. Some of them may contain malware or viruses that can harm your device or steal your personal information. Therefore, you need to be careful when choosing a source for the mod apk file.
-
One of the reliable sources that we recommend is GetModsAPK, where you can find the latest version of Plant vs Zombie 1 Mod APK Max Level. This website has a good reputation and provides safe and secure downloads. You can also read the reviews and ratings from other users before downloading the mod apk file.
-
Step 2: Enable unknown sources on your device
-
Since the mod apk file is not from the official Google Play Store, you need to enable unknown sources on your device to allow the installation of third-party apps. To do this, go to your device settings, then security, then unknown sources, and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, just tap OK and continue.
Step 3: Download and install the mod apk file
-
-Now that you have enabled unknown sources on your device, you can proceed to download and install the mod apk file. To do this, go to the website where you found the mod apk file, such as GetModsAPK, and click on the download button. You may see a pop-up window that asks you to confirm the download; just click on OK.
-
Once the download is complete, you will see a notification that says the mod apk file is ready to install. Tap on the notification and follow the instructions on the screen to install the mod apk file. You may see another warning message that says the app may harm your device, but don't worry, just click on install anyway.
-
After the installation is done, you will see a message that says the app has been installed successfully. You can now open the app and enjoy playing Plant vs Zombie 1 with mod apk max level.
-
How to play Plant vs Zombie 1 Mod APK Max Level
-
Playing Plant vs Zombie 1 with mod apk max level is not much different from playing the original game, except that you have more coins, suns, and levels to use. Here are the steps you need to follow:
-
Step 1: Choose your game mode and difficulty level
-
When you open the app, you will see a menu where you can choose your game mode and difficulty level. There are four game modes to choose from: Adventure, Survival, Puzzle, and Mini-Games. Each game mode has its own challenges and objectives. You can also choose your difficulty level from Easy, Normal, or Hard.
-
We recommend starting with Adventure mode, which is the main story mode of the game. In this mode, you have to complete 50 levels across five different worlds: Day, Night, Pool, Fog, and Roof. Each world has its own unique plants, zombies, and obstacles. You can also unlock more plants and zombies as you progress through the levels.
-
Step 2: Plant your plants and defend your lawn from zombies
-
The core gameplay of Plant vs Zombie 1 is simple but addictive. You have to plant various plants on your lawn to defend it from waves of zombies that want to eat your brains. Each plant has its own abilities and costs a certain amount of suns to plant. Suns are generated by sunflowers or falling from the sky.
-
You have to strategically place your plants on different lanes and columns to maximize their effectiveness and prevent the zombies from reaching your house. You can also dig up or replace your plants if needed. You have to be careful of some zombies that have special abilities or weapons that can destroy or bypass your plants.
-
Step 3: Use coins, suns, and power-ups to boost your gameplay
-
With mod apk max level, you have unlimited coins and suns to use in the game. Coins are used to buy items from Crazy Dave's shop, such as extra seed slots, plant upgrades, mini-games, and more. Suns are used to plant more plants during the game.
-
You can also use power-ups to enhance your gameplay. Power-ups are special abilities that can be activated by tapping on them. There are three types of power-ups: Pinch, Flick, and Zap. Pinch allows you to pinch zombies' heads off with your fingers. Flick allows you to flick zombies off the screen with your fingers. Zap allows you to electrocute zombies with a lightning bolt.
-
Power-ups cost coins to use and have a cooldown time before they can be used again. They can be very useful when you are facing a large horde of zombies or a tough boss.
-
What are the features and benefits of Plant vs Zombie 1 Mod APK Max Level
-
Plant vs Zombie 1 Mod APK Max Level has many features and benefits that make it more fun and enjoyable than the original game. Here are some of them:
-
Feature 1: Unlimited coins and suns
-
With mod apk max level, you never have to worry about running out of coins or suns in the game. You can buy anything you want from Crazy Dave's shop without any limitations. You can also plant as many plants as you want without any restrictions.
-
This feature gives you more freedom and flexibility in your gameplay. You can experiment with different combinations of plants and items to find the best strategy for each level. You can also try out different game modes and difficulty levels without any fear of losing.
-
Feature 2: Max level for all plants and zombies
-
With mod apk max level, all your plants and zombies are at their maximum level. This means that they have the highest stats and abilities possible. For example, your peashooters can shoot faster and farther, your wall-nuts can withstand more damage, and your zombies can move faster and stronger.
-
This feature makes your gameplay more exciting and challenging. You can face more powerful enemies and bosses that require more strategy and skill to defeat. You can also enjoy the full potential of your plants and zombies and see how they perform in different situations.
-
Feature 3: No ads and no root required
-
With mod apk max level, you don't have to deal with annoying ads that interrupt your gameplay or waste your time. You can play the game without any distractions or interruptions.
-
Also, you don't need to root your device to use mod apk max level. Rooting is a process that gives you full access to your device's system, but it also voids your warranty and exposes you to security risks. With mod apk max level, you can enjoy the game without any risks or hassles.
-
What are the drawbacks and risks of Plant vs Zombie 1 Mod APK Max Level
-
Plant vs Zombie 1 Mod APK Max Level is not perfect, however. It also has some drawbacks and risks that you should be aware of before using it. Here are some of them:
-
Drawback 1: Possible compatibility issues with some devices
-
Since mod apk max level is a modified version of the game, it may not be compatible with some devices or operating systems. Some users have reported that the game crashes or freezes on their devices, or that some features don't work properly.
-
To avoid this problem, you should check the compatibility of the mod apk file with your device before downloading and installing it. You should also make sure that your device has enough storage space and memory to run the game smoothly.
-
Drawback 2: Possible security risks from malware or viruses
-
As mentioned earlier, not all sources for mod apk files are trustworthy. Some of them may contain malware or viruses that can infect your device or steal your personal information. This can cause serious damage to your device or compromise your privacy and security.
-
-To avoid this problem, you should only download mod apk files from reliable sources, such as GetModsAPK. You should also scan the mod apk file with reputable antivirus software before installing it. You should also be careful about granting the app any permissions that seem suspicious or unnecessary.
-
Drawback 3: Possible loss of game progress or data
-
Another risk of using mod apk max level is that you may lose your game progress or data if something goes wrong. For example, if the game crashes or updates, you may lose all your coins, suns, levels, plants, zombies, and items. This can be very frustrating and disappointing, especially if you have spent a lot of time and effort on the game.
-
To avoid this problem, you should backup your game data before using mod apk max level. You can do this by using a cloud service, such as Google Drive or Dropbox, or by using a third-party app, such as Titanium Backup or Helium Backup. You should also avoid updating the game unless there is a new version of mod apk max level available.
-
Conclusion: Is Plant vs Zombie 1 Mod APK Max Level worth it?
-
Plant vs Zombie 1 Mod APK Max Level is a fun and exciting way to play Plant vs Zombie 1, the classic tower defense game. It gives you unlimited coins, suns, and max level for all plants and zombies, which makes your gameplay more enjoyable and challenging. It also removes ads and does not require root access, which makes it more convenient and safe to use.
-
However, it also has some drawbacks and risks that you should consider before using it. It may not be compatible with some devices or operating systems, it may contain malware or viruses that can harm your device or steal your personal information, and it may cause loss of game progress or data if something goes wrong.
-
Therefore, we recommend using Plant vs Zombie 1 Mod APK Max Level only if you are willing to take these risks and if you are looking for a new way to experience the game. Otherwise, you may want to stick with the original game or look for other alternatives.
-
FAQs
-
Here are some frequently asked questions about Plant vs Zombie 1 Mod APK Max Level:
-
Q1: Is Plant vs Zombie 1 Mod APK Max Level safe to use?
-
-A1: Plant vs Zombie 1 Mod APK Max Level is safe to use if you download it from a reliable source, such as GetModsAPK, and if you scan it with reputable antivirus software before installing it. You should also be careful about granting the app any permissions that seem suspicious or unnecessary. However, there is always a risk of malware or viruses when downloading and installing mod apk files from unknown sources, so you should use it at your own discretion and responsibility.
-
Q2: How can I update Plant vs Zombie 1 Mod APK Max Level?
-
-A2: Plant vs Zombie 1 Mod APK Max Level is not updated automatically, unlike the original game. You have to manually check for updates on the website where you downloaded the mod apk file, such as GetModsAPK, and download and install the new version of the mod apk file. You should also back up your game data before updating, as you may lose your game progress or data if something goes wrong.
-
Q3: How can I backup my game data before using Plant vs Zombie 1 Mod APK Max Level?
-
A3: You can backup your game data by using a cloud service, such as Google Drive or Dropbox, or by using a third-party app, such as Titanium Backup or Helium Backup. You should backup your game data regularly, especially before using mod apk max level, updating the game, or uninstalling the app. This way, you can restore your game data if you lose it or if you want to switch back to the original game.
-
Q4: How can I uninstall Plant vs Zombie 1 Mod APK Max Level?
-
A4: You can uninstall Plant vs Zombie 1 Mod APK Max Level by following the same steps as uninstalling any other app on your device. Go to your device settings, then apps, then Plant vs Zombie 1, then uninstall. You may see a message that says the app has been uninstalled successfully. You can also delete the mod apk file from your device storage if you want to free up some space.
-
Q5: Where can I find more tips and tricks for Plant vs Zombie 1 Mod APK Max Level?
-
A5: You can find more tips and tricks for Plant vs Zombie 1 Mod APK Max Level by visiting online forums, blogs, or YouTube channels that are dedicated to the game. There, you can learn from other players who have used mod apk max level and share your own experiences and feedback. You can also ask questions and get answers from experts and enthusiasts.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Warpath MOD APK The Best Way to Experience the Story-Based War Against the Black Crow.md b/spaces/congsaPfin/Manga-OCR/logs/Warpath MOD APK The Best Way to Experience the Story-Based War Against the Black Crow.md
deleted file mode 100644
index ba9206860eb5cbdf19616cfb1ce1d77ecf4c8c76..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Warpath MOD APK The Best Way to Experience the Story-Based War Against the Black Crow.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
Warpath Mod APK Blackmod: How to Download and Play the Ultimate War Game
-
Introduction
-
If you are a fan of war games, you might have heard of Warpath, a real-time strategy game that lets you command your army in a massive world war. In this game, you can choose from hundreds of historical units, customize your weapons, and join forces with other players to fight against the evil Raven faction.
-
But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited resources, unlocked units, and free upgrades? Well, there is a way to do that, and it is called Warpath Mod APK Blackmod.
In this article, we will show you what Warpath Mod APK Blackmod is, why you should use it, and how to download and play it on your device. So, without further ado, let's get started!
-
What is Warpath?
-
Warpath is a mobile war game developed by Lilith Games, the same company behind popular titles like Rise of Kingdoms and AFK Arena. Warpath was released in November 2020 and has since gained millions of downloads and positive reviews from players around the world.
-
Warpath is set in an alternate history where World War II never ended. You play as a commander of one of the three factions: The Allies, The Axis, or The Legion. Your goal is to build your base, train your troops, and lead them into battle against the Raven faction, a mysterious enemy that threatens to destroy the world.
-
Warpath features stunning graphics, realistic physics, and immersive sound effects that make you feel like you are in the middle of a war zone. You can also interact with other players through chat, alliance, and diplomacy systems. You can cooperate or compete with them in various modes such as campaign, PvP, and alliance wars.
-
What is Blackmod?
-
Blackmod is a website that provides modded versions of various Android games. A modded version is a modified version that has some features or functions that are not available in the original version. For example, a modded version might have unlimited money, unlocked items, or free premium features.
-
Blackmod is one of the most popular and trusted sources of modded games on the internet. It has a large collection of games from different genres and categories. You can find action, adventure, role-playing, simulation, strategy, puzzle, casual, and more games on Blackmod. You can also request for mods that are not available on the website.
-
Blackmod is easy to use and safe to download. You don't need to root your device or sign up for an account to access the mods. You just need to visit the website, search for the game you want, download the mod file, and install it on your device. You can also check the comments section for feedback from other users.
-
Why use Warpath Mod APK Blackmod?
-
Warpath Mod APK Blackmod is a modded version of Warpath that has several advantages over the original version. Here are some of the reasons why you should use Warpath Mod APK Blackmod:
-
-
You can get unlimited gold, oil, steel, and other resources that you need to build your base and upgrade your units and weapons. You don't have to worry about running out of resources or waiting for them to generate.
-
You can unlock all the units, weapons, and commanders that are available in the game. You don't have to spend real money or complete missions to get them. You can choose from tanks, planes, artillery, infantry, and more. You can also customize your units with different skins and attachments.
-
You can get free upgrades for your units, weapons, and buildings. You don't have to spend resources or time to level up your army and base. You can instantly improve your combat power and efficiency.
-
You can enjoy the game without any ads or pop-ups. You don't have to watch videos or click on banners to get rewards or access features. You can have a smooth and uninterrupted gaming experience.
-
-
With Warpath Mod APK Blackmod, you can have more fun and excitement in playing Warpath. You can dominate the battlefield and crush your enemies with ease. You can also explore more features and content that the game has to offer.
-
How to Download and Install Warpath Mod APK Blackmod
-
If you are interested in using Warpath Mod APK Blackmod, you need to follow these simple steps to download and install it on your device:
-
Step 1: Visit the Blackmod website
-
The first thing you need to do is to visit the Blackmod website at https://blackmod.net/. This is where you can find the mod file for Warpath and other games. You can use any browser that you prefer, such as Chrome, Firefox, or Safari.
-
Step 2: Search for Warpath Mod APK
-
Once you are on the Blackmod website, you need to search for Warpath Mod APK in the search bar. You can also browse through the categories or use the filters to find the game. You will see a list of results that match your query. You need to select the one that has the latest version and the most downloads.
-
Step 3: Download the mod file
-
After you select the Warpath Mod APK that you want, you will be redirected to a page that contains more information about the mod, such as the features, screenshots, installation instructions, and comments. You need to scroll down to the bottom of the page and click on the download button. You will then see a pop-up window that asks you to verify that you are not a robot. You need to complete a simple captcha test and then click on continue. You will then see another pop-up window that shows you a countdown timer. You need to wait for a few seconds until the timer reaches zero and then click on get link. You will then be taken to another page where you can finally download the mod file.
-
Step 4: Enable unknown sources on your device
-
Before you install the mod file on your device, you need to make sure that you have enabled unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, you need to go to your device settings, then security, then unknown sources, and then toggle it on. You might see a warning message that tells you about the risks of installing apps from unknown sources. You need to ignore it and click on OK.
-
Step 5: Install the mod file
-
Now that you have downloaded the mod file and enabled unknown sources on your device, you are ready to install it on your device. To do that, you need to locate the mod file in your device storage, usually in the downloads folder. You need to tap on it and then click on install. You might see another warning message that tells you about the permissions that the app requires. You need to accept them and click on next. The installation process will take a few seconds or minutes depending on your device speed. Once it is done, you will see a message that says app installed. You can then click on open or done.
-
How to Play Warpath Mod APK Blackmod
-
Congratulations! You have successfully downloaded and installed Warpath Mod APK Blackmod on your device. Now you can enjoy playing the game with unlimited features and benefits. Here are some tips on how to play Warpath Mod APK Blackmod:
-
Step 1: Launch the game and choose your server
-
When you launch the game for the first time, you will be asked to choose your server region. You can choose from Asia, Europe, America, or Oceania. It is recommended that you choose the server that is closest to your location for better performance and connection. You can also change your server later if you want to join a different alliance or play with your friends.
-
Step 2: Create your commander and join an alliance
-
After you choose your server, you will be asked to create your commander name and avatar. You can choose from different styles and colors, or you can use a random generator. You can also change your name and avatar later if you want. You will then be introduced to the game tutorial, which will guide you through the basics of the game. You will also be given some rewards and tips to help you start your journey.
-
One of the most important things you need to do in Warpath is to join an alliance. An alliance is a group of players who share the same goals and interests. You can cooperate with your alliance members in various ways, such as chatting, trading, donating, helping, and fighting. You can also participate in alliance events and missions that can give you more rewards and benefits. You can join an existing alliance or create your own alliance if you have enough resources.
-
Step 3: Build your base and train your troops
-
Your base is your main headquarters in Warpath. It is where you can manage your resources, units, buildings, and research. You need to build and upgrade different buildings in your base, such as barracks, factories, warehouses, hospitals, command center, and more. Each building has a different function and benefit that can improve your army and base.
-
Your troops are your main force in Warpath. They are divided into four types: infantry, tank, artillery, and air force. Each type has its own strengths and weaknesses, as well as different units and weapons that you can choose from. You need to train and upgrade your troops in your barracks and factories, as well as equip them with the best weapons and attachments that you can find or craft. You can also customize your troops with different skins and formations.
-
Step 4: Explore the map and fight enemies
-
The map is the main battlefield in Warpath. It is where you can see the whole world of Warpath, as well as other players' bases and alliances. You can also find various resources, items, enemies, and events on the map. You need to explore the map and fight enemies to gain more resources, experience, and rewards. You can also capture or occupy different territories on the map to expand your influence and power.
-
The enemies in Warpath are mainly the Raven faction, a rogue group that wants to destroy the world with their advanced technology and weapons. They have different bases and outposts on the map that you need to attack and destroy. They also have different units and weapons that you need to counter and defeat. You can also encounter other players on the map who might be your allies or enemies depending on your alliance and diplomacy status.
-
Step 5: Enjoy the unlimited features of the mod
-
The best part of playing Warpath Mod APK Blackmod is that you can enjoy the unlimited features of the mod that make the game more fun and easy. You can have unlimited resources that you can use to build your base and upgrade your units and weapons. You can unlock all the units, weapons, commanders, skins, attachments, and more that are available in the game. You can get free upgrades for your units, weapons, buildings, researches, etc. You can enjoy the game without any ads or pop-ups that might interrupt or annoy you.
-
With Warpath Mod APK Blackmod, you can have the ultimate war game experience that you have always dreamed of. You can dominate the battlefield and crush your enemies with ease. You can also explore more features and content that the game has to offer.
-
Conclusion
-
Warpath is a great war game that lets you command your army in a massive world war. It has stunning graphics, realistic physics, immersive sound effects, interactive gameplay, and social features that make it one of the best war games on mobile devices.
-
But if you want to have more fun and excitement in playing Warpath, you should try Warpath Mod APK Blackmod. It is a modded version of Warpath that has unlimited features and benefits that make the game more enjoyable and easy. You can have unlimited resources, unlocked units, free upgrades, and more. You can also download and install it easily and safely from the Blackmod website.
-
If you are interested in Warpath Mod APK Blackmod, you can follow the steps that we have provided in this article. You can also check out the Blackmod website for more modded games that you might like. We hope that this article has helped you and that you have fun playing Warpath Mod APK Blackmod!
-
FAQs
-
Here are some of the frequently asked questions that you might have about Warpath Mod APK Blackmod:
-
-
Is Warpath Mod APK Blackmod safe to use?
-
Yes, Warpath Mod APK Blackmod is safe to use as long as you download it from the Blackmod website, which is a trusted and reliable source of modded games. You don't need to root your device or sign up for an account to use the mod. You also don't need to worry about viruses or malware that might harm your device or data.
-
Is Warpath Mod APK Blackmod compatible with my device?
-
Warpath Mod APK Blackmod is compatible with most Android devices that have Android 4.4 or higher versions. However, some devices might have different specifications or settings that might affect the performance or compatibility of the mod. If you encounter any issues or errors while using the mod, you can try to clear the cache, restart your device, or reinstall the mod.
-
Can I play Warpath Mod APK Blackmod online with other players?
-
Yes, you can play Warpath Mod APK Blackmod online with other players who are using the same mod or server as you. You can chat, trade, donate, help, and fight with them in various modes and events. However, you might not be able to play with players who are using the original version of Warpath or a different mod or server than you.
-
Can I update Warpath Mod APK Blackmod to the latest version?
-
Yes, you can update Warpath Mod APK Blackmod to the latest version whenever there is a new update available on the Blackmod website. You just need to download the new mod file and install it over the old one. You don't need to uninstall or delete the old one. However, you might lose some of your progress or data if you update the mod without backing it up first.
-
Can I request for a new feature or function in Warpath Mod APK Blackmod?
-
Yes, you can request for a new feature or function in Warpath Mod APK Blackmod if you have any suggestions or ideas that can improve the mod. You can leave a comment on the Blackmod website or contact the mod developer directly. However, there is no guarantee that your request will be granted or implemented.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Advance Steel 2018 Crack Xforce 32 Tips and Tricks for Using the Software.md b/spaces/contluForse/HuggingGPT/assets/Advance Steel 2018 Crack Xforce 32 Tips and Tricks for Using the Software.md
deleted file mode 100644
index 6480ee37cf4bca8020867d72fb8feb8235031eb7..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Advance Steel 2018 Crack Xforce 32 Tips and Tricks for Using the Software.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Arnold 2012 X Force 2012 X32.exe.iso A Comprehensive Review and Comparison.md b/spaces/contluForse/HuggingGPT/assets/Arnold 2012 X Force 2012 X32.exe.iso A Comprehensive Review and Comparison.md
deleted file mode 100644
index e9207d6d17b797afe4b5cddac6de67c8901d6939..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Arnold 2012 X Force 2012 X32.exe.iso A Comprehensive Review and Comparison.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
I have a 2012 MacBook Pro with a built in DVD/CD player. My hard drive crashed so I had to get a new one and my computer was upgraded to Catalina. I am no longer able to use my DVD/CD player. Any suggestions. The CD, DVD, iPod is checked. I am at a loss.
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Detective Byomkesh Bakshy! Hd 720p Downloadl - Uncover the Conspiracy to Unsettle Calcutta in this Movie.md b/spaces/contluForse/HuggingGPT/assets/Detective Byomkesh Bakshy! Hd 720p Downloadl - Uncover the Conspiracy to Unsettle Calcutta in this Movie.md
deleted file mode 100644
index 8094778832b3fb2c14f3e0d6a469366df9490526..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Detective Byomkesh Bakshy! Hd 720p Downloadl - Uncover the Conspiracy to Unsettle Calcutta in this Movie.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Enzo Ferrari la storia del fondatore della Scuderia Ferrari nel film completo 20.md b/spaces/contluForse/HuggingGPT/assets/Enzo Ferrari la storia del fondatore della Scuderia Ferrari nel film completo 20.md
deleted file mode 100644
index be1ca49f331878fb1e1da3ef31bf1b7b2b0c5cb5..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Enzo Ferrari la storia del fondatore della Scuderia Ferrari nel film completo 20.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-'Run until you can't run anymore. Then run some more. Find a new source of energy and will. Then run even faster.' The words of Scott Jurek, a dominant force - 4d29de3e1b
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Ryuichi Sakamoto Discography 19782012 HOT.md b/spaces/diacanFperku/AutoGPT/Ryuichi Sakamoto Discography 19782012 HOT.md
deleted file mode 100644
index 1689a6eededb98afdf283394996e871f31702cab..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Ryuichi Sakamoto Discography 19782012 HOT.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Ryuichi Sakamoto - Discography (1978-2012) > http://tiurll.com/1m2m3y b28dd56074 ... pop innovator Ryuichi Sakamoto is among the most ... 4d29de3e1b
-
-
-
diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transforms.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transforms.py
deleted file mode 100644
index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transforms.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
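-# Piecewise rational-quadratic spline transforms (neural-spline-flow style): inputs are mapped
-# through a monotonic spline and both the transformed values and log|det J| are returned.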
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
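- # 'linear' tails: outside [-tail_bound, tail_bound] the transform is the identity with zero log-det;
- # inside the interval the rational-quadratic spline below is applied.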
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
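-# Core monotonic spline: bin widths/heights are softmax-normalised, knot derivatives come from a
-# softplus, and the same closed form is evaluated forward or inverted depending on `inverse`.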
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
diff --git a/spaces/dirge/voicevox/test/test_mora_to_text.py b/spaces/dirge/voicevox/test/test_mora_to_text.py
deleted file mode 100644
index 691681dd1b202731eb5dde45e083b4d6c7526743..0000000000000000000000000000000000000000
--- a/spaces/dirge/voicevox/test/test_mora_to_text.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from unittest import TestCase
-
-# TODO: import from voicevox_engine.synthesis_engine.mora
-from voicevox_engine.synthesis_engine.synthesis_engine_base import mora_to_text
-
-
-class TestMoraToText(TestCase):
- def test_voice(self):
- self.assertEqual(mora_to_text("a"), "ア")
- self.assertEqual(mora_to_text("i"), "イ")
- self.assertEqual(mora_to_text("ka"), "カ")
- self.assertEqual(mora_to_text("N"), "ン")
- self.assertEqual(mora_to_text("cl"), "ッ")
- self.assertEqual(mora_to_text("gye"), "ギェ")
- self.assertEqual(mora_to_text("ye"), "イェ")
- self.assertEqual(mora_to_text("wo"), "ウォ")
-
- def test_unvoice(self):
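- # Upper-case vowels denote devoiced (unvoiced) morae; they should map to the same katakana as the voiced forms.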
- self.assertEqual(mora_to_text("A"), "ア")
- self.assertEqual(mora_to_text("I"), "イ")
- self.assertEqual(mora_to_text("kA"), "カ")
- self.assertEqual(mora_to_text("gyE"), "ギェ")
- self.assertEqual(mora_to_text("yE"), "イェ")
- self.assertEqual(mora_to_text("wO"), "ウォ")
-
- def test_invalid_mora(self):
- """変なモーラが来ても例外を投げない"""
- self.assertEqual(mora_to_text("x"), "x")
- self.assertEqual(mora_to_text(""), "")
diff --git a/spaces/doluvor/faster-whisper-webui/src/segments.py b/spaces/doluvor/faster-whisper-webui/src/segments.py
deleted file mode 100644
index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000
--- a/spaces/doluvor/faster-whisper-webui/src/segments.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from typing import Any, Dict, List
-
-import copy
-
-def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1):
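- # Merge segments whose start lies within `merge_window` seconds of the previously processed end,
- # stopping a merge once the combined span would exceed `max_merge_size` seconds, and pad each
- # emitted segment by up to `padding_left`/`padding_right` seconds on either side.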
- result = []
-
- if len(timestamps) == 0:
- return result
- if max_merge_size is None:
- return timestamps
-
- if padding_left is None:
- padding_left = 0
- if padding_right is None:
- padding_right = 0
-
- processed_time = 0
- current_segment = None
-
- for i in range(len(timestamps)):
- next_segment = timestamps[i]
-
- delta = next_segment['start'] - processed_time
-
- # Note that segments can still be longer than the max merge size, they just won't be merged in that case
- if current_segment is None or (merge_window is not None and delta > merge_window) \
- or next_segment['end'] - current_segment['start'] > max_merge_size:
- # Finish the current segment
- if current_segment is not None:
- # Add right padding
- finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right
- current_segment['end'] += finish_padding
- delta -= finish_padding
-
- result.append(current_segment)
-
- # Start a new segment
- current_segment = copy.deepcopy(next_segment)
-
- # Pad the segment
- current_segment['start'] = current_segment['start'] - min(padding_left, delta)
- processed_time = current_segment['end']
-
- else:
- # Merge the segment
- current_segment['end'] = next_segment['end']
- processed_time = current_segment['end']
-
- # Add the last segment
- if current_segment is not None:
- current_segment['end'] += padding_right
- result.append(current_segment)
-
- return result
\ No newline at end of file
diff --git a/spaces/dongsiqie/Code-Interpreter/functional.py b/spaces/dongsiqie/Code-Interpreter/functional.py
deleted file mode 100644
index c28e9c5298996da3319aa9630f8e01470e5a3b1c..0000000000000000000000000000000000000000
--- a/spaces/dongsiqie/Code-Interpreter/functional.py
+++ /dev/null
@@ -1,116 +0,0 @@
-from bot_backend import *
-import base64
-import time
-
-
-def chat_completion(bot_backend: BotBackend):
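- # Forward the accumulated conversation kwargs to the OpenAI ChatCompletion endpoint,
- # after checking that the selected model is enabled for the configured API key.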
- model_choice = bot_backend.gpt_model_choice
- config = bot_backend.config
- kwargs_for_chat_completion = bot_backend.kwargs_for_chat_completion
-
- assert config['model'][model_choice]['available'], f"{model_choice} is not available for your API key"
-
- response = openai.ChatCompletion.create(**kwargs_for_chat_completion)
- return response
-
-
-def add_function_response_to_bot_history(content_to_display, history, unique_id):
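- # Split the interpreter output into text and base64 images: text is appended to the chat history
- # as a "Terminal output" block (flagged if an error occurred), while images are decoded into
- # cache/temp_<unique_id>/ and appended as separate history entries.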
- images, text = [], []
-
- # terminal output
- error_occurred = False
- for mark, out_str in content_to_display:
- if mark in ('stdout', 'execute_result_text', 'display_text'):
- text.append(out_str)
- elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'):
- if 'png' in mark:
- images.append(('png', out_str))
- else:
- images.append(('jpg', out_str))
- elif mark == 'error':
- text.append(delete_color_control_char(out_str))
- error_occurred = True
- text = '\n'.join(text).strip('\n')
- if error_occurred:
- history.append([None, f'❌Terminal output:\n```shell\n\n{text}\n```'])
- else:
- history.append([None, f'✔️Terminal output:\n```shell\n{text}\n```'])
-
- # image output
- for filetype, img in images:
- image_bytes = base64.b64decode(img)
- temp_path = f'cache/temp_{unique_id}'
- if not os.path.exists(temp_path):
- os.mkdir(temp_path)
- path = f'{temp_path}/{hash(time.time())}.{filetype}'
- with open(path, 'wb') as f:
- f.write(image_bytes)
- history.append(
- [
- None,
- f'<img src="file={path}" width="250px"/>'  # assumed inline-image markup pointing at the cached file
- ]
- )
-
-
-def parse_json(function_args: str, finished: bool):
- """
- GPT may generate non-standard JSON format string, which contains '\n' in string value, leading to error when using
- `json.loads()`.
- Here we implement a parser to extract code directly from non-standard JSON string.
- :return: code string if successfully parsed otherwise None
- """
- parser_log = {
- 'met_begin_{': False,
- 'begin_"code"': False,
- 'end_"code"': False,
- 'met_:': False,
- 'met_end_}': False,
- 'met_end_code_"': False,
- "code_begin_index": 0,
- "code_end_index": 0
- }
- try:
- for index, char in enumerate(function_args):
- if char == '{':
- parser_log['met_begin_{'] = True
- elif parser_log['met_begin_{'] and char == '"':
- if parser_log['met_:']:
- if finished:
- parser_log['code_begin_index'] = index + 1
- break
- else:
- if index + 1 == len(function_args):
- return ''
- else:
- temp_code_str = function_args[index + 1:]
- if '\n' in temp_code_str:
- return temp_code_str.strip('\n')
- else:
- return json.loads(function_args + '"}')['code']
- elif parser_log['begin_"code"']:
- parser_log['end_"code"'] = True
- else:
- parser_log['begin_"code"'] = True
- elif parser_log['end_"code"'] and char == ':':
- parser_log['met_:'] = True
- else:
- continue
- if finished:
- for index, char in enumerate(function_args[::-1]):
- back_index = -1 - index
- if char == '}':
- parser_log['met_end_}'] = True
- elif parser_log['met_end_}'] and char == '"':
- parser_log['code_end_index'] = back_index - 1
- break
- else:
- continue
- code_str = function_args[parser_log['code_begin_index']: parser_log['code_end_index'] + 1]
- if '\n' in code_str:
- return code_str.strip('\n')
- else:
- return json.loads(function_args)['code']
-
- except Exception as e:
- return None
diff --git a/spaces/dongyi/MMFS/tools/ci_test.py b/spaces/dongyi/MMFS/tools/ci_test.py
deleted file mode 100644
index 53fd54eb542daaf6387cad32d113f59b7ab5d219..0000000000000000000000000000000000000000
--- a/spaces/dongyi/MMFS/tools/ci_test.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import os
-os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'
-import sys
-sys.path.append('./')
-sys.path.append('../')
-import skimage.io as skio
-import skimage.transform as skt
-import numpy as np
-from data import CustomDataLoader
-from data.super_dataset import SuperDataset
-from models import create_model
-from configs import parse_config
-from utils.util import check_path
-import random
-import argparse
-
-def make_toy_dataset():
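- # Build a tiny synthetic dataset: six random 128x128 images (plus matching .npy copies) for the
- # paired/unpaired folders and 101-point random landmarks, so configs can be smoke-tested without real data.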
- check_path('./toy_dataset')
-
- # paired
- check_path('./toy_dataset/trainpairedA')
- check_path('./toy_dataset/trainpairedB')
-
- # paired numpy
- check_path('./toy_dataset/trainnumpypairedA')
- check_path('./toy_dataset/trainnumpypairedB')
-
- # unpaired
- check_path('./toy_dataset/trainunpairedA')
- check_path('./toy_dataset/trainunpairedB')
-
- # unpaired numpy
- check_path('./toy_dataset/trainnumpyunpairedA')
- check_path('./toy_dataset/trainnumpyunpairedB')
-
-
- # landmark
- check_path('./toy_dataset/trainlmkA')
- check_path('./toy_dataset/trainlmkB')
-
- for i in range(6):
- A0 = np.random.randn(8, 8, 3) * 0.5 + 0.5
- A0[:,:,0] = 0
- A0 = np.clip(A0, 0, 1)
-
- A1 = np.random.randn(8, 8, 3) * 0.5 + 0.5
- A1[:,:,1] = 0
- A1 = np.clip(A1, 0, 1)
-
- A2 = np.random.randn(8, 8, 3) * 0.5 + 0.5
- A2[:,:,2] = 0
- A2 = np.clip(A2, 0, 1)
-
- B = np.random.randn(8, 8, 3) * 0.5 + 0.5
- B = np.clip(B, 0, 1)
-
- A0 = skt.resize(A0, (128, 128))
- A1 = skt.resize(A1, (128, 128))
- A2 = skt.resize(A2, (128, 128))
- B = skt.resize(B, (128, 128))
-
- # paired numpy
- np.save('./toy_dataset/trainnumpypairedA/%d.npy' % i, A0.astype(np.float32))
- np.save('./toy_dataset/trainnumpypairedB/%d.npy' % i, B.astype(np.float32))
-
- # unpaired numpy
- np.save('./toy_dataset/trainnumpyunpairedA/%d.npy' % i, A0.astype(np.float32))
- np.save('./toy_dataset/trainnumpyunpairedB/%d.npy' % i, B.astype(np.float32))
-
- A0 = A0 * 255.0
- A1 = A1 * 255.0
- A2 = A2 * 255.0
- B = B * 255.0
-
- # paired
- skio.imsave('./toy_dataset/trainpairedA/%d.png' % i, A0.astype(np.uint8))
- skio.imsave('./toy_dataset/trainpairedB/%d.png' % i, B.astype(np.uint8))
-
- # unpaired
- skio.imsave('./toy_dataset/trainunpairedA/%d.png' % i, A0.astype(np.uint8))
- skio.imsave('./toy_dataset/trainunpairedB/%d.png' % i, B.astype(np.uint8))
-
- landmark = np.random.rand(101, 2) * 0.5 + 0.5
- landmark = np.clip(landmark, 0, 1)
-
- # landmark
- np.save('./toy_dataset/trainlmkA/%d.npy' % i, landmark.astype(np.float32))
- np.save('./toy_dataset/trainlmkB/%d.npy' % i, landmark.astype(np.float32))
-
-def main(args):
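- # Smoke test: for each YAML config under ./exp, shrink the settings (CPU only, batch size 2,
- # toy dataset), build the dataset, dataloader and model, then run one optimisation pass over the data.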
- make_toy_dataset()
- config_dir = './exp'
- if not os.path.exists(config_dir):
- config_dir = './../exp'
-
- config_files = os.listdir(config_dir)
- if not args.all_tests:
- random.shuffle(config_files)
- config_files = config_files[:2]
-
- for cfg in config_files:
- if (not cfg.endswith('.yaml')) or "example" in cfg:
- continue
- print('Current:', cfg)
-
- try:
- # parse config
- config = parse_config(os.path.join(config_dir, cfg))
-
- config['common']['gpu_ids'] = None
- config['training']['continue_train'] = False
- config['dataset']['n_threads'] = 0
- config['dataset']['batch_size'] = 2
-
- if 'patch_size' in config['dataset']:
- config['dataset']['patch_size'] = 64
- if 'patch_batch_size' in config['dataset']:
- config['dataset']['patch_batch_size'] = 2
-
- config['dataset']['preprocess'] = ['scale_width']
-
- config['dataset']['paired_trainA_folder'] = ''
- config['dataset']['paired_trainB_folder'] = ''
- config['dataset']['paired_train_filelist'] = ''
- config['dataset']['paired_valA_folder'] = ''
- config['dataset']['paired_valB_folder'] = ''
- config['dataset']['paired_val_filelist'] = ''
-
- config['dataset']['unpaired_trainA_folder'] = ''
- config['dataset']['unpaired_trainB_folder'] = ''
- config['dataset']['unpaired_trainA_filelist'] = ''
- config['dataset']['unpaired_trainB_filelist'] = ''
- config['dataset']['unpaired_valA_folder'] = ''
- config['dataset']['unpaired_valB_folder'] = ''
- config['dataset']['unpaired_valA_filelist'] = ''
- config['dataset']['unpaired_valB_filelist'] = ''
-
- config['dataset']['dataroot'] = "./toy_dataset"
-
- # create dataset
- dataset = SuperDataset(config)
- dataset.config = dataset.convert_old_config_to_new()
- dataset.static_data.load_static_data()
- dataset.static_data.create_transforms()
-
- print('The number of training images = %d' % len(dataset))
- dataloader = CustomDataLoader(config, dataset)
-
- # create model
- model = create_model(config)
- model.setup(config)
-
- # train
- for data in dataloader:
- model.set_input(data)
- model.optimize_parameters()
- losses = model.get_current_losses()
- print(losses)
-
- except ImportError as error:
- print(error)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='ci_test')
- parser.add_argument('--all_tests', action='store_true')
- args = parser.parse_args()
- main(args)
diff --git a/spaces/dreambooth-hackathon/leaderboard/app.py b/spaces/dreambooth-hackathon/leaderboard/app.py
deleted file mode 100644
index 44e53336d09e05ec52a4cffecc47124656d64eca..0000000000000000000000000000000000000000
--- a/spaces/dreambooth-hackathon/leaderboard/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb.
-
-# %% auto 0
-__all__ = ['block', 'make_clickable_model', 'make_clickable_user', 'get_submissions']
-
-# %% app.ipynb 0
-import gradio as gr
-import pandas as pd
-from huggingface_hub import list_models
-
-# %% app.ipynb 1
-def make_clickable_model(model_name, link=None):
- if link is None:
- link = "https://huggingface.co/" + model_name
- # Remove user from model name
- return f'<a target="_blank" href="{link}">{model_name.split("/")[-1]}</a>'
-
-
-def make_clickable_user(user_id):
- link = "https://huggingface.co/" + user_id
- return f'<a target="_blank" href="{link}">{user_id}</a>'
-
-# %% app.ipynb 2
-def get_submissions(category):
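- # Query the Hub for models tagged with both "dreambooth-hackathon" and the given category,
- # then build a (User, Model, Likes) table sorted by likes with a Rank column prepended.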
- submissions = list_models(filter=["dreambooth-hackathon", category], full=True)
- leaderboard_models = []
-
- for submission in submissions:
- # user, model, likes
- user_id = submission.id.split("/")[0]
- leaderboard_models.append(
- (
- make_clickable_user(user_id),
- make_clickable_model(submission.id),
- submission.likes,
- )
- )
-
- df = pd.DataFrame(data=leaderboard_models, columns=["User", "Model", "Likes"])
- df.sort_values(by=["Likes"], ascending=False, inplace=True)
- df.insert(0, "Rank", list(range(1, len(df) + 1)))
- return df
-
-# %% app.ipynb 3
-block = gr.Blocks()
-
-with block:
- gr.Markdown(
- """# The DreamBooth Hackathon Leaderboard
-
- Welcome to the leaderboard for the DreamBooth Hackathon! This is a community event where participants **personalise a Stable Diffusion model** by fine-tuning it with a powerful technique called [_DreamBooth_](https://arxiv.org/abs/2208.12242). This technique allows one to implant a subject (e.g. your pet or favourite dish) into the output domain of the model such that it can be synthesized with a _unique identifier_ in the prompt.
-
- This competition is composed of 5 _themes_, where each theme will collect models belonging to one of the categories shown in the tabs below. We'll be **giving out prizes to the top 3 most liked models per theme**, and you're encouraged to submit as many models as you want!
-
- For details on how to participate, check out the hackathon's guide [here](https://github.com/huggingface/diffusion-models-class/blob/main/hackathon/README.md).
- """
- )
- with gr.Tabs():
- with gr.TabItem("Animal 🐨"):
- with gr.Row():
- animal_data = gr.components.Dataframe(
- type="pandas", datatype=["number", "markdown", "markdown", "number"]
- )
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_submissions, inputs=gr.Variable("animal"), outputs=animal_data
- )
- with gr.TabItem("Science 🔬"):
- with gr.Row():
- science_data = gr.components.Dataframe(
- type="pandas", datatype=["number", "markdown", "markdown", "number"]
- )
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_submissions, inputs=gr.Variable("science"), outputs=science_data
- )
- with gr.TabItem("Food 🍔"):
- with gr.Row():
- food_data = gr.components.Dataframe(
- type="pandas", datatype=["number", "markdown", "markdown", "number"]
- )
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_submissions, inputs=gr.Variable("food"), outputs=food_data
- )
- with gr.TabItem("Landscape 🏔"):
- with gr.Row():
- landscape_data = gr.components.Dataframe(
- type="pandas", datatype=["number", "markdown", "markdown", "number"]
- )
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_submissions,
- inputs=gr.Variable("landscape"),
- outputs=landscape_data,
- )
- with gr.TabItem("Wilcard 🔥"):
- with gr.Row():
- wildcard_data = gr.components.Dataframe(
- type="pandas", datatype=["number", "markdown", "markdown", "number"]
- )
- with gr.Row():
- data_run = gr.Button("Refresh")
- data_run.click(
- get_submissions,
- inputs=gr.Variable("wildcard"),
- outputs=wildcard_data,
- )
-
- block.load(get_submissions, inputs=gr.Variable("animal"), outputs=animal_data)
- block.load(get_submissions, inputs=gr.Variable("science"), outputs=science_data)
- block.load(get_submissions, inputs=gr.Variable("food"), outputs=food_data)
- block.load(get_submissions, inputs=gr.Variable("landscape"), outputs=landscape_data)
- block.load(get_submissions, inputs=gr.Variable("wildcard"), outputs=wildcard_data)
-
-
-block.launch()
diff --git a/spaces/ealbinu/automatic-speech-recognition/test_wavs/tal_csasr/README.md b/spaces/ealbinu/automatic-speech-recognition/test_wavs/tal_csasr/README.md
deleted file mode 100644
index bd1d534036b9aa2f98fc42740e67c6c0100415a2..0000000000000000000000000000000000000000
--- a/spaces/ealbinu/automatic-speech-recognition/test_wavs/tal_csasr/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-Files are downloaded from
-https://huggingface.co/luomingshuang/icefall_asr_tal-csasr_pruned_transducer_stateless5/tree/main/test_wavs
diff --git a/spaces/enzostvs/stable-diffusion-tpu/utils/remover.ts b/spaces/enzostvs/stable-diffusion-tpu/utils/remover.ts
deleted file mode 100644
index 27fec70576ac1ad036462521a6b05776b0a60f4e..0000000000000000000000000000000000000000
--- a/spaces/enzostvs/stable-diffusion-tpu/utils/remover.ts
+++ /dev/null
@@ -1,25 +0,0 @@
-import { deleteFile } from "../node_modules/@huggingface/hub/dist";
-import type { RepoDesignation, Credentials } from "../node_modules/@huggingface/hub/dist";
-
-export const RemoverDataset = async (name: string) => {
- const repo: RepoDesignation = { type: "dataset", name: "enzostvs/stable-diffusion-tpu-generations" };
- const credentials: Credentials = { accessToken: process.env.HF_TOKEN as string };
-
- const res: any = await deleteFile({
- repo,
- credentials,
- path: `images/${name}.png`,
- });
-
- if (res?.error) return {
- status: 500,
- ok: false,
- message: res?.error
- };
-
- return {
- status: 200,
- ok: true,
- };
-
-}
\ No newline at end of file
diff --git a/spaces/erbanku/gpt-academic/Dockerfile b/spaces/erbanku/gpt-academic/Dockerfile
deleted file mode 100644
index da5053dbc7fc0accbd7b10fab87ca72feced8fe8..0000000000000000000000000000000000000000
--- a/spaces/erbanku/gpt-academic/Dockerfile
+++ /dev/null
@@ -1,20 +0,0 @@
-# This Dockerfile is for building an environment without local models; if you need local models such as ChatGLM, see docs/Dockerfile+ChatGLM
-# How to build: first edit `config.py`, then run: docker build -t gpt-academic .
-# How to run: docker run --rm -it --net=host gpt-academic
-FROM python:3.11
-
-RUN echo '[global]' > /etc/pip.conf && \
- echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
- echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
-
-
-WORKDIR /gpt
-COPY requirements.txt .
-RUN pip3 install -r requirements.txt
-
-COPY . .
-
-# Optional step: warm up the modules
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-CMD ["python3", "-u", "main.py"]
diff --git a/spaces/evaluate-metric/sari/README.md b/spaces/evaluate-metric/sari/README.md
deleted file mode 100644
index eea74d817b8b61a34fe71cf275bcde6e2c547f58..0000000000000000000000000000000000000000
--- a/spaces/evaluate-metric/sari/README.md
+++ /dev/null
@@ -1,146 +0,0 @@
----
-title: SARI
-emoji: 🤗
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-tags:
-- evaluate
-- metric
-description: >-
- SARI is a metric used for evaluating automatic text simplification systems.
- The metric compares the predicted simplified sentences against the reference
- and the source sentences. It explicitly measures the goodness of words that are
- added, deleted and kept by the system.
- Sari = (F1_add + F1_keep + P_del) / 3
- where
- F1_add: n-gram F1 score for add operation
- F1_keep: n-gram F1 score for keep operation
- P_del: n-gram precision score for delete operation
- n = 4, as in the original paper.
-
- This implementation is adapted from Tensorflow's tensor2tensor implementation [3].
- It has two differences with the original GitHub [1] implementation:
- (1) Defines 0/0=1 instead of 0 to give higher scores for predictions that match
- a target exactly.
- (2) Fixes an alleged bug [2] in the keep score computation.
- [1] https://github.com/cocoxu/simplification/blob/master/SARI.py
- (commit 0210f15)
- [2] https://github.com/cocoxu/simplification/issues/6
- [3] https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py
----
-
-# Metric Card for SARI
-
-
-## Metric description
-SARI (***s**ystem output **a**gainst **r**eferences and against the **i**nput sentence*) is a metric used for evaluating automatic text simplification systems.
-
-The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system.
-
-SARI can be computed as:
-
-`sari = ( F1_add + F1_keep + P_del) / 3`
-
-where
-
-`F1_add` is the n-gram F1 score for add operations
-
-`F1_keep` is the n-gram F1 score for keep operations
-
-`P_del` is the n-gram precision score for delete operations
-
-The n-gram order, `n`, is equal to 4, as in the original paper.
-
-This implementation is adapted from [Tensorflow's tensor2tensor implementation](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py).
-It has two differences with the [original GitHub implementation](https://github.com/cocoxu/simplification/blob/master/SARI.py):
-
-1) It defines 0/0=1 instead of 0 to give higher scores for predictions that match a target exactly.
-2) It fixes an [alleged bug](https://github.com/cocoxu/simplification/issues/6) in the keep score computation.
-
-
-
-## How to use
-
-The metric takes 3 inputs: sources (a list of source sentence strings), predictions (a list of predicted sentence strings) and references (a list of lists of reference sentence strings)
-
-```python
-from evaluate import load
-sari = load("sari")
-sources=["About 95 species are currently accepted."]
-predictions=["About 95 you now get in."]
-references=[["About 95 species are currently known.","About 95 species are now accepted.","95 species are now accepted."]]
-sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
-```
-## Output values
-
-This metric outputs a dictionary with the SARI score:
-
-```
-print(sari_score)
-{'sari': 26.953601953601954}
-```
-
-The range of values for the SARI score is between 0 and 100 -- the higher the value, the better the performance of the model being evaluated, with a SARI of 100 being a perfect score.
-
-### Values from popular papers
-
-The [original paper that proposes the SARI metric](https://aclanthology.org/Q16-1029.pdf) reports scores ranging from 26 to 43 for different simplification systems and different datasets. They also find that the metric ranks all of the simplification systems and human references in the same order as the human assessment used as a comparison, and that it correlates reasonably with human judgments.
-
-More recent SARI scores for text simplification can be found on leaderboards for datasets such as [TurkCorpus](https://paperswithcode.com/sota/text-simplification-on-turkcorpus) and [Newsela](https://paperswithcode.com/sota/text-simplification-on-newsela).
-
-## Examples
-
-Perfect match between prediction and reference:
-
-```python
-from evaluate import load
-sari = load("sari")
-sources=["About 95 species are currently accepted ."]
-predictions=["About 95 species are currently accepted ."]
-references=[["About 95 species are currently accepted ."]]
-sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
-print(sari_score)
-{'sari': 100.0}
-```
-
-Partial match between prediction and reference:
-
-```python
-from evaluate import load
-sari = load("sari")
-sources=["About 95 species are currently accepted ."]
-predictions=["About 95 you now get in ."]
-references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]
-sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
-print(sari_score)
-{'sari': 26.953601953601954}
-```
-
-## Limitations and bias
-
-SARI is a valuable measure for comparing different text simplification systems as well as one that can assist the iterative development of a system.
-
-However, while the [original paper presenting SARI](https://aclanthology.org/Q16-1029.pdf) states that it captures "the notion of grammaticality and meaning preservation", this is a difficult claim to empirically validate.
-
-## Citation
-
-```bibtex
-@inproceedings{xu-etal-2016-optimizing,
-title = {Optimizing Statistical Machine Translation for Text Simplification},
-authors={Xu, Wei and Napoles, Courtney and Pavlick, Ellie and Chen, Quanze and Callison-Burch, Chris},
-journal = {Transactions of the Association for Computational Linguistics},
-volume = {4},
-year={2016},
-url = {https://www.aclweb.org/anthology/Q16-1029},
-pages = {401--415},
-}
-```
-
-## Further References
-
-- [NLP Progress -- Text Simplification](http://nlpprogress.com/english/simplification.html)
-- [Hugging Face Hub -- Text Simplification Models](https://huggingface.co/datasets?filter=task_ids:text-simplification)
diff --git a/spaces/facebook/CutLER/README.md b/spaces/facebook/CutLER/README.md
deleted file mode 100644
index ee21847314e8ece8de6e9b5ed02e6eb95926fa67..0000000000000000000000000000000000000000
--- a/spaces/facebook/CutLER/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CutLER
-emoji: 🌖
-colorFrom: yellow
-colorTo: green
-sdk: docker
-pinned: false
-license: mit
-suggested_hardware: t4-small
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/fatiXbelha/sd/Download Once Upon a Time (2017) - A Journey Through Time Space and Reincarnation.md b/spaces/fatiXbelha/sd/Download Once Upon a Time (2017) - A Journey Through Time Space and Reincarnation.md
deleted file mode 100644
index 9a8e69924e377446ad68df63b6d2dd04e4b0cf8b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Once Upon a Time (2017) - A Journey Through Time Space and Reincarnation.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
Download Once Upon a Time (2017 Full Movie)
-
If you are looking for a romantic fantasy movie that will take you to a magical world of love, adventure, and drama, then you should download Once Upon a Time (2017 Full Movie). This movie is based on a popular Chinese novel and tells the story of a goddess who falls in love with a human while undergoing a trial in the mortal world. However, their love is threatened by an old enemy who wants to destroy everything that they hold dear. In this article, we will tell you more about this movie and how you can download it easily and legally.
-
Introduction
-
What is Once Upon a Time?
-
Once Upon a Time is a Chinese fantasy romance movie that was released in 2017. It is directed by Zhao Xiaoding and Anthony LaMolinara, and stars Liu Yifei and Yang Yang as the main leads. The movie is based on the novel Three Lives Three Worlds, Ten Miles Peach Blossoms by TangQi Gongzi, which is also adapted into a popular TV series called Eternal Love (2017). The movie follows the story of Bai Qian, a goddess who has to undergo a trial in the mortal world to become a high goddess. There, she meets Ye Hua, a crown prince of the heavenly realm who looks exactly like her former lover who died three hundred years ago. As they fall in love again, they have to face many obstacles and enemies who want to separate them or harm them.
There are many reasons why you should watch Once Upon a Time if you are a fan of fantasy romance movies. Here are some of them:
-
-
The movie has stunning visuals and effects that will make you feel like you are in a fairy tale. The movie features beautiful landscapes, costumes, creatures, and magic that will captivate your eyes and imagination.
-
The movie has an engaging plot and characters that will make you feel invested in their fate. The movie has many twists and turns that will keep you on the edge of your seat. The movie also has many emotional moments that will make you laugh, cry, or swoon.
-
The movie has a great cast and chemistry that will make you fall in love with them. Liu Yifei and Yang Yang are both talented and attractive actors who portray their roles with passion and charm. They have a great chemistry that will make you root for their love story.
-
-
How to download Once Upon a Time?
-
Option 1: Rent or buy from online platforms
-
Pros and cons of this option
-
One way to download Once Upon a Time is to rent or buy it from online platforms that offer legal and high-quality downloads. Some of the pros of this option are:
-
-
You can support the creators and distributors of the movie by paying for their work.
-
You can enjoy the movie in HD quality and with subtitles in your preferred language.
-
You can watch the movie anytime and anywhere without internet connection.
-
-
Some of the cons of this option are:
-
-
You have to pay for the movie, which may not be affordable for everyone.
-
You may not be able to find the movie on your preferred platform or in your region.
-
You may have limited time to watch the movie if you rent it instead of buying it.
-
-
List of platforms and prices
-
Here are some of the online platforms where you can rent or buy Once Upon a Time and their prices as of June 2023:
-
-
Platform
Option 2: Stream for free from Freevee
-
Pros and cons of this option
-
Another way to watch Once Upon a Time is to stream it for free from Freevee, a free video streaming service that includes on-demand access to thousands of movies and TV shows, as well as virtual live streaming channels. Some of the pros of this option are:
-
-
You don't have to pay anything to watch the movie, which is great for budget-conscious viewers.
-
You can discover other content that you might like on Freevee, such as original shows, documentaries, and sports.
-
You can watch the movie on various devices, such as your computer, smartphone, tablet, or smart TV.
-
-
Some of the cons of this option are:
-
watch once upon a time 2017 online free
-once upon a time 2017 full movie english subtitles
-once upon a time 2017 chinese movie download
-stream once upon a time 2017 hd
-once upon a time 2017 film review
-once upon a time 2017 rotten tomatoes
-once upon a time 2017 imdb
-once upon a time 2017 cast and crew
-once upon a time 2017 trailer youtube
-once upon a time 2017 netflix
-once upon a time 2017 amazon prime
-once upon a time 2017 hulu
-once upon a time 2017 disney plus
-once upon a time 2017 dvd release date
-once upon a time 2017 blu ray
-once upon a time 2017 soundtrack
-once upon a time 2017 box office
-once upon a time 2017 awards
-once upon a time 2017 novel adaptation
-once upon a time 2017 behind the scenes
-once upon a time 2017 bloopers
-once upon a time 2017 deleted scenes
-once upon a time 2017 director's cut
-once upon a time 2017 fanfiction
-once upon a time 2017 quotes
-once upon a time 2017 poster
-once upon a time 2017 wallpaper
-once upon a time 2017 cosplay
-once upon a time 2017 merchandise
-once upon a time 2017 sequel
-once upon a time 2017 prequel
-once upon a time 2017 spin off
-once upon a time 2017 remake
-once upon a time 2017 reboot
-once upon a time 2017 crossover
-once upon a time 2017 parody
-once upon a time 2017 trivia
-once upon a time 2017 easter eggs
-once upon a time 2017 references
-once upon a time 2017 analysis
-once upon a time 2017 symbolism
-once upon a time 2017 themes
-once upon a time 2017 genre
-once upon a time 2017 rating
-once upon a time 2017 age group
-once upon a time 2017 audience reaction
-once upon a time 2017 critics opinion
-once upon a time 2017 controversy
-
-
You have to watch ads during the movie, which may interrupt your viewing experience.
-
You need a stable internet connection to stream the movie, which may not be available everywhere.
-
You may not be able to download the movie for offline viewing, which limits your flexibility.
-
-
How to access Freevee and watch Once Upon a Time
-
Here are the steps to access Freevee and watch Once Upon a Time:
-
-
Go to the Freevee website or download the Freevee app on your device. You can also access Freevee through the Prime Video app or website if you have an Amazon account.
-
Create a free account or sign in with your existing Amazon account. You don't need a Prime membership to use Freevee.
-
Search for Once Upon a Time in the search bar or browse through the categories and genres.
-
Select the movie and click on play. Enjoy the movie with ads.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Once Upon a Time is a romantic fantasy movie that will take you to a magical world of love, adventure, and drama. It is based on a popular Chinese novel and stars Liu Yifei and Yang Yang as a goddess and a prince who have to overcome many challenges to be together. You can download or stream the movie from various online platforms, such as renting or buying it from Amazon, iTunes, Google Play, or YouTube, or watching it for free from Freevee. Each option has its own pros and cons that you should consider before choosing one.
-
Call to action and recommendation
-
If you are interested in watching Once Upon a Time, we recommend that you try Freevee first. It is a free and legal way to watch the movie without spending any money. You can also explore other content that Freevee offers, such as original shows, documentaries, and sports. However, if you prefer to watch the movie without ads or with better quality, you can also rent or buy it from other online platforms. The choice is yours. Whatever you decide, we hope that you enjoy Once Upon a Time and have a wonderful time watching it.
-
Frequently Asked Questions
-
Is Once Upon a Time available on Netflix?
-
No, Once Upon a Time is not available on Netflix in the U.S., the UK, or Germany. However, you can watch it on other online platforms, such as Amazon, iTunes, Google Play, YouTube, or Freevee.
-
Is Once Upon a Time related to the TV series Once Upon a Time?
-
No, Once Upon a Time is not related to the TV series Once Upon a Time that aired on ABC from 2011 to 2018. They have different stories, characters, and settings. The only thing they have in common is the title.
-
What is the difference between Once Upon a Time and Eternal Love?
-
Once Upon a Time and Eternal Love are both adaptations of the same novel by TangQi Gongzi called Three Lives Three Worlds, Ten Miles Peach Blossoms. However, they have different formats, casts, and interpretations. Once Upon a Time is a movie that focuses on the main love story between Bai Qian and Ye Hua. Eternal Love is a TV series that explores more details and subplots of the novel.
-
How many languages does Once Upon a Time have subtitles in?
-
Once Upon a Time has subtitles in various languages depending on the platform you use to watch it. For example, on Amazon Prime Video, you can choose from English, Spanish, French, German, Italian, Portuguese, Dutch, Polish, Turkish, Arabic, Hindi, Tamil, Telugu, Indonesian, Malayalam, and Kannada subtitles.
-
How can I contact the makers of Once Upon a Time?
-
If you have any questions, feedback, or suggestions for the makers of Once Upon a Time, you can contact them through their official website, social media accounts, or email address. Here are some of the ways to reach them:
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chat3/crazy_functions/test_project/python/dqn/policies.py b/spaces/fb700/chat3/crazy_functions/test_project/python/dqn/policies.py
deleted file mode 100644
index 4ecf39a5fc04b24ad1b809232b186728366987b6..0000000000000000000000000000000000000000
--- a/spaces/fb700/chat3/crazy_functions/test_project/python/dqn/policies.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from typing import Any, Dict, List, Optional, Type
-
-import gym
-import torch as th
-from torch import nn
-
-from stable_baselines3.common.policies import BasePolicy, register_policy
-from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
-from stable_baselines3.common.type_aliases import Schedule
-
-
-class QNetwork(BasePolicy):
- """
- Action-Value (Q-Value) network for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- features_extractor: nn.Module,
- features_dim: int,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- normalize_images: bool = True,
- ):
- super(QNetwork, self).__init__(
- observation_space,
- action_space,
- features_extractor=features_extractor,
- normalize_images=normalize_images,
- )
-
- if net_arch is None:
- net_arch = [64, 64]
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.features_extractor = features_extractor
- self.features_dim = features_dim
- self.normalize_images = normalize_images
- action_dim = self.action_space.n # number of actions
- q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
- self.q_net = nn.Sequential(*q_net)
-
- def forward(self, obs: th.Tensor) -> th.Tensor:
- """
- Predict the q-values.
-
- :param obs: Observation
- :return: The estimated Q-Value for each action.
- """
- return self.q_net(self.extract_features(obs))
-
- def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
- q_values = self.forward(observation)
- # Greedy action
- action = q_values.argmax(dim=1).reshape(-1)
- return action
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_arch,
- features_dim=self.features_dim,
- activation_fn=self.activation_fn,
- features_extractor=self.features_extractor,
- )
- )
- return data
-
-
-class DQNPolicy(BasePolicy):
- """
- Policy class with Q-Value Net and target net for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param features_extractor_kwargs: Keyword arguments
- to pass to the features extractor.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(DQNPolicy, self).__init__(
- observation_space,
- action_space,
- features_extractor_class,
- features_extractor_kwargs,
- optimizer_class=optimizer_class,
- optimizer_kwargs=optimizer_kwargs,
- )
-
- if net_arch is None:
- if features_extractor_class == FlattenExtractor:
- net_arch = [64, 64]
- else:
- net_arch = []
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.normalize_images = normalize_images
-
- self.net_args = {
- "observation_space": self.observation_space,
- "action_space": self.action_space,
- "net_arch": self.net_arch,
- "activation_fn": self.activation_fn,
- "normalize_images": normalize_images,
- }
-
- self.q_net, self.q_net_target = None, None
- self._build(lr_schedule)
-
- def _build(self, lr_schedule: Schedule) -> None:
- """
- Create the network and the optimizer.
-
- :param lr_schedule: Learning rate schedule
- lr_schedule(1) is the initial learning rate
- """
-
- self.q_net = self.make_q_net()
- self.q_net_target = self.make_q_net()
- self.q_net_target.load_state_dict(self.q_net.state_dict())
-
- # Setup optimizer with initial learning rate
- self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
-
- def make_q_net(self) -> QNetwork:
- # Make sure we always have separate networks for features extractors etc
- net_args = self._update_features_extractor(self.net_args, features_extractor=None)
- return QNetwork(**net_args).to(self.device)
-
- def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self._predict(obs, deterministic=deterministic)
-
- def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self.q_net._predict(obs, deterministic=deterministic)
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_args["net_arch"],
- activation_fn=self.net_args["activation_fn"],
- lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
- optimizer_class=self.optimizer_class,
- optimizer_kwargs=self.optimizer_kwargs,
- features_extractor_class=self.features_extractor_class,
- features_extractor_kwargs=self.features_extractor_kwargs,
- )
- )
- return data
-
-
-MlpPolicy = DQNPolicy
-
-
-class CnnPolicy(DQNPolicy):
- """
- Policy class for DQN when using images as input.
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(CnnPolicy, self).__init__(
- observation_space,
- action_space,
- lr_schedule,
- net_arch,
- activation_fn,
- features_extractor_class,
- features_extractor_kwargs,
- normalize_images,
- optimizer_class,
- optimizer_kwargs,
- )
-
-
-register_policy("MlpPolicy", MlpPolicy)
-register_policy("CnnPolicy", CnnPolicy)
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/losses.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/losses.py
deleted file mode 100644
index 87aeaa107af4d53f5a6132b3739d5cafdcded7fc..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/losses.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import torch
-from torch import nn
-
-
-def get_loss(name):
- if name == "cosface":
- return CosFace()
- elif name == "arcface":
- return ArcFace()
- else:
- raise ValueError()
-
-
-class CosFace(nn.Module):
- def __init__(self, s=64.0, m=0.40):
- super(CosFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine[index] -= m_hot
- ret = cosine * self.s
- return ret
-
-
-class ArcFace(nn.Module):
- def __init__(self, s=64.0, m=0.5):
- super(ArcFace, self).__init__()
- self.s = s
- self.m = m
-
- def forward(self, cosine: torch.Tensor, label):
- index = torch.where(label != -1)[0]
- m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
- m_hot.scatter_(1, label[index, None], self.m)
- cosine.acos_()
- cosine[index] += m_hot
- cosine.cos_().mul_(self.s)
- return cosine
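Both margin losses above expect pre-computed cosine similarities and integer labels, where `-1` marks rows that should be left untouched; they return margin-adjusted logits already scaled by `s`, ready for a softmax cross-entropy. A minimal driving sketch, assuming the file above is importable as `losses`:

```python
import torch
import torch.nn.functional as F

from losses import get_loss  # import path is an assumption

margin = get_loss("arcface")                      # or "cosface"
cosine = torch.randn(4, 1000).clamp(-0.99, 0.99)  # cos(theta) between embeddings and class weights
labels = torch.tensor([3, 42, 591, 7])            # a -1 entry would skip the margin for that row
logits = margin(cosine, labels)                   # additive angular margin, then scaled by s=64
loss = F.cross_entropy(logits, labels)
```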
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/discriminator.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/discriminator.py
deleted file mode 100644
index 16bf3722c7f2e35cdc9bd177a33ed0975e67200d..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/models/discriminator.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from torch import nn
-
-
-class LatentCodesDiscriminator(nn.Module):
- def __init__(self, style_dim, n_mlp):
- super().__init__()
-
- self.style_dim = style_dim
-
- layers = []
- for i in range(n_mlp-1):
- layers.append(
- nn.Linear(style_dim, style_dim)
- )
- layers.append(nn.LeakyReLU(0.2))
- layers.append(nn.Linear(512, 1))
- self.mlp = nn.Sequential(*layers)
-
- def forward(self, w):
- return self.mlp(w)
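Note that `LatentCodesDiscriminator` hard-codes its final layer as `nn.Linear(512, 1)`, so it implicitly assumes `style_dim == 512` (the usual StyleGAN latent width). A quick instantiation sketch under that assumption, with a hypothetical import path:

```python
import torch
from discriminator import LatentCodesDiscriminator  # import path is an assumption

disc = LatentCodesDiscriminator(style_dim=512, n_mlp=4)
w = torch.randn(8, 512)   # a batch of latent codes in W space
scores = disc(w)          # shape (8, 1); real/fake logits for the latent-space adversary
```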
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Neon Pink Lips Light Effect and Backgrounds.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Neon Pink Lips Light Effect and Backgrounds.md
deleted file mode 100644
index 036c4af167ec8ad4baa2740896eb0d0d3b4178cb..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Neon Pink Lips Light Effect and Backgrounds.md
+++ /dev/null
@@ -1,165 +0,0 @@
-
-
How to Download Pink Lips: A Guide to Achieve the Perfect Pout
-
Do you want to have pink lips that look natural, healthy, and attractive? If so, you are not alone. Many people view pink lips as a sign of beauty or health, and they can enhance your smile and confidence. However, not everyone is born with pink lips, and some factors can cause your lips to lose their original hue or become discolored. Fortunately, there are ways to get pink lips naturally at home or by using some amazing apps that can change your lip color in photos. In this article, we will show you how to download pink lips and achieve the perfect pout.
-
What are Pink Lips and Why Do They Matter?
-
Pink lips are lips that have a light pink hue that matches your skin tone and complexion. They are usually smooth, hydrated, and free from cracks or sores. Pink lips can make you look younger, fresher, and more vibrant. They can also complement your makeup and outfit, and make your teeth look whiter.
Having pink lips can offer you several benefits, such as:
-
-
Improving your appearance and self-esteem
-
Boosting your mood and happiness
-
Attracting more attention and compliments
-
Expressing your personality and style
-
Protecting your lips from dryness and damage
-
-
The Causes of Lip Discoloration
-
However, not everyone has pink lips naturally, and some factors can cause your lips to change color or become darker. Some of these factors include:
-
-
Genetics and skin tone
-
Sun exposure and sunspots
-
Smoking and nicotine stains
-
Caffeine and alcohol consumption
-
Dehydration and dryness
-
Allergies and infections
-
Anemia and low blood oxygen levels
-
Certain medications and medical conditions
-
-
How to Get Pink Lips Naturally at Home
-
If you want to get pink lips naturally at home, you can try some simple home remedies and lip care techniques that can help you exfoliate, moisturize, nourish, and brighten your lips. Here are some of the best ways to get pink lips naturally at home:
-
Lip Scrubs
-
Lip scrubs are a great way to remove dead skin cells, dry patches, and stains from your lips. They can also stimulate blood circulation, which can make your lips appear pinker. You can use a lip scrub once or twice a week to gently massage your lips. You can buy a lip scrub from a store or online, or you can make your own by mixing sugar or salt with an oil such as coconut or almond oil.
-
Lip Massage
-
Lip massage can also boost blood flow to your lips, which can make them look plumper and rosier. You can use your fingers or a soft toothbrush to gently massage your lips for a few minutes every day. You can also use a lip oil or balm to lubricate your lips and prevent them from drying out.
-
Lip Masks
-
Lip masks are another way to hydrate, nourish, and brighten your lips. They can also help reduce pigmentation, inflammation, and chapping. You can use a lip mask once or twice a week to leave it on your lips for 10 to 15 minutes. You can buy a lip mask from a store or online, or you can make your own by using natural ingredients such as honey or lemon juice.
-
Lip Balms
-
Lip balms are essential for keeping your lips moisturized, soft, and smooth. They can also protect your lips from sun damage, wind, cold, and pollution. You can use a lip balm several times a day to apply it on your lips. You can buy a lip balm from a store or online, or you can make your own by using natural ingredients such as beeswax or shea butter. You can also choose a lip balm that has a tint of pink or red to add some color to your lips.
How to Use Pink Lips Apps to Change Your Lip Color in Photos
-
If you want to change your lip color in photos, you can use some amazing apps that can help you download pink lips in seconds. These apps can let you edit your photos and apply different shades of pink to your lips. You can also adjust the intensity, brightness, and contrast of the color to suit your preference. Here are some of the best pink lips apps for iPhone and Android:
-
The Best Pink Lips Apps for iPhone and Android
-
| App Name | Description | Rating | Price |
| --- | --- | --- | --- |
| Pink Lips Photo Editor | This app allows you to change your lip color in photos with various pink shades. You can also add stickers, filters, and text to your photos. | 4.5/5 | Free |
| Pink Lips Makeup Camera | This app lets you try on different pink lipsticks in real time with your camera. You can also apply other makeup effects such as eyelashes, eyeshadow, and blush. | 4.4/5 | Free |
| Pink Lips Photo Booth | This app enables you to create fun and funny photos with pink lips. You can also choose from different styles of pink lips such as glossy, matte, glitter, and neon. | 4.3/5 | $0.99 |
| Pink Lips Beauty Plus | This app helps you enhance your beauty and glamour with pink lips. You can also use other features such as skin smoothing, teeth whitening, and eye enlargement. | 4.2/5 | Free |
| Pink Lips Photo Collage | This app allows you to create stunning photo collages with pink lips. You can also customize your collages with backgrounds, frames, stickers, and fonts. | 4.1/5 | Free |
-
How to Use a Pink Lips App in 4 Easy Steps
-
To use a pink lips app to change your lip color in photos, you can follow these simple steps:
-
-
Download and install the app of your choice from the App Store or Google Play Store.
-
Open the app and select a photo from your gallery or take a new one with your camera.
-
Choose a pink shade that matches your skin tone and mood from the app's palette.
-
Adjust the color's intensity, brightness, and contrast to make it look natural and realistic.
-
Save and share your photo with your friends and family.
-
-
Conclusion
-
Pink lips are a desirable feature that can make you look more attractive, healthy, and confident. However, not everyone has pink lips naturally, and some factors can cause your lips to become darker or discolored. Fortunately, there are ways to get pink lips naturally at home or by using some amazing apps that can change your lip color in photos. In this article, we have shown you how to download pink lips and achieve the perfect pout. We hope you enjoyed this article and found it useful. If you have any questions or comments, please feel free to leave them below.
-
FAQs
-
Q: How long does it take to get pink lips naturally?
-
A: It depends on the cause of your lip discoloration and the method you use to get pink lips naturally. Generally, it may take a few weeks to a few months of consistent lip care and home remedies to see noticeable results.
-
Q: Are pink lips permanent?
-
A: No, pink lips are not permanent. They can fade or change color over time due to aging, sun exposure, smoking, diet, or other factors. Therefore, you need to maintain your lip care routine and protect your lips from harmful factors to keep them pink and healthy.
-
Q: Can I use lipstick or lip gloss to get pink lips?
-
A: Yes, you can use lipstick or lip gloss to get pink lips temporarily. However, you should choose a product that is suitable for your skin tone and lip condition. You should also avoid products that contain harsh chemicals, artificial colors, or fragrances that can irritate or dry out your lips. You should also remove your lipstick or lip gloss before going to bed and apply a lip balm to moisturize your lips.
-
Q: What are the side effects of using pink lips apps?
-
A: Pink lips apps are generally safe and fun to use, as long as you use them responsibly and moderately. However, some possible side effects of using pink lips apps are:
-
-
Creating unrealistic expectations or dissatisfaction with your natural lip color
-
Spending too much time or money on editing your photos
-
Exposing your personal information or photos to hackers or scammers
-
Receiving negative feedback or criticism from others
-
-
Q: How can I prevent my lips from getting darker or discolored?
-
A: Some of the best ways to prevent your lips from getting darker or discolored are:
-
-
Avoid smoking, drinking, or eating foods that can stain your lips
-
Drink plenty of water and eat healthy foods that can nourish your lips
-
Wear a lip balm with SPF and avoid excessive sun exposure
-
Exfoliate and moisturize your lips regularly
-
Consult a doctor if you have any medical conditions or allergies that can affect your lips
-
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/readme.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/readme.md
deleted file mode 100644
index 9a1996b17e0de6854dd1cf10c5f2ee642e494085..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/node_modules/ms/readme.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# ms
-
-[](https://travis-ci.org/zeit/ms)
-[](https://spectrum.chat/zeit)
-
-Use this package to easily convert various time formats to milliseconds.
-
-## Examples
-
-```js
-ms('2 days') // 172800000
-ms('1d') // 86400000
-ms('10h') // 36000000
-ms('2.5 hrs') // 9000000
-ms('2h') // 7200000
-ms('1m') // 60000
-ms('5s') // 5000
-ms('1y') // 31557600000
-ms('100') // 100
-ms('-3 days') // -259200000
-ms('-1h') // -3600000
-ms('-200') // -200
-```
-
-### Convert from Milliseconds
-
-```js
-ms(60000) // "1m"
-ms(2 * 60000) // "2m"
-ms(-3 * 60000) // "-3m"
-ms(ms('10 hours')) // "10h"
-```
-
-### Time Format Written-Out
-
-```js
-ms(60000, { long: true }) // "1 minute"
-ms(2 * 60000, { long: true }) // "2 minutes"
-ms(-3 * 60000, { long: true }) // "-3 minutes"
-ms(ms('10 hours'), { long: true }) // "10 hours"
-```
-
-## Features
-
-- Works both in [Node.js](https://nodejs.org) and in the browser
-- If a number is supplied to `ms`, a string with a unit is returned
-- If a string that contains the number is supplied, it returns it as a number (e.g.: it returns `100` for `'100'`)
-- If you pass a string with a number and a valid unit, the number of equivalent milliseconds is returned
-
-## Related Packages
-
-- [ms.macro](https://github.com/knpwrs/ms.macro) - Run `ms` as a macro at build-time.
-
-## Caught a Bug?
-
-1. [Fork](https://help.github.com/articles/fork-a-repo/) this repository to your own GitHub account and then [clone](https://help.github.com/articles/cloning-a-repository/) it to your local device
-2. Link the package to the global module directory: `npm link`
-3. Within the module you want to test your local development instance of ms, just link it to the dependencies: `npm link ms`. Instead of the default one from npm, Node.js will now use your clone of ms!
-
-As always, you can run the tests using: `npm test`
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/test/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/test/index.js
deleted file mode 100644
index afeb4ddb2921824491502d0f68a0a3a44cf28aa1..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/setprototypeof/test/index.js
+++ /dev/null
@@ -1,24 +0,0 @@
-'use strict'
-/* eslint-env mocha */
-/* eslint no-proto: 0 */
-var assert = require('assert')
-var setPrototypeOf = require('..')
-
-describe('setProtoOf(obj, proto)', function () {
- it('should merge objects', function () {
- var obj = { a: 1, b: 2 }
- var proto = { b: 3, c: 4 }
- var mergeObj = setPrototypeOf(obj, proto)
-
- if (Object.getPrototypeOf) {
- assert.strictEqual(Object.getPrototypeOf(obj), proto)
- } else if ({ __proto__: [] } instanceof Array) {
- assert.strictEqual(obj.__proto__, proto)
- } else {
- assert.strictEqual(obj.a, 1)
- assert.strictEqual(obj.b, 2)
- assert.strictEqual(obj.c, 4)
- }
- assert.strictEqual(mergeObj, obj)
- })
-})
diff --git a/spaces/fightglory/YoloV4-Webcam/xml_to_txt.py b/spaces/fightglory/YoloV4-Webcam/xml_to_txt.py
deleted file mode 100644
index 6752fbfec4c60d1dcb90bf81446a6e66364dcd5f..0000000000000000000000000000000000000000
--- a/spaces/fightglory/YoloV4-Webcam/xml_to_txt.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import xml.etree.ElementTree as ET
-import os
-from glob import glob
-
-XML_PATH = './dataset/xml'
-CLASSES_PATH = './class_names/classes.txt'
-TXT_PATH = './dataset/txt/anno.txt'
-
-
-'''loads the classes'''
-def get_classes(classes_path):
- with open(classes_path) as f:
- class_names = f.readlines()
- class_names = [c.strip() for c in class_names]
- return class_names
-
-
-classes = get_classes(CLASSES_PATH)
-assert len(classes) > 0, 'no class names detected!'
-print(f'num classes: {len(classes)}')
-
-# output file
-list_file = open(TXT_PATH, 'w')
-
-for path in glob(os.path.join(XML_PATH, '*.xml')):
- in_file = open(path)
-
- # Parse .xml file
- tree = ET.parse(in_file)
- root = tree.getroot()
- # Write object information to .txt file
- file_name = root.find('filename').text
- print(file_name)
- list_file.write(file_name)
- for obj in root.iter('object'):
- cls = obj.find('name').text
- cls_id = classes.index(cls)
- xmlbox = obj.find('bndbox')
- b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text), int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text))
- list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))
- list_file.write('\n')
-list_file.close()
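For reference, the converter above emits one line per XML file into `anno.txt`: the image filename, then a space-separated `xmin,ymin,xmax,ymax,class_id` group per bounding box. A hypothetical input/output pairing (filenames, coordinates and class order are made up for illustration):

```
# With classes.txt containing "dog" then "person", an XML file for dog_001.jpg
# holding one dog box and one person box becomes this line in anno.txt:
dog_001.jpg 48,240,195,371,0 8,12,352,498,1
```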
diff --git a/spaces/fishaudio/fish-diffusion/configs/CSD.py b/spaces/fishaudio/fish-diffusion/configs/CSD.py
deleted file mode 100644
index d38a1e1ef44a1a26599a6c3ffacee5025f6b3e30..0000000000000000000000000000000000000000
--- a/spaces/fishaudio/fish-diffusion/configs/CSD.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- "./_base_/archs/hifi_svc.py",
-]
-
-speaker_mapping = {'csd': 0,}
-
-model = dict(
- type="HiFiSVC",
- speaker_encoder=dict(
- input_size=len(speaker_mapping),
- ),
-)
-
-preprocessing = dict(
- text_features_extractor=dict(
- type="ContentVec",
- ),
- pitch_extractor=dict(
- type="ParselMouthPitchExtractor",
- keep_zeros=False,
- f0_min=40.0,
- f0_max=1600.0,
- ),
- energy_extractor=dict(
- type="RMSEnergyExtractor",
- ),
- augmentations=[
- dict(
- type="RandomPitchShifting",
- key_shifts=[-5., 5.],
- probability=1.5,
- ),
- dict(
- type="RandomTimeStretching",
- factors=[0.8, 1.2],
- probability=0.75,
- )
- ],
-)
\ No newline at end of file
diff --git a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/text2img_app.py b/spaces/flatindo/generate2/diffusion_webui/diffusion_models/text2img_app.py
deleted file mode 100644
index 66369e776740fb8b15294d7fe03dea0ca2d2bc13..0000000000000000000000000000000000000000
--- a/spaces/flatindo/generate2/diffusion_webui/diffusion_models/text2img_app.py
+++ /dev/null
@@ -1,173 +0,0 @@
-import gradio as gr
-import torch
-from diffusers import StableDiffusionPipeline,DiffusionPipeline
-
-from diffusion_webui.utils.model_list import stable_model_list
-from diffusion_webui.utils.scheduler_list import (
- SCHEDULER_MAPPING,
- get_scheduler,
-)
-
-
-class StableDiffusionText2ImageGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(
- self,
- stable_model_path,
- scheduler,
- ):
- if self.pipe is None or self.pipe.model_name != stable_model_path or self.pipe.scheduler_name != scheduler:
- if stable_model_path == "stabilityai/stable-diffusion-xl-base-0.9":
- self.pipe = DiffusionPipeline.from_pretrained(
- stable_model_path, safety_checker=None, torch_dtype=torch.float16
- )
- else:
- self.pipe = StableDiffusionPipeline.from_pretrained(
- stable_model_path, safety_checker=None, torch_dtype=torch.float16
- )
-
- self.pipe = get_scheduler(pipe=self.pipe, scheduler=scheduler)
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
- self.pipe.model_name = stable_model_path
- self.pipe.scheduler_name = scheduler
-
- return self.pipe
-
- def generate_image(
- self,
- stable_model_path: str,
- prompt: str,
- negative_prompt: str,
- num_images_per_prompt: int,
- scheduler: str,
- guidance_scale: int,
- num_inference_step: int,
- height: int,
- width: int,
- seed_generator=0,
- ):
- pipe = self.load_model(
- stable_model_path=stable_model_path,
- scheduler=scheduler,
- )
- if seed_generator == 0:
- random_seed = torch.randint(0, 1000000, (1,))
- generator = torch.manual_seed(random_seed)
- else:
- generator = torch.manual_seed(seed_generator)
-
- images = pipe(
- prompt=prompt,
- height=height,
- width=width,
- negative_prompt=negative_prompt,
- num_images_per_prompt=num_images_per_prompt,
- num_inference_steps=num_inference_step,
- guidance_scale=guidance_scale,
- generator=generator,
- ).images
-
- return images
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- text2image_prompt = gr.Textbox(
- lines=1,
- placeholder="Prompt",
- show_label=False,
- )
-
- text2image_negative_prompt = gr.Textbox(
- lines=1,
- placeholder="Negative Prompt",
- show_label=False,
- )
- with gr.Row():
- with gr.Column():
- text2image_model_path = gr.Dropdown(
- choices=stable_model_list,
- value=stable_model_list[1],
- label="Text-Image Model Id",
- )
-
- text2image_guidance_scale = gr.Slider(
- minimum=0.1,
- maximum=15,
- step=0.1,
- value=7.5,
- label="Guidance Scale",
- )
-
- text2image_num_inference_step = gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=50,
- label="Num Inference Step",
- )
- text2image_num_images_per_prompt = gr.Slider(
- minimum=1,
- maximum=4,
- step=1,
- value=1,
- label="Number Of Images",
- )
- with gr.Row():
- with gr.Column():
- text2image_scheduler = gr.Dropdown(
- choices=list(SCHEDULER_MAPPING.keys()),
- value=list(SCHEDULER_MAPPING.keys())[0],
- label="Scheduler",
- )
-
- text2image_height = gr.Slider(
- minimum=128,
- maximum=1280,
- step=32,
- value=512,
- label="Image Height",
- )
-
- text2image_width = gr.Slider(
- minimum=128,
- maximum=1280,
- step=32,
- value=1024,
- label="Image Width",
- )
- text2image_seed_generator = gr.Slider(
- label="Seed(0 for random)",
- minimum=0,
- maximum=1000000,
- value=0,
- )
- text2image_predict = gr.Button(value="Generator")
-
- with gr.Column():
- output_image = gr.Gallery(
- label="Generated images",
- show_label=False,
- elem_id="gallery",
- ).style(grid=(1, 2), height=200)
-
- text2image_predict.click(
- fn=StableDiffusionText2ImageGenerator().generate_image,
- inputs=[
- text2image_model_path,
- text2image_prompt,
- text2image_negative_prompt,
- text2image_num_images_per_prompt,
- text2image_scheduler,
- text2image_guidance_scale,
- text2image_num_inference_step,
- text2image_height,
- text2image_width,
- text2image_seed_generator,
- ],
- outputs=output_image,
- )
diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/__init__.py b/spaces/fuckyoudeki/AutoGPT/autogpt/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/glyszt/vt/vtoonify/model/raft/core/__init__.py b/spaces/glyszt/vt/vtoonify/model/raft/core/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/googlyeyes/question_generation_swayam/README.md b/spaces/googlyeyes/question_generation_swayam/README.md
deleted file mode 100644
index d28f5d84762457f0ca2af71bc782bb058819cf5c..0000000000000000000000000000000000000000
--- a/spaces/googlyeyes/question_generation_swayam/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Question Generation Swayam
-emoji: 🌍
-colorFrom: gray
-colorTo: red
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: unknown
-python_version: 3.10.9
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gradio/HuBERT/fairseq/incremental_decoding_utils.py b/spaces/gradio/HuBERT/fairseq/incremental_decoding_utils.py
deleted file mode 100644
index b26e6cd01cd4cbdffa23d88b354eb4a55a94189b..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/incremental_decoding_utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import uuid
-from typing import Dict, Optional
-
-from torch import Tensor
-
-
-class FairseqIncrementalState(object):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.init_incremental_state()
-
- def init_incremental_state(self):
- self._incremental_state_id = str(uuid.uuid4())
-
- def _get_full_incremental_state_key(self, key: str) -> str:
- return "{}.{}".format(self._incremental_state_id, key)
-
- def get_incremental_state(
- self,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
- ) -> Optional[Dict[str, Optional[Tensor]]]:
- """Helper for getting incremental state for an nn.Module."""
- full_key = self._get_full_incremental_state_key(key)
- if incremental_state is None or full_key not in incremental_state:
- return None
- return incremental_state[full_key]
-
- def set_incremental_state(
- self,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
- key: str,
- value: Dict[str, Optional[Tensor]],
- ) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]:
- """Helper for setting incremental state for an nn.Module."""
- if incremental_state is not None:
- full_key = self._get_full_incremental_state_key(key)
- incremental_state[full_key] = value
- return incremental_state
-
-
-def with_incremental_state(cls):
- cls.__bases__ = (FairseqIncrementalState,) + tuple(
- b for b in cls.__bases__ if b != FairseqIncrementalState
- )
- return cls
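The `with_incremental_state` decorator above injects `FairseqIncrementalState` into a module's bases, giving every instance a UUID-prefixed namespace inside the shared `incremental_state` dict so that two modules of the same class never clobber each other's cache. A minimal usage sketch, assuming the file is importable as `incremental_decoding_utils`:

```python
import torch
from torch import nn

from incremental_decoding_utils import with_incremental_state  # import path is an assumption


@with_incremental_state
class CachingLayer(nn.Module):
    def forward(self, x, incremental_state=None):
        if incremental_state is not None:
            saved = self.get_incremental_state(incremental_state, "prev_key")
            prev = saved["prev_key"] if saved is not None else None
            keys = x if prev is None else torch.cat([prev, x], dim=1)
            # Stored under "<uuid>.prev_key", so a second CachingLayer sharing
            # the same dict keeps its own independent cache.
            self.set_incremental_state(incremental_state, "prev_key", {"prev_key": keys})
        return x


layer = CachingLayer()
state = {}
layer(torch.randn(2, 1, 8), incremental_state=state)  # cache holds 1 step
layer(torch.randn(2, 1, 8), incremental_state=state)  # cache grows to 2 steps
```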
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/viz/stylemix_widget.py b/spaces/gwang-kim/DATID-3D/eg3d/viz/stylemix_widget.py
deleted file mode 100644
index 0b84d6426b27bc890cfcf7e74a74ce0569d77847..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/viz/stylemix_widget.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-import imgui
-from gui_utils import imgui_utils
-
-#----------------------------------------------------------------------------
-
-class StyleMixingWidget:
- def __init__(self, viz):
- self.viz = viz
- self.seed_def = 1000
- self.seed = self.seed_def
- self.animate = False
- self.enables = []
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
- num_ws = viz.result.get('num_ws', 0)
- num_enables = viz.result.get('num_ws', 18)
- self.enables += [False] * max(num_enables - len(self.enables), 0)
-
- if show:
- imgui.text('Stylemix')
- imgui.same_line(viz.label_w)
- with imgui_utils.item_width(viz.font_size * 8), imgui_utils.grayed_out(num_ws == 0):
- _changed, self.seed = imgui.input_int('##seed', self.seed)
- imgui.same_line(viz.label_w + viz.font_size * 8 + viz.spacing)
- with imgui_utils.grayed_out(num_ws == 0):
- _clicked, self.animate = imgui.checkbox('Anim', self.animate)
-
- pos2 = imgui.get_content_region_max()[0] - 1 - viz.button_w
- pos1 = pos2 - imgui.get_text_line_height() - viz.spacing
- pos0 = viz.label_w + viz.font_size * 12
- imgui.push_style_var(imgui.STYLE_FRAME_PADDING, [0, 0])
- for idx in range(num_enables):
- imgui.same_line(round(pos0 + (pos1 - pos0) * (idx / (num_enables - 1))))
- if idx == 0:
- imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() + 3)
- with imgui_utils.grayed_out(num_ws == 0):
- _clicked, self.enables[idx] = imgui.checkbox(f'##{idx}', self.enables[idx])
- if imgui.is_item_hovered():
- imgui.set_tooltip(f'{idx}')
- imgui.pop_style_var(1)
-
- imgui.same_line(pos2)
- imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() - 3)
- with imgui_utils.grayed_out(num_ws == 0):
- if imgui_utils.button('Reset', width=-1, enabled=(self.seed != self.seed_def or self.animate or any(self.enables[:num_enables]))):
- self.seed = self.seed_def
- self.animate = False
- self.enables = [False] * num_enables
-
- if any(self.enables[:num_ws]):
- viz.args.stylemix_idx = [idx for idx, enable in enumerate(self.enables) if enable]
- viz.args.stylemix_seed = self.seed & ((1 << 32) - 1)
- if self.animate:
- self.seed += 1
-
-#----------------------------------------------------------------------------
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.cpp b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.cpp
deleted file mode 100644
index 4e253d1f3ffe84e54e667bf61a45dfe66264a73c..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/ops/filtered_lrelu.cpp
+++ /dev/null
@@ -1,300 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "filtered_lrelu.h"
-
-//------------------------------------------------------------------------
-
-static std::tuple<torch::Tensor, torch::Tensor, int> filtered_lrelu(
- torch::Tensor x, torch::Tensor fu, torch::Tensor fd, torch::Tensor b, torch::Tensor si,
- int up, int down, int px0, int px1, int py0, int py1, int sx, int sy, float gain, float slope, float clamp, bool flip_filters, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(fu.device() == x.device() && fd.device() == x.device() && b.device() == x.device(), "all input tensors must reside on the same device");
- TORCH_CHECK(fu.dtype() == torch::kFloat && fd.dtype() == torch::kFloat, "fu and fd must be float32");
- TORCH_CHECK(b.dtype() == x.dtype(), "x and b must have the same dtype");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat, "x and b must be float16 or float32");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK((fu.dim() == 1 || fu.dim() == 2) && (fd.dim() == 1 || fd.dim() == 2), "fu and fd must be rank 1 or 2");
- TORCH_CHECK(fu.size(0) <= INT_MAX && fu.size(-1) <= INT_MAX, "fu is too large");
- TORCH_CHECK(fd.size(0) <= INT_MAX && fd.size(-1) <= INT_MAX, "fd is too large");
- TORCH_CHECK(fu.numel() > 0, "fu is empty");
- TORCH_CHECK(fd.numel() > 0, "fd is empty");
- TORCH_CHECK(b.dim() == 1 && b.size(0) == x.size(1), "b must be a vector with the same number of channels as x");
- TORCH_CHECK(up >= 1 && down >= 1, "up and down must be at least 1");
-
- // Figure out how much shared memory is available on the device.
- int maxSharedBytes = 0;
- AT_CUDA_CHECK(cudaDeviceGetAttribute(&maxSharedBytes, cudaDevAttrMaxSharedMemoryPerBlockOptin, x.device().index()));
- int sharedKB = maxSharedBytes >> 10;
-
- // Populate enough launch parameters to check if a CUDA kernel exists.
- filtered_lrelu_kernel_params p;
- p.up = up;
- p.down = down;
- p.fuShape = make_int2((int)fu.size(-1), fu.dim() == 2 ? (int)fu.size(0) : 0); // shape [n, 0] indicates separable filter.
- p.fdShape = make_int2((int)fd.size(-1), fd.dim() == 2 ? (int)fd.size(0) : 0);
- filtered_lrelu_kernel_spec test_spec = choose_filtered_lrelu_kernel(p, sharedKB);
- if (!test_spec.exec)
- {
- // No kernel found - return empty tensors and indicate missing kernel with return code of -1.
- return std::make_tuple(torch::Tensor(), torch::Tensor(), -1);
- }
-
- // Input/output element size.
- int64_t sz = (x.dtype() == torch::kHalf) ? 2 : 4;
-
- // Input sizes.
- int64_t xw = (int)x.size(3);
- int64_t xh = (int)x.size(2);
- int64_t fut_w = (int)fu.size(-1) - 1;
- int64_t fut_h = (int)fu.size(0) - 1;
- int64_t fdt_w = (int)fd.size(-1) - 1;
- int64_t fdt_h = (int)fd.size(0) - 1;
-
- // Logical size of upsampled buffer.
- int64_t cw = xw * up + (px0 + px1) - fut_w;
- int64_t ch = xh * up + (py0 + py1) - fut_h;
- TORCH_CHECK(cw > fdt_w && ch > fdt_h, "upsampled buffer must be at least the size of downsampling filter");
- TORCH_CHECK(cw <= INT_MAX && ch <= INT_MAX, "upsampled buffer is too large");
-
- // Compute output size and allocate.
- int64_t yw = (cw - fdt_w + (down - 1)) / down;
- int64_t yh = (ch - fdt_h + (down - 1)) / down;
- TORCH_CHECK(yw > 0 && yh > 0, "output must be at least 1x1");
- TORCH_CHECK(yw <= INT_MAX && yh <= INT_MAX, "output is too large");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), yh, yw}, x.options(), x.suggest_memory_format());
-
- // Allocate sign tensor.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- int64_t sw_active = 0; // Active width of sign tensor.
- if (writeSigns)
- {
- sw_active = yw * down - (down - 1) + fdt_w; // Active width in elements.
- int64_t sh = yh * down - (down - 1) + fdt_h; // Height = active height.
- int64_t sw = (sw_active + 15) & ~15; // Width = active width in elements, rounded up to multiple of 16.
- TORCH_CHECK(sh <= INT_MAX && (sw >> 2) <= INT_MAX, "signs is too large");
- s = so = torch::empty({x.size(0), x.size(1), sh, sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
- else if (readSigns)
- sw_active = s.size(3) << 2;
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && s.size(3) <= INT_MAX, "signs is too large");
- }
-
- // Populate rest of CUDA kernel parameters.
- p.x = x.data_ptr();
- p.y = y.data_ptr();
- p.b = b.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr<unsigned char>() : 0;
- p.fu = fu.data_ptr<float>();
- p.fd = fd.data_ptr<float>();
- p.pad0 = make_int2(px0, py0);
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.flip = (flip_filters) ? 1 : 0;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.yShape = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3), (int)s.size(2)) : make_int2(0, 0); // Width is in bytes. Contiguous.
- p.sOfs = make_int2(sx, sy);
- p.swLimit = (sw_active + 3) >> 2; // Rounded up to bytes.
-
- // x, y, b strides are in bytes.
- p.xStride = make_longlong4(sz * x.stride(3), sz * x.stride(2), sz * x.stride(1), sz * x.stride(0));
- p.yStride = make_longlong4(sz * y.stride(3), sz * y.stride(2), sz * y.stride(1), sz * y.stride(0));
- p.bStride = sz * b.stride(0);
-
- // fu, fd strides are in elements.
- p.fuStride = make_longlong3(fu.stride(-1), fu.dim() == 2 ? fu.stride(0) : 0, 0);
- p.fdStride = make_longlong3(fd.stride(-1), fd.dim() == 2 ? fd.stride(0) : 0, 0);
-
- // Determine if indices don't fit in int32. Support negative strides although Torch currently never produces those.
- bool index64b = false;
- if (std::abs(p.bStride * x.size(1)) > INT_MAX) index64b = true;
- if (std::min(x.size(0) * p.xStride.w, 0ll) + std::min(x.size(1) * p.xStride.z, 0ll) + std::min(x.size(2) * p.xStride.y, 0ll) + std::min(x.size(3) * p.xStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(x.size(0) * p.xStride.w, 0ll) + std::max(x.size(1) * p.xStride.z, 0ll) + std::max(x.size(2) * p.xStride.y, 0ll) + std::max(x.size(3) * p.xStride.x, 0ll) > INT_MAX) index64b = true;
- if (std::min(y.size(0) * p.yStride.w, 0ll) + std::min(y.size(1) * p.yStride.z, 0ll) + std::min(y.size(2) * p.yStride.y, 0ll) + std::min(y.size(3) * p.yStride.x, 0ll) < -INT_MAX) index64b = true;
- if (std::max(y.size(0) * p.yStride.w, 0ll) + std::max(y.size(1) * p.yStride.z, 0ll) + std::max(y.size(2) * p.yStride.y, 0ll) + std::max(y.size(3) * p.yStride.x, 0ll) > INT_MAX) index64b = true;
- if (s.numel() > INT_MAX) index64b = true;
-
- // Choose CUDA kernel.
- filtered_lrelu_kernel_spec spec = { 0 };
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_cuda", [&]
- {
- if constexpr (sizeof(scalar_t) <= 4) // Exclude doubles. constexpr prevents template instantiation.
- {
- // Choose kernel based on index type, datatype and sign read/write modes.
- if (!index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, true, false>(p, sharedKB);
- else if (!index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, true>(p, sharedKB);
- else if (!index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int32_t, false, false>(p, sharedKB);
- else if ( index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, true, false>(p, sharedKB);
- else if ( index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, true>(p, sharedKB);
- else if ( index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel<scalar_t, int64_t, false, false>(p, sharedKB);
- }
- });
- TORCH_CHECK(spec.exec, "internal error - CUDA kernel not found") // This should not happen because we tested earlier that kernel exists.
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = spec.numWarps * 32;
- int gx = (p.yShape.x - 1) / spec.tileOut.x + 1;
- int gy = (p.yShape.y - 1) / spec.tileOut.y + 1;
- int gz = p.yShape.z * p.yShape.w;
-
- // Repeat multiple horizontal tiles in a CTA?
- if (spec.xrep)
- {
- p.tilesXrep = spec.xrep;
- p.tilesXdim = gx;
-
- gx = (gx + p.tilesXrep - 1) / p.tilesXrep;
- std::swap(gx, gy);
- }
- else
- {
- p.tilesXrep = 0;
- p.tilesXdim = 0;
- }
-
- // Launch filter setup kernel.
- AT_CUDA_CHECK(cudaLaunchKernel(spec.setup, 1, 1024, args, 0, at::cuda::getCurrentCUDAStream()));
-
- // Copy kernels to constant memory.
- if ( writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<true, false>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && readSigns) AT_CUDA_CHECK((copy_filters<false, true>(at::cuda::getCurrentCUDAStream())));
- else if (!writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters<false, false>(at::cuda::getCurrentCUDAStream())));
-
- // Set cache and shared memory configurations for main kernel.
- AT_CUDA_CHECK(cudaFuncSetCacheConfig(spec.exec, cudaFuncCachePreferShared));
- if (spec.dynamicSharedKB) // Need dynamically allocated shared memory?
- AT_CUDA_CHECK(cudaFuncSetAttribute(spec.exec, cudaFuncAttributeMaxDynamicSharedMemorySize, spec.dynamicSharedKB << 10));
- AT_CUDA_CHECK(cudaFuncSetSharedMemConfig(spec.exec, cudaSharedMemBankSizeFourByte));
-
- // Launch main kernel.
- const int maxSubGz = 65535; // CUDA maximum for block z dimension.
- for (int zofs=0; zofs < gz; zofs += maxSubGz) // Do multiple launches if gz is too big.
- {
- p.blockZofs = zofs;
- int subGz = std::min(maxSubGz, gz - zofs);
- AT_CUDA_CHECK(cudaLaunchKernel(spec.exec, dim3(gx, gy, subGz), bx, args, spec.dynamicSharedKB << 10, at::cuda::getCurrentCUDAStream()));
- }
-
- // Done.
- return std::make_tuple(y, so, 0);
-}
-
-//------------------------------------------------------------------------
-
-static torch::Tensor filtered_lrelu_act(torch::Tensor x, torch::Tensor si, int sx, int sy, float gain, float slope, float clamp, bool writeSigns)
-{
- // Set CUDA device.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
-
- // Validate arguments.
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large");
- TORCH_CHECK(x.numel() > 0, "x is empty");
- TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat || x.dtype() == torch::kDouble, "x must be float16, float32 or float64");
-
- // Output signs if we don't have sign input.
- torch::Tensor so;
- torch::Tensor s = si;
- bool readSigns = !!s.numel();
- if (writeSigns)
- {
- int64_t sw = x.size(3);
- sw = (sw + 15) & ~15; // Round to a multiple of 16 for coalescing.
- s = so = torch::empty({x.size(0), x.size(1), x.size(2), sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous);
- }
-
- // Validate sign tensor if in use.
- if (readSigns || writeSigns)
- {
- TORCH_CHECK(s.is_contiguous(), "signs must be contiguous");
- TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8");
- TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x");
- TORCH_CHECK(s.dim() == 4, "signs must be rank 4");
- TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x");
- TORCH_CHECK(s.size(2) <= INT_MAX && (s.size(3) << 2) <= INT_MAX, "signs tensor is too large");
- }
-
- // Initialize CUDA kernel parameters.
- filtered_lrelu_act_kernel_params p;
- p.x = x.data_ptr();
- p.s = (readSigns || writeSigns) ? s.data_ptr<unsigned char>() : 0;
- p.gain = gain;
- p.slope = slope;
- p.clamp = clamp;
- p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.xStride = make_longlong4(x.stride(3), x.stride(2), x.stride(1), x.stride(0));
- p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3) << 2, (int)s.size(2)) : make_int2(0, 0); // Width is in elements. Contiguous.
- p.sOfs = make_int2(sx, sy);
-
- // Choose CUDA kernel.
- void* func = 0;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_act_cuda", [&]
- {
- if (writeSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, true, false>();
- else if (readSigns)
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, true>();
- else
- func = choose_filtered_lrelu_act_kernel<scalar_t, false, false>();
- });
- TORCH_CHECK(func, "internal error - CUDA kernel not found");
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- int bx = 128; // 4 warps per block.
-
- // Logical size of launch = writeSigns ? p.s : p.x
- uint32_t gx = writeSigns ? p.sShape.x : p.xShape.x;
- uint32_t gy = writeSigns ? p.sShape.y : p.xShape.y;
- uint32_t gz = p.xShape.z * p.xShape.w; // Same as in p.sShape if signs are in use.
- gx = (gx - 1) / bx + 1;
-
- // Make sure grid y and z dimensions are within CUDA launch limits. Kernel loops internally to do the rest.
- const uint32_t gmax = 65535;
- gy = std::min(gy, gmax);
- gz = std::min(gz, gmax);
-
- // Launch.
- AT_CUDA_CHECK(cudaLaunchKernel(func, dim3(gx, gy, gz), bx, args, 0, at::cuda::getCurrentCUDAStream()));
- return so;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("filtered_lrelu", &filtered_lrelu); // The whole thing.
- m.def("filtered_lrelu_act_", &filtered_lrelu_act); // Activation and sign tensor handling only. Modifies data tensor in-place.
-}
-
-//------------------------------------------------------------------------
\ No newline at end of file
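
The deleted `filtered_lrelu.cpp` above is the host side of a PyTorch CUDA extension: it validates the tensors, selects a specialised kernel, and exposes `filtered_lrelu` / `filtered_lrelu_act_` through pybind11. Below is a minimal, hypothetical sketch of building and calling such an extension with `torch.utils.cpp_extension.load`; the source list, filter sizes, and argument values are illustrative assumptions, not the upstream Python wrapper.

```python
# Hypothetical JIT build of the extension above; file names and values are assumptions.
import torch
from torch.utils.cpp_extension import load

plugin = load(
    name="filtered_lrelu_plugin",
    sources=["filtered_lrelu.cpp", "filtered_lrelu.cu"],  # assumed to sit in the working directory
    verbose=True,
)

x = torch.randn(1, 3, 64, 64, device="cuda")            # NCHW input, float32
fu = torch.full((12,), 1 / 12, device="cuda")           # separable upsampling filter (float32)
fd = torch.full((12,), 1 / 12, device="cuda")           # separable downsampling filter (float32)
b = torch.zeros(3, device="cuda")                       # per-channel bias, same dtype as x
si = torch.empty(0, dtype=torch.uint8, device="cuda")   # empty => signs are written, not read

y, signs, rc = plugin.filtered_lrelu(
    x, fu, fd, b, si,
    2, 2,             # up, down
    5, 5, 5, 5,       # px0, px1, py0, py1
    0, 0,             # sx, sy
    1.0, 0.2, 256.0,  # gain, slope, clamp
    False,            # flip_filters
    True,             # writeSigns
)
assert rc == 0, "rc == -1 means no suitable CUDA kernel was found"
```
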
diff --git a/spaces/haakohu/deep_privacy2_face/dp2/data/build.py b/spaces/haakohu/deep_privacy2_face/dp2/data/build.py
deleted file mode 100644
index ceab946b4da20467f879f3c6af0e9eb985465ac4..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2_face/dp2/data/build.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import torch
-import tops
-from .utils import collate_fn
-
-
-def get_dataloader(
- dataset, gpu_transform: torch.nn.Module,
- num_workers,
- batch_size,
- infinite: bool,
- drop_last: bool,
- prefetch_factor: int,
- shuffle,
- channels_last=False
- ):
- sampler = None
- dl_kwargs = dict(
- pin_memory=True,
- )
- if infinite:
- sampler = tops.InfiniteSampler(
- dataset, rank=tops.rank(),
- num_replicas=tops.world_size(),
- shuffle=shuffle
- )
- elif tops.world_size() > 1:
- sampler = torch.utils.data.DistributedSampler(
- dataset, shuffle=shuffle, num_replicas=tops.world_size(), rank=tops.rank())
- dl_kwargs["drop_last"] = drop_last
- else:
- dl_kwargs["shuffle"] = shuffle
- dl_kwargs["drop_last"] = drop_last
- dataloader = torch.utils.data.DataLoader(
- dataset, sampler=sampler, collate_fn=collate_fn,
- batch_size=batch_size,
- num_workers=num_workers, prefetch_factor=prefetch_factor,
- **dl_kwargs
- )
- dataloader = tops.DataPrefetcher(dataloader, gpu_transform, channels_last=channels_last)
- return dataloader
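
`get_dataloader` wires a plain `torch.utils.data.Dataset` into a (possibly distributed) `DataLoader` and then hands batches to `tops.DataPrefetcher`, which moves them to the GPU and applies `gpu_transform`. A minimal sketch of a call site, assuming the module above is importable as `dp2.data.build`, `tops` is installed, and using a made-up toy dataset:

```python
# Illustrative only: the dataset and batch settings are invented for the example.
import torch
from dp2.data.build import get_dataloader  # assumes the deleted module is on the path

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 128

    def __getitem__(self, idx):
        return {"img": torch.rand(3, 64, 64), "idx": idx}

loader = get_dataloader(
    dataset=ToyDataset(),
    gpu_transform=torch.nn.Identity(),  # applied on-GPU by tops.DataPrefetcher
    num_workers=2,
    batch_size=16,
    infinite=True,    # wraps the dataset with tops.InfiniteSampler
    drop_last=True,
    prefetch_factor=2,
    shuffle=True,
)
batch = next(iter(loader))  # a dict of tensors after the prefetcher
```
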
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/constants.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/constants.py
deleted file mode 100644
index a670bb3fab442baeb9af53b91c312e6982af57ee..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/constants.py
+++ /dev/null
@@ -1,2 +0,0 @@
-OPENAI_DATASET_MEAN = (0.48145466, 0.4578275, 0.40821073)
-OPENAI_DATASET_STD = (0.26862954, 0.26130258, 0.27577711)
diff --git a/spaces/hanstyle/tts/face_detection/detection/sfd/net_s3fd.py b/spaces/hanstyle/tts/face_detection/detection/sfd/net_s3fd.py
deleted file mode 100644
index fc64313c277ab594d0257585c70f147606693452..0000000000000000000000000000000000000000
--- a/spaces/hanstyle/tts/face_detection/detection/sfd/net_s3fd.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class L2Norm(nn.Module):
- def __init__(self, n_channels, scale=1.0):
- super(L2Norm, self).__init__()
- self.n_channels = n_channels
- self.scale = scale
- self.eps = 1e-10
- self.weight = nn.Parameter(torch.Tensor(self.n_channels))
- self.weight.data *= 0.0
- self.weight.data += self.scale
-
- def forward(self, x):
- norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + self.eps
- x = x / norm * self.weight.view(1, -1, 1, 1)
- return x
-
-
-class s3fd(nn.Module):
- def __init__(self):
- super(s3fd, self).__init__()
- self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
- self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
-
- self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
- self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
-
- self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)
- self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)
- self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)
-
- self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)
- self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
- self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
-
- self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
- self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
- self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
-
- self.fc6 = nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=3)
- self.fc7 = nn.Conv2d(1024, 1024, kernel_size=1, stride=1, padding=0)
-
- self.conv6_1 = nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0)
- self.conv6_2 = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1)
-
- self.conv7_1 = nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0)
- self.conv7_2 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)
-
- self.conv3_3_norm = L2Norm(256, scale=10)
- self.conv4_3_norm = L2Norm(512, scale=8)
- self.conv5_3_norm = L2Norm(512, scale=5)
-
- self.conv3_3_norm_mbox_conf = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)
- self.conv3_3_norm_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)
- self.conv4_3_norm_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)
- self.conv4_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)
- self.conv5_3_norm_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)
- self.conv5_3_norm_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)
-
- self.fc7_mbox_conf = nn.Conv2d(1024, 2, kernel_size=3, stride=1, padding=1)
- self.fc7_mbox_loc = nn.Conv2d(1024, 4, kernel_size=3, stride=1, padding=1)
- self.conv6_2_mbox_conf = nn.Conv2d(512, 2, kernel_size=3, stride=1, padding=1)
- self.conv6_2_mbox_loc = nn.Conv2d(512, 4, kernel_size=3, stride=1, padding=1)
- self.conv7_2_mbox_conf = nn.Conv2d(256, 2, kernel_size=3, stride=1, padding=1)
- self.conv7_2_mbox_loc = nn.Conv2d(256, 4, kernel_size=3, stride=1, padding=1)
-
- def forward(self, x):
- h = F.relu(self.conv1_1(x))
- h = F.relu(self.conv1_2(h))
- h = F.max_pool2d(h, 2, 2)
-
- h = F.relu(self.conv2_1(h))
- h = F.relu(self.conv2_2(h))
- h = F.max_pool2d(h, 2, 2)
-
- h = F.relu(self.conv3_1(h))
- h = F.relu(self.conv3_2(h))
- h = F.relu(self.conv3_3(h))
- f3_3 = h
- h = F.max_pool2d(h, 2, 2)
-
- h = F.relu(self.conv4_1(h))
- h = F.relu(self.conv4_2(h))
- h = F.relu(self.conv4_3(h))
- f4_3 = h
- h = F.max_pool2d(h, 2, 2)
-
- h = F.relu(self.conv5_1(h))
- h = F.relu(self.conv5_2(h))
- h = F.relu(self.conv5_3(h))
- f5_3 = h
- h = F.max_pool2d(h, 2, 2)
-
- h = F.relu(self.fc6(h))
- h = F.relu(self.fc7(h))
- ffc7 = h
- h = F.relu(self.conv6_1(h))
- h = F.relu(self.conv6_2(h))
- f6_2 = h
- h = F.relu(self.conv7_1(h))
- h = F.relu(self.conv7_2(h))
- f7_2 = h
-
- f3_3 = self.conv3_3_norm(f3_3)
- f4_3 = self.conv4_3_norm(f4_3)
- f5_3 = self.conv5_3_norm(f5_3)
-
- cls1 = self.conv3_3_norm_mbox_conf(f3_3)
- reg1 = self.conv3_3_norm_mbox_loc(f3_3)
- cls2 = self.conv4_3_norm_mbox_conf(f4_3)
- reg2 = self.conv4_3_norm_mbox_loc(f4_3)
- cls3 = self.conv5_3_norm_mbox_conf(f5_3)
- reg3 = self.conv5_3_norm_mbox_loc(f5_3)
- cls4 = self.fc7_mbox_conf(ffc7)
- reg4 = self.fc7_mbox_loc(ffc7)
- cls5 = self.conv6_2_mbox_conf(f6_2)
- reg5 = self.conv6_2_mbox_loc(f6_2)
- cls6 = self.conv7_2_mbox_conf(f7_2)
- reg6 = self.conv7_2_mbox_loc(f7_2)
-
- # max-out background label
- chunk = torch.chunk(cls1, 4, 1)
- bmax = torch.max(torch.max(chunk[0], chunk[1]), chunk[2])
- cls1 = torch.cat([bmax, chunk[3]], dim=1)
-
- return [cls1, reg1, cls2, reg2, cls3, reg3, cls4, reg4, cls5, reg5, cls6, reg6]
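
`s3fd` is a VGG-style, fully convolutional detector that returns six (confidence, regression) pairs, one per detection scale, with the max-out trick applied to the first confidence map. A quick, illustrative forward pass with random data (the input size is arbitrary because the network is fully convolutional; real usage loads pretrained S3FD weights):

```python
# Illustrative forward pass with random data.
import torch

net = s3fd().eval()
with torch.no_grad():
    outputs = net(torch.rand(1, 3, 256, 256))

# outputs = [cls1, reg1, cls2, reg2, ..., cls6, reg6]
for cls, reg in zip(outputs[0::2], outputs[1::2]):
    print(cls.shape, reg.shape)  # 2-channel scores and 4-channel box offsets per scale
```
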
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/__init__.py
deleted file mode 100644
index e8f72e0f45d6d683771f0d815dfd0e3d0db52b9d..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from . import transforms # isort:skip
-
-from .build import (
- build_detection_test_loader,
- build_detection_train_loader,
- get_detection_dataset_dicts,
- load_proposals_into_dataset,
- print_instances_class_histogram,
-)
-from .catalog import DatasetCatalog, MetadataCatalog
-from .common import DatasetFromList, MapDataset
-from .dataset_mapper import DatasetMapper
-
-# ensure the builtin data are registered
-from . import datasets, samplers # isort:skip
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/serialize.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/serialize.py
deleted file mode 100644
index 734a62c2c4ecfd520eb9e8b941857b6f7e17d4c8..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/serialize.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import cloudpickle
-
-
-class PicklableWrapper(object):
- """
- Wrap an object to make it more picklable, note that it uses
- heavy weight serialization libraries that are slower than pickle.
- It's best to use it only on closures (which are usually not picklable).
-
- This is a simplified version of
- https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py
- """
-
- def __init__(self, obj):
- self._obj = obj
-
- def __reduce__(self):
- s = cloudpickle.dumps(self._obj)
- return cloudpickle.loads, (s,)
-
- def __call__(self, *args, **kwargs):
- return self._obj(*args, **kwargs)
-
- def __getattr__(self, attr):
- # Ensure that the wrapped object can be used seamlessly as the previous object.
- if attr not in ["_obj"]:
- return getattr(self._obj, attr)
- return getattr(self, attr)  # only reached for attr == "_obj"
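
`PicklableWrapper` makes closures transportable by routing `__reduce__` through cloudpickle, so standard `pickle` (as used by multiprocessing dataloader workers) can carry them. A small illustrative round trip:

```python
# Illustrative only: wrap a closure that plain pickle would reject.
import pickle

scale = 2.5
fn = PicklableWrapper(lambda x: x * scale)

restored = pickle.loads(pickle.dumps(fn))  # __reduce__ defers to cloudpickle under the hood
print(restored(4))                         # 10.0 -- the original closure is recovered
```
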
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/notes/compatibility.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/notes/compatibility.md
deleted file mode 100644
index f7b66c2e384b162864fb96a2fed44ba3084b8226..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/notes/compatibility.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Compatibility with Other Libraries
-
-## Compatibility with Detectron (and maskrcnn-benchmark)
-
-Detectron2 addresses some legacy issues left in Detectron. As a result, their models
-are not compatible:
-running inference with the same model weights will produce different results in the two code bases.
-
-The major differences regarding inference are:
-
-- The height and width of a box with corners (x1, y1) and (x2, y2) is now computed more naturally as
- width = x2 - x1 and height = y2 - y1;
- In Detectron, a "+ 1" was added both height and width.
-
- Note that the relevant ops in Caffe2 have [adopted this change of convention](https://github.com/pytorch/pytorch/pull/20550)
- with an extra option.
- So it is still possible to run inference with a Detectron2-trained model in Caffe2.
-
- The change in height/width calculations most notably changes:
- - encoding/decoding in bounding box regression.
- - non-maximum suppression. The effect here is very negligible, though.
-
-- RPN now uses simpler anchors with fewer quantization artifacts.
-
- In Detectron, the anchors were quantized and
- [do not have accurate areas](https://github.com/facebookresearch/Detectron/issues/227).
- In Detectron2, the anchors are center-aligned to feature grid points and not quantized.
-
-- Classification layers have a different ordering of class labels.
-
- This involves any trainable parameter with shape (..., num_categories + 1, ...).
- In Detectron2, integer labels [0, K-1] correspond to the K = num_categories object categories
- and the label "K" corresponds to the special "background" category.
- In Detectron, label "0" means background, and labels [1, K] correspond to the K categories.
-
-- ROIAlign is implemented differently. The new implementation is [available in Caffe2](https://github.com/pytorch/pytorch/pull/23706).
-
- 1. All the ROIs are shifted by half a pixel compared to Detectron in order to create better image-feature-map alignment.
- See `layers/roi_align.py` for details.
- To enable the old behavior, use `ROIAlign(aligned=False)`, or `POOLER_TYPE=ROIAlign` instead of
- `ROIAlignV2` (the default).
-
- 1. The ROIs are not required to have a minimum size of 1.
- This will lead to tiny differences in the output, but should be negligible.
-
-- Mask inference function is different.
-
- In Detectron2, the "paste_mask" function is different and should be more accurate than in Detectron. This change
- can improve mask AP on COCO by ~0.5% absolute.
-
-There are some other differences in training as well, but they won't affect
-model-level compatibility. The major ones are:
-
-- We fixed a [bug](https://github.com/facebookresearch/Detectron/issues/459) in
- Detectron, by making `RPN.POST_NMS_TOPK_TRAIN` per-image, rather than per-batch.
- The fix may lead to a small accuracy drop for a few models (e.g. keypoint
- detection) and will require some parameter tuning to match the Detectron results.
-- For simplicity, we change the default loss in bounding box regression to L1 loss, instead of smooth L1 loss.
- We have observed that this tends to slightly decrease box AP50 while improving box AP for higher
- overlap thresholds (and leading to a slight overall improvement in box AP).
-- We interpret the coordinates in COCO bounding box and segmentation annotations
- as coordinates in range `[0, width]` or `[0, height]`. The coordinates in
- COCO keypoint annotations are interpreted as pixel indices in range `[0, width - 1]` or `[0, height - 1]`.
- Note that this affects how flip augmentation is implemented.
-
-
-We will later share more details and rationale behind the above mentioned issues
-about pixels, coordinates, and "+1"s.
-
-
-## Compatibility with Caffe2
-
-As mentioned above, despite the incompatibilities with Detectron, the relevant
-ops have been implemented in Caffe2.
-Therefore, models trained with detectron2 can be converted to Caffe2.
-See [Deployment](../tutorials/deployment.md) for the tutorial.
-
-## Compatibility with TensorFlow
-
-Most ops are available in TensorFlow, although some tiny differences in
-the implementation of resize / ROIAlign / padding need to be addressed.
-A working conversion script is provided by [tensorpack FasterRCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN/convert_d2)
-to run a standard detectron2 model in TensorFlow.
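
The box-size convention change described in this compatibility note is easy to state in code. The snippet below is a hedged illustration with made-up coordinates, plus the `aligned=False` switch mentioned for `ROIAlign` (assuming detectron2 is installed):

```python
# Box size conventions (coordinate values are illustrative).
x1, y1, x2, y2 = 10.0, 20.0, 14.0, 26.0

width_d2, height_d2 = x2 - x1, y2 - y1          # Detectron2: 4.0 x 6.0
width_d1, height_d1 = x2 - x1 + 1, y2 - y1 + 1  # legacy Detectron: 5.0 x 7.0

# Legacy ROIAlign behaviour, if detectron2 is available:
from detectron2.layers import ROIAlign
legacy_pool = ROIAlign((7, 7), spatial_scale=1 / 16, sampling_ratio=0, aligned=False)
```
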
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_boxes.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_boxes.py
deleted file mode 100644
index 4d33c3bf9b7471c7e4382bc9e66c26e1fb60e29f..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_boxes.py
+++ /dev/null
@@ -1,182 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import json
-import math
-import numpy as np
-import unittest
-import torch
-
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-
-
-class TestBoxMode(unittest.TestCase):
- def _convert_xy_to_wh(self, x):
- return BoxMode.convert(x, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
-
- def _convert_xywha_to_xyxy(self, x):
- return BoxMode.convert(x, BoxMode.XYWHA_ABS, BoxMode.XYXY_ABS)
-
- def _convert_xywh_to_xywha(self, x):
- return BoxMode.convert(x, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS)
-
- def test_box_convert_list(self):
- for tp in [list, tuple]:
- box = tp([5.0, 5.0, 10.0, 10.0])
- output = self._convert_xy_to_wh(box)
- self.assertIsInstance(output, tp)
- self.assertIsInstance(output[0], float)
- self.assertEqual(output, tp([5.0, 5.0, 5.0, 5.0]))
-
- with self.assertRaises(Exception):
- self._convert_xy_to_wh([box])
-
- def test_box_convert_array(self):
- box = np.asarray([[5, 5, 10, 10], [1, 1, 2, 3]])
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- def test_box_convert_cpu_tensor(self):
- box = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]])
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- output = output.numpy()
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
- def test_box_convert_cuda_tensor(self):
- box = torch.tensor([[5, 5, 10, 10], [1, 1, 2, 3]]).cuda()
- output = self._convert_xy_to_wh(box)
- self.assertEqual(output.dtype, box.dtype)
- self.assertEqual(output.shape, box.shape)
- self.assertEqual(output.device, box.device)
- output = output.cpu().numpy()
- self.assertTrue((output[0] == [5, 5, 5, 5]).all())
- self.assertTrue((output[1] == [1, 1, 1, 2]).all())
-
- def test_box_convert_xywha_to_xyxy_list(self):
- for tp in [list, tuple]:
- box = tp([50, 50, 30, 20, 0])
- output = self._convert_xywha_to_xyxy(box)
- self.assertIsInstance(output, tp)
- self.assertEqual(output, tp([35, 40, 65, 60]))
-
- with self.assertRaises(Exception):
- self._convert_xywha_to_xyxy([box])
-
- def test_box_convert_xywha_to_xyxy_array(self):
- for dtype in [np.float64, np.float32]:
- box = np.asarray(
- [
- [50, 50, 30, 20, 0],
- [50, 50, 30, 20, 90],
- [1, 1, math.sqrt(2), math.sqrt(2), -45],
- ],
- dtype=dtype,
- )
- output = self._convert_xywha_to_xyxy(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = np.asarray([[35, 40, 65, 60], [40, 35, 60, 65], [0, 0, 2, 2]], dtype=dtype)
- self.assertTrue(np.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywha_to_xyxy_tensor(self):
- for dtype in [torch.float32, torch.float64]:
- box = torch.tensor(
- [
- [50, 50, 30, 20, 0],
- [50, 50, 30, 20, 90],
- [1, 1, math.sqrt(2), math.sqrt(2), -45],
- ],
- dtype=dtype,
- )
- output = self._convert_xywha_to_xyxy(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = torch.tensor([[35, 40, 65, 60], [40, 35, 60, 65], [0, 0, 2, 2]], dtype=dtype)
-
- self.assertTrue(torch.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywh_to_xywha_list(self):
- for tp in [list, tuple]:
- box = tp([50, 50, 30, 20])
- output = self._convert_xywh_to_xywha(box)
- self.assertIsInstance(output, tp)
- self.assertEqual(output, tp([65, 60, 30, 20, 0]))
-
- with self.assertRaises(Exception):
- self._convert_xywh_to_xywha([box])
-
- def test_box_convert_xywh_to_xywha_array(self):
- for dtype in [np.float64, np.float32]:
- box = np.asarray([[30, 40, 70, 60], [30, 40, 60, 70], [-1, -1, 2, 2]], dtype=dtype)
- output = self._convert_xywh_to_xywha(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = np.asarray(
- [[65, 70, 70, 60, 0], [60, 75, 60, 70, 0], [0, 0, 2, 2, 0]], dtype=dtype
- )
- self.assertTrue(np.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_box_convert_xywh_to_xywha_tensor(self):
- for dtype in [torch.float32, torch.float64]:
- box = torch.tensor([[30, 40, 70, 60], [30, 40, 60, 70], [-1, -1, 2, 2]], dtype=dtype)
- output = self._convert_xywh_to_xywha(box)
- self.assertEqual(output.dtype, box.dtype)
- expected = torch.tensor(
- [[65, 70, 70, 60, 0], [60, 75, 60, 70, 0], [0, 0, 2, 2, 0]], dtype=dtype
- )
-
- self.assertTrue(torch.allclose(output, expected, atol=1e-6), "output={}".format(output))
-
- def test_json_serializable(self):
- payload = {"box_mode": BoxMode.XYWH_REL}
- try:
- json.dumps(payload)
- except Exception:
- self.fail("JSON serialization failed")
-
- def test_json_deserializable(self):
- payload = '{"box_mode": 2}'
- obj = json.loads(payload)
- try:
- obj["box_mode"] = BoxMode(obj["box_mode"])
- except Exception:
- self.fail("JSON deserialization failed")
-
-
-class TestBoxIOU(unittest.TestCase):
- def test_pairwise_iou(self):
- boxes1 = torch.tensor([[0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]])
-
- boxes2 = torch.tensor(
- [
- [0.0, 0.0, 1.0, 1.0],
- [0.0, 0.0, 0.5, 1.0],
- [0.0, 0.0, 1.0, 0.5],
- [0.0, 0.0, 0.5, 0.5],
- [0.5, 0.5, 1.0, 1.0],
- [0.5, 0.5, 1.5, 1.5],
- ]
- )
-
- expected_ious = torch.tensor(
- [
- [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)],
- [1.0, 0.5, 0.5, 0.25, 0.25, 0.25 / (2 - 0.25)],
- ]
- )
-
- ious = pairwise_iou(Boxes(boxes1), Boxes(boxes2))
-
- self.assertTrue(torch.allclose(ious, expected_ious))
-
-
-class TestBoxes(unittest.TestCase):
- def test_empty_cat(self):
- x = Boxes.cat([])
- self.assertEqual(x.tensor.shape, (0, 4))
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/hf-accelerate/accelerate_examples/src/markup.py b/spaces/hf-accelerate/accelerate_examples/src/markup.py
deleted file mode 100644
index 920738495302a046ac225314560df7caa282c687..0000000000000000000000000000000000000000
--- a/spaces/hf-accelerate/accelerate_examples/src/markup.py
+++ /dev/null
@@ -1,68 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from template import get_filename
-
-
-_remove_color = "rgb(103,6,12)"
-_addition_color = "rgb(6,103,12)"
-
-
-def mark_text(text, add=True):
- """Marks text with a highlight color for addition or removal.
-
- Args:
- text (`str`):
- Some code to be marked.
- add (`bool`, *optional*, defaults to True):
- Whether to mark the text as an addition or a removal.
-
- Returns:
- `str`: The marked text as an HTML `mark` element.
- """
- if add:
- color = _addition_color
- else:
- color = _remove_color
- return f'<mark style="background-color:{color}">{text}</mark>'
-
-
-def highlight(code: str):
- """Takes in code and returns the respective highlighted code sample.
-
- Args:
- code (`str`):
- Code from a file.
- """
- lines = code.split("\n")
- for i, line in enumerate(lines):
- if line.startswith("-"):
- lines[i] = "- " + line[1:]
- lines[i] = mark_text(lines[i], False)
- elif line.startswith("+"):
- lines[i] = "+ " + line[1:]
- lines[i] = mark_text(lines[i], True)
- else:
- lines[i] = " " + line
- return "\n".join(lines).rstrip()
-
-
-def get_text(option, tab):
- """
- Reads in an option and returns the code, explanation, and documentation links
- """
- filename = option.lower().replace(" ", "_")
- with open(get_filename(tab, filename)) as f:
- output = f.read()
- return output.split("##\n")[1:]
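
`highlight` simply prefixes each line and wraps `-`/`+` lines in the coloured markup produced by `mark_text`. A short, illustrative call:

```python
# Illustrative input: a tiny diff-style snippet.
snippet = "import torch\n-from torch.utils.data import DataLoader\n+from accelerate import Accelerator"

html = highlight(snippet)
# '-' lines come back wrapped in the removal colour, '+' lines in the addition colour,
# and unchanged lines are indented by a single space.
print(html)
```
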
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_mean_dice_to_json.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_mean_dice_to_json.py
deleted file mode 100644
index b4f428b0567dcf1a99c6cfca90682f1c465208d8..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_mean_dice_to_json.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import numpy as np
-from batchgenerators.utilities.file_and_folder_operations import subfiles
-from collections import OrderedDict
-
-
-def foreground_mean(filename):
- with open(filename, 'r') as f:
- res = json.load(f)
- class_ids = np.array([int(i) for i in res['results']['mean'].keys() if (i != 'mean')])
- class_ids = class_ids[class_ids != 0]
- class_ids = class_ids[class_ids != -1]
- class_ids = class_ids[class_ids != 99]
-
- tmp = res['results']['mean'].get('99')
- if tmp is not None:
- _ = res['results']['mean'].pop('99')
-
- metrics = res['results']['mean']['1'].keys()
- res['results']['mean']["mean"] = OrderedDict()
- for m in metrics:
- foreground_values = [res['results']['mean'][str(i)][m] for i in class_ids]
- res['results']['mean']["mean"][m] = np.nanmean(foreground_values)
- with open(filename, 'w') as f:
- json.dump(res, f, indent=4, sort_keys=True)
-
-
-def run_in_folder(folder):
- json_files = subfiles(folder, True, None, ".json", True)
- json_files = [i for i in json_files if not i.split("/")[-1].startswith(".") and not i.endswith("_globalMean.json")] # stupid mac
- for j in json_files:
- foreground_mean(j)
-
-
-if __name__ == "__main__":
- folder = "/media/fabian/Results/nnUNetOutput_final/summary_jsons"
- run_in_folder(folder)
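
`foreground_mean` rewrites a nnU-Net `summary.json` in place, averaging every metric over the foreground classes (ids 0, -1 and 99 are excluded). The dict below is a made-up illustration of the structure it expects and of the value it would add:

```python
# Made-up summary structure; metric values are invented for illustration.
example = {
    "results": {
        "mean": {
            "0": {"Dice": 0.99},  # background, excluded from the foreground mean
            "1": {"Dice": 0.81},
            "2": {"Dice": 0.74},
        }
    }
}
# After foreground_mean(<file containing example>), the file would additionally contain
# results['mean']['mean']['Dice'] == nanmean([0.81, 0.74]) == 0.775
```
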
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_front_1.sh b/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_front_1.sh
deleted file mode 100644
index de33008e221e40aab4e8c71bbc49578a002268b8..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_front_1.sh
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash -l
-#SBATCH --nodes=1 --gres=gpu:1 --time=24:00:00
-#SBATCH --job-name=Task501_glacier_front_1
-
-export data_raw="/home/woody/iwi5/iwi5039h/data_raw"
-export nnUNet_raw_data_base="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_raw_data_base/"
-export nnUNet_preprocessed="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_preprocessed/"
-export RESULTS_FOLDER="/home/woody/iwi5/iwi5039h/nnUNet_data/RESULTS_FOLDER"
-
-cd nnunet_glacer
-pwd
-conda activate nnunet
-
-python3 nnunet/run/run_training.py 2d nnUNetTrainerV2 501 1 --disable_postprocessing_on_folds --disable_deepsupervision
-python3 nnunet/inference/predict_simple.py -i $nnUNet_raw_data_base/nnUNet_raw_data/Task501_Glacier_front/imagesTs -o $RESULTS_FOLDER/test_predictions/Task501_Glacier_front/fold_1 -t 501 -m 2d -f 1 -p nnUNetPlansv2.1 -tr nnUNetTrainerV2
-python3 nnunet/dataset_conversion/Task501_Glacier_reverse.py -i $RESULTS_FOLDER/test_predictions/Task501_Glacier_front/fold_1
-python3 ./evaluate_nnUNet.py --predictions $RESULTS_FOLDER/test_predictions/Task501_Glacier_front/fold_1/pngs --labels_fronts $data_raw/fronts/test --labels_zones $data_raw/zones/test --sar_images $data_raw/sar_images/test
-
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_3.sh b/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_3.sh
deleted file mode 100644
index 874b33214b0cf0568e74420f96be55c223ea1473..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_mtl_early_boundary_3.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/bin/bash -l
-#SBATCH --nodes=1 --gres=gpu:1 --time=24:00:00
-#SBATCH --job-name=Task505_glacier_mtl_early_boundary_3
-
-export data_raw="/home/woody/iwi5/iwi5039h/data_raw"
-export nnUNet_raw_data_base="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_raw_data_base/"
-export nnUNet_preprocessed="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_preprocessed/"
-export RESULTS_FOLDER="/home/woody/iwi5/iwi5039h/nnUNet_data/RESULTS_FOLDER"
-
-cd nnunet_glacer
-pwd
-conda activate nnunet
-
-#python3 nnunet/dataset_conversion/Task504_Glacier_mtl_recon.py -data_percentage 100 -base $data_raw
-#python3 nnunet/experiment_planning/nnUNet_plan_and_preprocess.py -t 504 -pl3d None -pl2d ExperimentPlanner2D_mtl
-
-python3 nnunet/run/run_training.py 2d nnUNetTrainerMTLearly_boundary 505 3 -p nnUNetPlans_mtl --disable_postprocessing_on_folds
-python3 nnunet/inference/predict_simple.py -i $nnUNet_raw_data_base/nnUNet_raw_data/Task505_Glacier_mtl_boundary/imagesTs -o $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_3 -t 505 -m 2d -f 3 -p nnUNetPlans_mtl -tr nnUNetTrainerMTLearly_boundary
-python3 nnunet/dataset_conversion/Task505_Glacier_mtl_recon_reverse.py -i $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_3
-python3 ./evaluate_nnUNet.py --predictions $RESULTS_FOLDER/test_predictions/Task505_Glacier_mtl_boundary/early/fold_3/pngs --labels_fronts $data_raw/fronts/test --labels_zones $data_raw/zones/test --sar_images $data_raw/sar_images/test
diff --git a/spaces/huggan/projected_gan_art/README.md b/spaces/huggan/projected_gan_art/README.md
deleted file mode 100644
index a91d24dd7a0df80cf878952ff08b4ba04b014a3f..0000000000000000000000000000000000000000
--- a/spaces/huggan/projected_gan_art/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Projected_GAN_art
-emoji: 📈
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/preload-helper-359634c4.js b/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/preload-helper-359634c4.js
deleted file mode 100644
index 6a7b0409c22ed53da59d2a4abee5cfef49b4cb69..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/preload-helper-359634c4.js
+++ /dev/null
@@ -1 +0,0 @@
-import{s as m,E as f}from"./index-86f4d6c3.js";const c=[];function g(s,l=f){let o;const e=new Set;function i(r){if(m(s,r)&&(s=r,o)){const a=!c.length;for(const t of e)t[1](),c.push(t,s);if(a){for(let t=0;t{e.delete(t),e.size===0&&(o(),o=null)}}return{set:i,update:u,subscribe:n}}let b="",d="";function E(s){b=s.base,d=s.assets||b}const _="modulepreload",h={},p="/static/_app/immutable/",S=function(l,o){return!o||o.length===0?l():Promise.all(o.map(e=>{if(e=`${p}${e}`,e in h)return;h[e]=!0;const i=e.endsWith(".css"),u=i?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${e}"]${u}`))return;const n=document.createElement("link");if(n.rel=i?"stylesheet":_,i||(n.as="script",n.crossOrigin=""),n.href=e,document.head.appendChild(n),i)return new Promise((r,a)=>{n.addEventListener("load",r),n.addEventListener("error",()=>a(new Error(`Unable to preload CSS for ${e}`)))})})).then(()=>l())};export{S as _,d as a,b,E as s,g as w};
diff --git a/spaces/huggingface/data-measurements-tool/README.md b/spaces/huggingface/data-measurements-tool/README.md
deleted file mode 100644
index 1a599814c90be0267e004a7c898b65d1b5619331..0000000000000000000000000000000000000000
--- a/spaces/huggingface/data-measurements-tool/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-title: DataMeasurementsTool
-emoji: 🤗
-colorFrom: indigo
-colorTo: red
-sdk: streamlit
-sdk_version: 1.0.0
-app_file: app.py
-pinned: false
-python_version: 3.9.6
----
-
-# Data Measurements Tool
-
-🚧 Doing Construction - Link Below Not Synced Yet 🚧
-
-[](https://huggingface.co/spaces/huggingface/data-measurements-tool)
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : `1.0.0`
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/hysts/1adrianb-face-alignment/app.py b/spaces/hysts/1adrianb-face-alignment/app.py
deleted file mode 100644
index e3906ae1da40fbff7f26a873cd874ea211da242f..0000000000000000000000000000000000000000
--- a/spaces/hysts/1adrianb-face-alignment/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import functools
-import pathlib
-
-import cv2
-import face_alignment
-import gradio as gr
-import numpy as np
-import torch
-
-TITLE = 'face-alignment'
-DESCRIPTION = 'https://github.com/1adrianb/face-alignment'
-
-MAX_IMAGE_SIZE = 1800
-
-
-def detect(
- image: np.ndarray,
- detector,
- device: torch.device,
-) -> np.ndarray:
- landmarks, _, boxes = detector.get_landmarks(image, return_bboxes=True)
- if landmarks is None:
- return image
-
- res = image.copy()
- for pts, box in zip(landmarks, boxes):
- box = np.round(box[:4]).astype(int)
- cv2.rectangle(res, tuple(box[:2]), tuple(box[2:]), (0, 255, 0), 2)
- tl = pts.min(axis=0)
- br = pts.max(axis=0)
- size = (br - tl).max()
- radius = max(2, int(3 * size / 256))
- for pt in np.round(pts).astype(int):
- cv2.circle(res, tuple(pt), radius, (0, 255, 0), cv2.FILLED)
- return res
-
-
-device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
-detector = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D,
- device=device.type)
-fn = functools.partial(detect, detector=detector, device=device)
-
-image_paths = sorted(pathlib.Path('images').glob('*.jpg'))
-examples = [[path.as_posix()] for path in image_paths]
-
-gr.Interface(
- fn=fn,
- inputs=gr.Image(label='Input', type='numpy'),
- outputs=gr.Image(label='Output', type='numpy'),
- examples=examples,
- title=TITLE,
- description=DESCRIPTION,
-).queue().launch()
diff --git a/spaces/hysts/ControlNet/app_seg.py b/spaces/hysts/ControlNet/app_seg.py
deleted file mode 100644
index 04f4a4afe6e7e20c98d6a62860bd0f9e5cc65aaf..0000000000000000000000000000000000000000
--- a/spaces/hysts/ControlNet/app_seg.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_seg2image.py
-# The original license file is LICENSE.ControlNet in this repo.
-import gradio as gr
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- gr.Markdown('## Control Stable Diffusion with Segmentation Maps')
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type='numpy')
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button(label='Run')
- with gr.Accordion('Advanced options', open=False):
- is_segmentation_map = gr.Checkbox(
- label='Is segmentation map', value=False)
- num_samples = gr.Slider(label='Images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image Resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- detect_resolution = gr.Slider(
- label='Segmentation Resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- num_steps = gr.Slider(label='Steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance Scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=-1,
- maximum=2147483647,
- step=1,
- randomize=True)
- a_prompt = gr.Textbox(
- label='Added Prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative Prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output',
- show_label=False,
- elem_id='gallery').style(grid=2,
- height='auto')
- inputs = [
- input_image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- detect_resolution,
- num_steps,
- guidance_scale,
- seed,
- is_segmentation_map,
- ]
- prompt.submit(fn=process, inputs=inputs, outputs=result)
- run_button.click(fn=process,
- inputs=inputs,
- outputs=result,
- api_name='seg')
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model()
- demo = create_demo(model.process_seg)
- demo.queue().launch()
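
`create_demo` only builds the UI; the actual ControlNet inference is injected as the `process` callable, whose positional signature has to match the `inputs` list wired up above. A hypothetical stand-in, useful for testing the layout without a model or GPU:

```python
# Hypothetical stand-in for model.process_seg; it matches the `inputs` ordering above.
def fake_process(input_image, prompt, a_prompt, n_prompt, num_samples,
                 image_resolution, detect_resolution, num_steps,
                 guidance_scale, seed, is_segmentation_map):
    # A real implementation runs ControlNet; this just echoes the input image.
    return [input_image] * num_samples

demo = create_demo(fake_process, max_images=4, default_num_images=1)
# demo.queue().launch()
```
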
diff --git a/spaces/hzy123/bingo/src/lib/hooks/use-bing.ts b/spaces/hzy123/bingo/src/lib/hooks/use-bing.ts
deleted file mode 100644
index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000
--- a/spaces/hzy123/bingo/src/lib/hooks/use-bing.ts
+++ /dev/null
@@ -1,173 +0,0 @@
-'use client'
-
-import { useState, useCallback, useEffect, useMemo } from 'react'
-import { useAtom, useAtomValue } from 'jotai'
-import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state'
-import { setConversationMessages } from './chat-history'
-import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types'
-import { nanoid } from '../utils'
-import { TTS } from '../bots/bing/tts'
-
-export function useBing(botId: BotId = 'bing') {
- const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId])
- const [enableTTS] = useAtom(voiceAtom)
- const speaker = useMemo(() => new TTS(), [])
- const [hash, setHash] = useAtom(hashAtom)
- const bingConversationStyle = useAtomValue(bingConversationStyleAtom)
- const [chatState, setChatState] = useAtom(chatAtom)
- const [input, setInput] = useState('')
- const [attachmentList, setAttachmentList] = useState<FileItem[]>([])
-
- const updateMessage = useCallback(
- (messageId: string, updater: (message: ChatMessageModel) => void) => {
- setChatState((draft) => {
- const message = draft.messages.find((m) => m.id === messageId)
- if (message) {
- updater(message)
- }
- })
- },
- [setChatState],
- )
-
- const sendMessage = useCallback(
- async (input: string, options = {}) => {
- const botMessageId = nanoid()
- const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined
- setChatState((draft) => {
- const text = imageUrl ? `${input}\n\n` : input
- draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' })
- setAttachmentList([])
- })
- const abortController = new AbortController()
- setChatState((draft) => {
- draft.generatingMessageId = botMessageId
- draft.abortController = abortController
- })
- speaker.reset()
- await chatState.bot.sendMessage({
- prompt: input,
- imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? `https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl,
- options: {
- ...options,
- bingConversationStyle,
- },
- signal: abortController.signal,
- onEvent(event) {
- if (event.type === 'UPDATE_ANSWER') {
- updateMessage(botMessageId, (message) => {
- if (event.data.text.length > message.text.length) {
- message.text = event.data.text
- }
-
- if (event.data.spokenText && enableTTS) {
- speaker.speak(event.data.spokenText)
- }
-
- message.throttling = event.data.throttling || message.throttling
- message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions
- message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses
- })
- } else if (event.type === 'ERROR') {
- updateMessage(botMessageId, (message) => {
- message.error = event.error
- })
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- } else if (event.type === 'DONE') {
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- })
- }
- },
- })
- },
- [botId, attachmentList, chatState.bot, setChatState, updateMessage],
- )
-
- const uploadImage = useCallback(async (imgUrl: string) => {
- setAttachmentList([{ url: imgUrl, status: 'loading' }])
- const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle)
- if (response?.blobId) {
- setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }])
- } else {
- setAttachmentList([{ url: imgUrl, status: 'error' }])
- }
- }, [chatState.bot])
-
- const resetConversation = useCallback(() => {
- chatState.bot.resetConversation()
- speaker.abort()
- setChatState((draft) => {
- draft.abortController = undefined
- draft.generatingMessageId = ''
- draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }]
- draft.conversationId = nanoid()
- })
- }, [chatState.bot, setChatState])
-
- const stopGenerating = useCallback(() => {
- chatState.abortController?.abort()
- if (chatState.generatingMessageId) {
- updateMessage(chatState.generatingMessageId, (message) => {
- if (!message.text && !message.error) {
- message.text = 'Cancelled'
- }
- })
- }
- setChatState((draft) => {
- draft.generatingMessageId = ''
- })
- }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage])
-
- useEffect(() => {
- if (chatState.messages.length) {
- setConversationMessages(botId, chatState.conversationId, chatState.messages)
- }
- }, [botId, chatState.conversationId, chatState.messages])
-
- useEffect(() => {
- if (hash === 'reset') {
- resetConversation()
- setHash('')
- }
- }, [hash, setHash])
-
- const chat = useMemo(
- () => ({
- botId,
- bot: chatState.bot,
- isSpeaking: speaker.isSpeaking,
- messages: chatState.messages,
- sendMessage,
- setInput,
- input,
- resetConversation,
- generating: !!chatState.generatingMessageId,
- stopGenerating,
- uploadImage,
- setAttachmentList,
- attachmentList,
- }),
- [
- botId,
- bingConversationStyle,
- chatState.bot,
- chatState.generatingMessageId,
- chatState.messages,
- speaker.isSpeaking,
- setInput,
- input,
- setAttachmentList,
- attachmentList,
- resetConversation,
- sendMessage,
- stopGenerating,
- ],
- )
-
- return chat
-}
diff --git a/spaces/inamXcontru/PoeticTTS/Battlefield 3 Highly Compressed Pc Games 573 Mb How to Install and Run.md b/spaces/inamXcontru/PoeticTTS/Battlefield 3 Highly Compressed Pc Games 573 Mb How to Install and Run.md
deleted file mode 100644
index 14eba6d321360fc0872c8c6bd1fb3a5a142b9d45..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Battlefield 3 Highly Compressed Pc Games 573 Mb How to Install and Run.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Battlefield 3 Highly Compressed PC Game is a first-person shooter game that was developed by EA DICE, and published by Electronic Arts. This installment was released on 25 October 2011. We have played many Highly compressed pc games but never ever played Battlefield 3 game so you can also get it free from here. This installment is full of fun, High-Quality graphics, and an awesome sound system.
A lot of game lovers already playing this game on PlayStation 3, Xbox 360, Microsoft Windows, and another well-known operating system. There are a lot of followers of this game on social media websites like Facebook, Twitter, Instagram, etc. All the followers are already playing this game and enjoy leaving positive reviews about its features. There are also many websites that are giving you this game but this website gives you a 100% working link for Battlefield 3 highly compressed game. This game is popular all over the world so you can get it from here with a single link. You can also get Streets of Rage 4 Highly Compressed PC Game
-
Battlefield 3 Full Game is very popular songs. You can get also this game in google by Battlefield 3 PC Game Free Download, Battlefield 3 Free download full version for pc, Battlefield 3 Download free full version, Battlefield 3 free download full version for pc with crack, Download battlefield 3 highly compressed, Battlefield 3 download android, Battlefield 3 free download full version for android, Battlefield 3 pc, Battlefield 3 download, Battlefield 3 highly compressed pc games (573 MB), BF3 video download free, Battlefield 3 Direct download link keywords.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Call Of Duty Black Ops 2 Highly Compressedl The Ultimate Guide to Download and Install the Game.md b/spaces/inamXcontru/PoeticTTS/Call Of Duty Black Ops 2 Highly Compressedl The Ultimate Guide to Download and Install the Game.md
deleted file mode 100644
index a0b7afe3ef86c318e159fe721145b3289ef28601..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Call Of Duty Black Ops 2 Highly Compressedl The Ultimate Guide to Download and Install the Game.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Each operator shall prohibit a covered employee from using alcohol within four hours prior to performing covered functions, or, if an employee is called to duty to respond to an emergency, within the time period after the employee has been notified to report for duty. No operator having actual knowledge that a covered employee has used alcohol within four hours prior to performing covered functions or within the time period after the employee has been notified to report for duty shall permit that covered employee to perform or continue to perform covered functions.
Foucault has theorized that the political, including political economy, is war by other means and identified this logic as originating in racially polarized social formations (Foucault 2003). The predicating and predicated violence of the state and its agents was never explicitly interrogated by the TRC for what light it could shed on the organizational mentalities and subject making practices of economic apartheid as a war machine in itself. This opens, not the ethical question of means and ends, but rather the question of what autonomous political means does it take to impose and support subjugating economic media. Separating the violence of political economy from the political economy of violence historically and ethically divorced the limitless movement of capital under apartheid from the correlative infinitization of its violence against persons of color as the striated substance of this self-moving circulatory apparatus.[12]I have analyzed elsewhere how regimes of labor discipline in colonial and post-colonial South Africa deployed the body of color as an interchangeable economic and fetishized substance, the worked-upon body of color was the substrate for the interior chain of signifiers that constituted a racialized economimesis. With the emerging threat posed to capitalist labor discipline by the struggle institutions economimesis evolved into a counterinsurgency theater of structural nostalgia that restaged subjugated bodies of color through pain and disfigurement. In this theater the now threatened political economy was violently reenacted through the allegorical re-inscription of labor discipline onto the recalcitrant black body, both individual and collective. I do not seek to pose racism as the reductive truth about the violence of the apartheid state but to stress that like the infinite productivity of apartheid capitalism, which entailed the capitalization of race and the racialization of capital, state terror was as committed to the (re)production of racial subjects as it was to securing the economy of racial exploitation.[13]
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Antilog Table Pdf TOP Free Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Antilog Table Pdf TOP Free Download.md
deleted file mode 100644
index 6d8fd84922df200af6937ef65cbcf171281b0735..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Antilog Table Pdf TOP Free Download.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-Nishkramana (Sanskrit: निष्क्रमण, Niṣkramaṇa) (literally, "going out") is the sixth of the 16 samskaras (sacraments) practiced by Hindus. Nishkramana is an ancient Hindu rite in which an infant is taken out of the house for the first time, traditionally in the fourth month after birth.
-This is also referred to as Nishkramana.
-The ritual was associated with the coronation ceremony, and is considered an important part of the Hindu ritual of Vinapa.
-Nishkramana, like Nishkama, is ultimately a ceremony of entry into the spiritual life. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Adjustment Program Epson Sx115 VERIFIED.md b/spaces/inreVtussa/clothingai/Examples/Adjustment Program Epson Sx115 VERIFIED.md
deleted file mode 100644
index 5503d46859bde44b5a28d25a2b88a16f4a7c2b94..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Adjustment Program Epson Sx115 VERIFIED.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-"adjustment program epson sx115 w" Posted on 03/29/2020 08:07
-Current as of today (03/29/2020 08:07): adjustment program epson sx115w CISS Epson XP 315 XP 325 XP 435 XP 425 XP 455.
-Programs for resetting Epson waste ink pad counters, free download.
-Adjustment program Epson.
-Epson adjustment program download Epson Adjustment.
-Epson Adjustment Program CISS Forum.
-Download a program for resetting waste ink pad counters on Epson inkjet printers.
-Epson Adjustment Program Free Download RU.
-Epson Adjustment Program. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe Media Encoder CC 2019 13.0.0 (x64) Crack Download.md b/spaces/inreVtussa/clothingai/Examples/Adobe Media Encoder CC 2019 13.0.0 (x64) Crack Download.md
deleted file mode 100644
index 0fe64c965ccf6a2cf0519a10c88febfa12e25b9a..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Adobe Media Encoder CC 2019 13.0.0 (x64) Crack Download.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
How to Download Julie 2 Movie with English Subtitles Using Torrent
-
Julie 2 is a 2017 Indian thriller drama film starring Raai Laxmi, Ravi Kishan, Aditya Srivastava, and Pankaj Tripathi. The film follows the journey of a simple aspiring actress who makes it big in a male-dominated industry, but faces many challenges and dangers along the way.
-
Adobe Media Encoder CC 2019 13.0.0 (x64) Crack download
If you want to watch Julie 2 movie with English subtitles, you can use torrent to download it. Torrent is a peer-to-peer file-sharing protocol that allows you to download large files from multiple sources. However, torrenting can also be risky and illegal in some countries, so you should always use a VPN (virtual private network) to protect your privacy and security.
-
Here are the steps to download Julie 2 movie with English subtitles using torrent:
-
Step 1: Find a Reliable Torrent Site
-
The first step is to find a reliable torrent site that has Julie 2 movie with English subtitles. There are many torrent sites on the web, but some of them may be fake, malicious, or blocked by your ISP (internet service provider). You should always check the reviews and ratings of the torrent sites before using them.
-
Some of the popular torrent sites that may have Julie 2 movie with English subtitles are:
-
-
-
The Pirate Bay
-
1337x
-
RARBG
-
LimeTorrents
-
Torrentz2
-
-
Step 2: Download a Torrent Client
-
The next step is to download a torrent client that can handle the torrent files. A torrent client is a software that connects you to other peers who have the same file and downloads it in small pieces. You should always use a reputable and updated torrent client to avoid malware and viruses.
-
Some of the popular torrent clients that you can use are:
-
-
uTorrent
-
BitTorrent
-
qBittorrent
-
Vuze
-
Deluge
-
-
Step 3: Search for Julie 2 Movie with English Subtitles on the Torrent Site
-
The third step is to search for Julie 2 movie with English subtitles on the torrent site that you have chosen. You can use the search bar or browse through the categories to find the movie. You should always look for torrents that have high seeders (uploaders) and leechers (downloaders), as they indicate the popularity and availability of the file.
-
You should also check the comments and feedback of the torrents to see if they are genuine and have good quality. You can also use filters to narrow down your search results by file size, video quality, audio quality, language, etc.
-
Step 4: Download Julie 2 Movie with English Subtitles Torrent File
-
The fourth step is to download Julie 2 movie with English subtitles torrent file from the torrent site. A torrent file is a small file that contains information about the larger file that you want to download. You can download it by clicking on the download button or magnet link on the torrent site.
-
You should always scan the torrent file with an antivirus software before opening it. You should also make sure that you have enough disk space and bandwidth to download the movie.
-
Step 5: Open Julie 2 Movie with English Subtitles Torrent File with Your Torrent Client
-
The final step is to open Julie 2 movie with English subtitles torrent file with your torrent client. This will start the downloading process of the movie from other peers who have it. You can monitor the progress and speed of the download on your torrent client.
-
You should always seed (upload) the movie after downloading it to help other peers who want to download it. You should also delete the torrent file after downloading the movie to save disk space.
-
Conclusion
-
Julie 2 movie with English subtitles is a thrilling drama that you can watch online using torrent. However, you should always be careful and responsible when using torrent, as it can expose you to legal and security risks. You should always use a VPN to protect your identity and data, and only download torrents from trusted sources.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Clave Para Activar Spy Hunter 4.md b/spaces/inreVtussa/clothingai/Examples/Clave Para Activar Spy Hunter 4.md
deleted file mode 100644
index 3381ee5555b357109200be988c4d76c9c624d0ff..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Clave Para Activar Spy Hunter 4.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
-password and email to activate spyhunter 4 42 DOWNLOAD : b40a4b9566.rar [1.34 MB] (downloads: 62) · Download from . SpyHunter 4.42.
-SpyHunter - Scanning Software and Download SpyHunter 4.2.
-How to uninstall a program like SpyHunter?
-How to uninstall a program from a computer?
-SpyHunter - free download.
-How to remove Spyhunter without damaging the system.
-Download SpyHunter 4.42 SpyHunter is a program that allows you to remove .
-SpyHunter - Scanning Software and Download SpyHunter 4.42.
-How to uninstall a program like SpyHunter? 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Descarga Gratis Autodata 2010 13.md b/spaces/inreVtussa/clothingai/Examples/Descarga Gratis Autodata 2010 13.md
deleted file mode 100644
index ee137bf8ed48fe9151c63f3cd5bf78e95c3461c5..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Descarga Gratis Autodata 2010 13.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Descarga Gratis Autodata 2010 13l. Disciplines. Accounting. Publication Date. November 8, 1967. Citation Information. Mike Parmelee. "Mixamo Fuse Universal ... 1fdad05405
-
-
-
diff --git a/spaces/j0hngou/vision-diffmask/code/datamodules/image_classification.py b/spaces/j0hngou/vision-diffmask/code/datamodules/image_classification.py
deleted file mode 100644
index 5c16a0c1b9a0c347ee8b144ee4d475f8f874bd29..0000000000000000000000000000000000000000
--- a/spaces/j0hngou/vision-diffmask/code/datamodules/image_classification.py
+++ /dev/null
@@ -1,44 +0,0 @@
-from .base import ImageDataModule
-from torch.utils.data import random_split
-from torchvision.datasets import MNIST, CIFAR10
-from typing import Optional
-
-
-class MNISTDataModule(ImageDataModule):
- """Datamodule for the MNIST dataset."""
-
- def prepare_data(self):
- # Download MNIST
- MNIST(self.data_dir, train=True, download=True)
- MNIST(self.data_dir, train=False, download=True)
-
- def setup(self, stage: Optional[str] = None):
- # Set the training and validation data
- if stage == "fit" or stage is None:
- mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
- self.train_data, self.val_data = random_split(mnist_full, [55000, 5000])
-
- # Set the test data
- if stage == "test" or stage is None:
- self.test_data = MNIST(self.data_dir, train=False, transform=self.transform)
-
-
-class CIFAR10DataModule(ImageDataModule):
- """Datamodule for the CIFAR10 dataset."""
-
- def prepare_data(self):
- # Download CIFAR10
- CIFAR10(self.data_dir, train=True, download=True)
- CIFAR10(self.data_dir, train=False, download=True)
-
- def setup(self, stage: Optional[str] = None):
- # Set the training and validation data
- if stage == "fit" or stage is None:
- cifar10_full = CIFAR10(self.data_dir, train=True, transform=self.transform)
- self.train_data, self.val_data = random_split(cifar10_full, [45000, 5000])
-
- # Set the test data
- if stage == "test" or stage is None:
- self.test_data = CIFAR10(
- self.data_dir, train=False, transform=self.transform
- )
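
The two datamodules in the file removed above follow the usual PyTorch Lightning pattern: prepare_data downloads the dataset once, and setup builds the per-stage splits. As a rough, hypothetical sketch of how such a module is normally driven — the import path mirrors the deleted repo layout, and the constructor arguments (data_dir, batch_size) are assumptions about the ImageDataModule base class, which is not shown here:

# Hypothetical driver for the CIFAR10DataModule defined above. The import path
# and constructor arguments are assumptions for illustration only.
import pytorch_lightning as pl
from datamodules.image_classification import CIFAR10DataModule

def train_and_test(model: pl.LightningModule) -> None:
    dm = CIFAR10DataModule(data_dir="./data", batch_size=64)
    trainer = pl.Trainer(max_epochs=5, accelerator="auto")
    trainer.fit(model, datamodule=dm)    # triggers prepare_data() and setup("fit")
    trainer.test(model, datamodule=dm)   # triggers setup("test")
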
diff --git a/spaces/jackrui/diff-amp-AMP_Sequence_Detector/app.py b/spaces/jackrui/diff-amp-AMP_Sequence_Detector/app.py
deleted file mode 100644
index 2f50c6b2afc7be870b345b19791761c53fe85e13..0000000000000000000000000000000000000000
--- a/spaces/jackrui/diff-amp-AMP_Sequence_Detector/app.py
+++ /dev/null
@@ -1,114 +0,0 @@
-import numpy as np
-from transformers import AutoTokenizer, AutoModelForSequenceClassification, EsmForSequenceClassification
-from transformers import set_seed
-import torch
-import torch.nn as nn
-import warnings
-from tqdm import tqdm
-import gradio as gr
-
-warnings.filterwarnings('ignore')
-device = "cpu"
-model_checkpoint1 = "facebook/esm2_t12_35M_UR50D"
-tokenizer = AutoTokenizer.from_pretrained(model_checkpoint1)
-
-
-class MyModel(nn.Module):
- def __init__(self):
- super().__init__()
- self.bert1 = EsmForSequenceClassification.from_pretrained(model_checkpoint1, num_labels=3000)#3000
- # for param in self.bert1.parameters():
- # param.requires_grad = False
- self.bn1 = nn.BatchNorm1d(256)
- self.bn2 = nn.BatchNorm1d(128)
- self.bn3 = nn.BatchNorm1d(64)
- self.relu = nn.LeakyReLU()
- self.fc1 = nn.Linear(3000, 256)
- self.fc2 = nn.Linear(256, 128)
- self.fc3 = nn.Linear(128, 64)
- self.output_layer = nn.Linear(64, 2)
- self.dropout = nn.Dropout(0.3) # 0.3
-
- def forward(self, x):
- with torch.no_grad():
- bert_output = self.bert1(input_ids=x['input_ids'],
- attention_mask=x['attention_mask'])
- # output_feature = bert_output["logits"]
- # print(output_feature.size())
- # output_feature = self.bn1(self.fc1(output_feature))
- # output_feature = self.bn2(self.fc1(output_feature))
- # output_feature = self.relu(self.bn3(self.fc3(output_feature)))
- # output_feature = self.dropout(self.output_layer(output_feature))
- output_feature = self.dropout(bert_output["logits"])
- output_feature = self.dropout(self.relu(self.bn1(self.fc1(output_feature))))
- output_feature = self.dropout(self.relu(self.bn2(self.fc2(output_feature))))
- output_feature = self.dropout(self.relu(self.bn3(self.fc3(output_feature))))
- output_feature = self.dropout(self.output_layer(output_feature))
- # return torch.sigmoid(output_feature),output_feature
- return torch.softmax(output_feature, dim=1)
-
-
-def AMP(test_sequences, model):
-    # Keep the AMP function unchanged; it only processes the incoming test_sequences data
- max_len = 18
- test_data = tokenizer(test_sequences, max_length=max_len, padding="max_length", truncation=True,
- return_tensors='pt')
- model = model.to(device)
- model.eval()
- out_probability = []
- with torch.no_grad():
- predict = model(test_data)
- out_probability.extend(np.max(np.array(predict.cpu()), axis=1).tolist())
- test_argmax = np.argmax(predict.cpu(), axis=1).tolist()
- id2str = {0: "non-AMP", 1: "AMP"}
- return id2str[test_argmax[0]], out_probability[0]
-
-
-def classify_sequence(sequence):
- # Check if the sequence is a valid amino acid sequence and has a length of at least 3
- valid_amino_acids = set("ACDEFGHIKLMNPQRSTVWY")
- sequence = sequence.upper()
-
- if all(aa in valid_amino_acids for aa in sequence) and len(sequence) >= 3:
- result, probability = AMP(sequence, model)
- return "yes" if result == "AMP" else "no"
- else:
- return "Invalid Sequence"
-
-# Load the model
-model = MyModel()
-model.load_state_dict(torch.load("best_model.pth", map_location=torch.device('cpu')),strict=False)
-
-
-if __name__ == "__main__":
- with gr.Blocks() as demo:
- gr.Markdown(
- """
-
- # Welcome to Antimicrobial Peptide Recognition Model
- This is an antimicrobial peptide recognition model derived from Diff-AMP, which is a branch of a comprehensive system integrating generation, recognition, and optimization. In this recognition model, you can simply input a sequence, and it will predict whether it is an antimicrobial peptide. Due to limited website capacity, we can only perform simple predictions.
- If you require large-scale computations, please contact my email at wangrui66677@gmail.com. Feel free to reach out if you have any questions or inquiries.
-
- """)
-
-        # Add example inputs and outputs
- examples = [
- ["KLLKKLLKLWKKLLKKLK"],
- ["FLGLLFHGVHHVGKWIHGLIHGHH"],
- ["GLMSTLKGAATNAAVTLLNKLQCKLTGTC"]
- ]
-
-        # Create the Gradio interface with the styling and examples applied
- iface = gr.Interface(
- fn=classify_sequence,
- inputs="text",
- outputs="text",
- title="AMP Sequence Detector",
- examples=examples
- )
-
-
- demo.launch()
-
-
-
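
Since everything in the deleted app funnels through classify_sequence, a quick non-Gradio sanity check could be appended to the same script. This is a sketch only; it assumes the weights file best_model.pth is present, exactly as the app itself does, and must run in the same process as the code above:

# Hedged smoke test for the classify_sequence function defined above.
# It reuses the already-loaded tokenizer, model, and best_model.pth weights,
# so it only works when placed in the same script as the code shown in the diff.
for seq in ["KLLKKLLKLWKKLLKKLK", "GLMSTLKGAATNAAVTLLNKLQCKLTGTC", "XYZ123"]:
    print(f"{seq:>32} -> {classify_sequence(seq)}")
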
diff --git a/spaces/jbilcke-hf/MusicGen/tests/data/test_audio.py b/spaces/jbilcke-hf/MusicGen/tests/data/test_audio.py
deleted file mode 100644
index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/MusicGen/tests/data/test_audio.py
+++ /dev/null
@@ -1,239 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import random
-
-import numpy as np
-import torch
-import torchaudio
-
-from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestInfo(TempDirMixin):
-
- def test_info_mp3(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- wav = get_white_noise(ch, int(sample_rate * duration))
- path = self.get_temp_path('sample_wav.mp3')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- # we cannot trust torchaudio for num_frames, so we don't check
-
- def _test_info_format(self, ext: str):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'sample_wav{ext}')
- save_wav(path, wav, sample_rate)
- info = audio_info(path)
- assert info.sample_rate == sample_rate
- assert info.channels == ch
- assert np.isclose(info.duration, duration, atol=1e-5)
-
- def test_info_wav(self):
- self._test_info_format('.wav')
-
- def test_info_flac(self):
- self._test_info_format('.flac')
-
- def test_info_ogg(self):
- self._test_info_format('.ogg')
-
- def test_info_m4a(self):
- # TODO: generate m4a file programmatically
- # self._test_info_format('.m4a')
- pass
-
-
-class TestRead(TempDirMixin):
-
- def test_read_full_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == wav.shape[1]
- assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04)
-
- def test_read_partial_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = torch.rand(1).item()
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- read_wav, read_sr = audio_read(path, 0, read_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- read_wav, read_sr = audio_read(path, seek_time, read_duration)
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == expected_frames
- assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
-
- def test_read_seek_time_wav_padded(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- read_duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- read_frames = int(sample_rate * read_duration)
- wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
- path = self.get_temp_path('sample_wav.wav')
- save_wav(path, wav, sample_rate)
- seek_time = torch.rand(1).item()
- seek_frames = int(sample_rate * seek_time)
- expected_frames = n_frames - seek_frames
- read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True)
- expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[1] == read_frames
- assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
- assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav)
-
-
-class TestAvRead(TempDirMixin):
-
- def test_avread_seek_base(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 2.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a full duration segment in the file
- seek_time = random.uniform(0.0, 1.0)
- seek_duration = random.uniform(0.001, 1.0)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == int(seek_duration * sample_rate)
-
- def test_avread_seek_partial(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- for _ in range(100):
- # seek will always load a partial segment
- seek_time = random.uniform(0.5, 1.)
- seek_duration = 1.
- expected_num_frames = n_frames - int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, seek_duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == expected_num_frames
-
- def test_avread_seek_outofbound(self):
- sample_rates = [8000, 16_000]
- channels = [1, 2]
- duration = 1.
- for sample_rate, ch in product(sample_rates, channels):
- n_frames = int(sample_rate * duration)
- wav = get_white_noise(ch, n_frames)
- path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = 1.5
- read_wav, read_sr = _av_read(path, seek_time, 1.)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == 0
-
- def test_avread_seek_edge(self):
- sample_rates = [8000, 16_000]
- # some of these values will have
- # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1)
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- duration = frames / sample_rate
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav')
- save_wav(path, wav, sample_rate)
- seek_time = (frames - 1) / sample_rate
- seek_frames = int(seek_time * sample_rate)
- read_wav, read_sr = _av_read(path, seek_time, duration)
- assert read_sr == sample_rate
- assert read_wav.shape[0] == wav.shape[0]
- assert read_wav.shape[-1] == (frames - seek_frames)
-
-
-class TestAudioWrite(TempDirMixin):
-
- def test_audio_write_wav(self):
- torch.manual_seed(1234)
- sample_rates = [8000, 16_000]
- n_frames = [1000, 1001, 1002]
- channels = [1, 2]
- strategies = ["peak", "clip", "rms"]
- formats = ["wav", "mp3"]
- for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
- for format_, strategy in product(formats, strategies):
- wav = get_white_noise(ch, frames)
- path = self.get_temp_path(f'pred_{sample_rate}_{ch}')
- audio_write(path, wav, sample_rate, format_, strategy=strategy)
- read_wav, read_sr = torchaudio.load(f'{path}.{format_}')
- if format_ == "wav":
- assert read_wav.shape == wav.shape
-
- if format_ == "wav" and strategy in ["peak", "rms"]:
- rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max()
- # for a Gaussian, the typical max scale will be less than ~5x the std.
- # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that.
- # For RMS target, rescaling leaves more headroom by default, leading
- # to a 20x rescaling typically
- atol = (5 if strategy == "peak" else 20) / 2**15
- delta = (rescaled_read_wav - wav).abs().max()
- assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol)
- formats = ["wav"] # faster unit tests
diff --git a/spaces/jbilcke-hf/VideoQuest/tailwind.config.js b/spaces/jbilcke-hf/VideoQuest/tailwind.config.js
deleted file mode 100644
index ce2783d5277b5c05378042e0a47eed675e99b606..0000000000000000000000000000000000000000
--- a/spaces/jbilcke-hf/VideoQuest/tailwind.config.js
+++ /dev/null
@@ -1,46 +0,0 @@
-/** @type {import('tailwindcss').Config} */
-module.exports = {
- darkMode: ["class"],
- content: [
- './pages/**/*.{ts,tsx}',
- './components/**/*.{ts,tsx}',
- './app/**/*.{ts,tsx}',
- './src/**/*.{ts,tsx}',
- './src/lib/fonts.ts'
- ],
- theme: {
- container: {
- center: true,
- padding: "2rem",
- screens: {
- "2xl": "1400px",
- },
- },
- extend: {
- fontFamily: {
- sans: ['var(--font-inter)'],
- edu: ['var(--font-edu)'],
- orbitron: ['var(--font-orbitron)'],
- amatic: ['var(--font-amatic)'],
- macondo: ['var(--font-macondo)'],
- imfell: ['var(--font-imfell)'],
- lugrasimo: ['var(--font-lugrasimo)'],
- },
- keyframes: {
- "accordion-down": {
- from: { height: 0 },
- to: { height: "var(--radix-accordion-content-height)" },
- },
- "accordion-up": {
- from: { height: "var(--radix-accordion-content-height)" },
- to: { height: 0 },
- },
- },
- animation: {
- "accordion-down": "accordion-down 0.2s ease-out",
- "accordion-up": "accordion-up 0.2s ease-out",
- },
- },
- },
- plugins: [require("tailwindcss-animate")],
-}
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_EKSBlowfish.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_EKSBlowfish.py
deleted file mode 100644
index a844fae43d092a23c8a9d2eecf8caa4493433579..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Cipher/_EKSBlowfish.py
+++ /dev/null
@@ -1,131 +0,0 @@
-# ===================================================================
-#
-# Copyright (c) 2019, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-import sys
-
-from Crypto.Cipher import _create_cipher
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer, c_size_t,
- c_uint8_ptr, c_uint)
-
-_raw_blowfish_lib = load_pycryptodome_raw_lib(
- "Crypto.Cipher._raw_eksblowfish",
- """
- int EKSBlowfish_start_operation(const uint8_t key[],
- size_t key_len,
- const uint8_t salt[16],
- size_t salt_len,
- unsigned cost,
- unsigned invert,
- void **pResult);
- int EKSBlowfish_encrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int EKSBlowfish_decrypt(const void *state,
- const uint8_t *in,
- uint8_t *out,
- size_t data_len);
- int EKSBlowfish_stop_operation(void *state);
- """
- )
-
-
-def _create_base_cipher(dict_parameters):
- """This method instantiates and returns a smart pointer to
- a low-level base cipher. It will absorb named parameters in
- the process."""
-
- try:
- key = dict_parameters.pop("key")
- salt = dict_parameters.pop("salt")
- cost = dict_parameters.pop("cost")
- except KeyError as e:
- raise TypeError("Missing EKSBlowfish parameter: " + str(e))
- invert = dict_parameters.pop("invert", True)
-
- if len(key) not in key_size:
- raise ValueError("Incorrect EKSBlowfish key length (%d bytes)" % len(key))
-
- start_operation = _raw_blowfish_lib.EKSBlowfish_start_operation
- stop_operation = _raw_blowfish_lib.EKSBlowfish_stop_operation
-
- void_p = VoidPointer()
- result = start_operation(c_uint8_ptr(key),
- c_size_t(len(key)),
- c_uint8_ptr(salt),
- c_size_t(len(salt)),
- c_uint(cost),
- c_uint(int(invert)),
- void_p.address_of())
- if result:
- raise ValueError("Error %X while instantiating the EKSBlowfish cipher"
- % result)
- return SmartPointer(void_p.get(), stop_operation)
-
-
-def new(key, mode, salt, cost, invert):
- """Create a new EKSBlowfish cipher
-
- Args:
-
- key (bytes, bytearray, memoryview):
- The secret key to use in the symmetric cipher.
- Its length can vary from 0 to 72 bytes.
-
- mode (one of the supported ``MODE_*`` constants):
- The chaining mode to use for encryption or decryption.
-
- salt (bytes, bytearray, memoryview):
- The salt that bcrypt uses to thwart rainbow table attacks
-
- cost (integer):
- The complexity factor in bcrypt
-
- invert (bool):
- If ``False``, in the inner loop use ``ExpandKey`` first over the salt
- and then over the key, as defined in
- the `original bcrypt specification `_.
- If ``True``, reverse the order, as in the first implementation of
- `bcrypt` in OpenBSD.
-
- :Return: an EKSBlowfish object
- """
-
- kwargs = { 'salt':salt, 'cost':cost, 'invert':invert }
- return _create_cipher(sys.modules[__name__], key, mode, **kwargs)
-
-
-MODE_ECB = 1
-
-# Size of a data block (in bytes)
-block_size = 8
-# Size of a key (in bytes)
-key_size = range(0, 72 + 1)
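
For context, _EKSBlowfish is the cost-parameterized cipher underneath bcrypt; new() above takes the password as the key, a 16-byte salt, and a cost factor, and in practice the module is reached through Crypto.Protocol.KDF.bcrypt rather than directly. A rough sketch of direct use follows — the salt, cost, and plaintext are illustrative values, not taken from the removed code:

# Illustrative only: exercise the private _EKSBlowfish module defined above.
# ECB is the only mode it exposes; the 24-byte plaintext is a multiple of the
# 8-byte block size. Salt and cost are arbitrary example values.
from Crypto.Cipher import _EKSBlowfish
from Crypto.Random import get_random_bytes

password = b"correct horse battery staple"
salt = get_random_bytes(16)
cipher = _EKSBlowfish.new(password, _EKSBlowfish.MODE_ECB, salt, cost=8, invert=True)
ciphertext = cipher.encrypt(b"OrpheanBeholderScryDoubt")
print(ciphertext.hex())
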
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/IO/PKCS8.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/IO/PKCS8.py
deleted file mode 100644
index 18dffae35e3c97368e7925b8d635a7dfeacdaac8..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/IO/PKCS8.py
+++ /dev/null
@@ -1,239 +0,0 @@
-#
-# PublicKey/PKCS8.py : PKCS#8 functions
-#
-# ===================================================================
-#
-# Copyright (c) 2014, Legrandin
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# 1. Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# ===================================================================
-
-
-from Crypto.Util.py3compat import *
-
-from Crypto.Util.asn1 import (
- DerNull,
- DerSequence,
- DerObjectId,
- DerOctetString,
- )
-
-from Crypto.IO._PBES import PBES1, PBES2, PbesError
-
-
-__all__ = ['wrap', 'unwrap']
-
-
-def wrap(private_key, key_oid, passphrase=None, protection=None,
- prot_params=None, key_params=DerNull(), randfunc=None):
- """Wrap a private key into a PKCS#8 blob (clear or encrypted).
-
- Args:
-
- private_key (byte string):
- The private key encoded in binary form. The actual encoding is
- algorithm specific. In most cases, it is DER.
-
- key_oid (string):
- The object identifier (OID) of the private key to wrap.
- It is a dotted string, like ``1.2.840.113549.1.1.1`` (for RSA keys).
-
- passphrase (bytes string or string):
- The secret passphrase from which the wrapping key is derived.
- Set it only if encryption is required.
-
- protection (string):
- The identifier of the algorithm to use for securely wrapping the key.
- The default value is ``PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC``.
-
- prot_params (dictionary):
- Parameters for the protection algorithm.
-
- +------------------+-----------------------------------------------+
- | Key | Description |
- +==================+===============================================+
- | iteration_count | The KDF algorithm is repeated several times to|
- | | slow down brute force attacks on passwords |
- | | (called *N* or CPU/memory cost in scrypt). |
- | | The default value for PBKDF2 is 1000. |
- | | The default value for scrypt is 16384. |
- +------------------+-----------------------------------------------+
- | salt_size | Salt is used to thwart dictionary and rainbow |
- | | attacks on passwords. The default value is 8 |
- | | bytes. |
- +------------------+-----------------------------------------------+
- | block_size | *(scrypt only)* Memory-cost (r). The default |
- | | value is 8. |
- +------------------+-----------------------------------------------+
- | parallelization | *(scrypt only)* CPU-cost (p). The default |
- | | value is 1. |
- +------------------+-----------------------------------------------+
-
- key_params (DER object or None):
- The ``parameters`` field to use in the ``AlgorithmIdentifier``
- SEQUENCE. If ``None``, no ``parameters`` field will be added.
- By default, the ASN.1 type ``NULL`` is used.
-
- randfunc (callable):
- Random number generation function; it should accept a single integer
- N and return a string of random data, N bytes long.
- If not specified, a new RNG will be instantiated
- from :mod:`Crypto.Random`.
-
- Return:
- The PKCS#8-wrapped private key (possibly encrypted), as a byte string.
- """
-
- #
- # PrivateKeyInfo ::= SEQUENCE {
- # version Version,
- # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
- # privateKey PrivateKey,
- # attributes [0] IMPLICIT Attributes OPTIONAL
- # }
- #
- if key_params is None:
- algorithm = DerSequence([DerObjectId(key_oid)])
- else:
- algorithm = DerSequence([DerObjectId(key_oid), key_params])
-
- pk_info = DerSequence([
- 0,
- algorithm,
- DerOctetString(private_key)
- ])
- pk_info_der = pk_info.encode()
-
- if passphrase is None:
- return pk_info_der
-
- if not passphrase:
- raise ValueError("Empty passphrase")
-
- # Encryption with PBES2
- passphrase = tobytes(passphrase)
- if protection is None:
- protection = 'PBKDF2WithHMAC-SHA1AndDES-EDE3-CBC'
- return PBES2.encrypt(pk_info_der, passphrase,
- protection, prot_params, randfunc)
-
-
-def unwrap(p8_private_key, passphrase=None):
- """Unwrap a private key from a PKCS#8 blob (clear or encrypted).
-
- Args:
- p8_private_key (byte string):
- The private key wrapped into a PKCS#8 blob, DER encoded.
- passphrase (byte string or string):
- The passphrase to use to decrypt the blob (if it is encrypted).
-
- Return:
- A tuple containing
-
- #. the algorithm identifier of the wrapped key (OID, dotted string)
- #. the private key (byte string, DER encoded)
- #. the associated parameters (byte string, DER encoded) or ``None``
-
- Raises:
- ValueError : if decoding fails
- """
-
- if passphrase:
- passphrase = tobytes(passphrase)
-
- found = False
- try:
- p8_private_key = PBES1.decrypt(p8_private_key, passphrase)
- found = True
- except PbesError as e:
- error_str = "PBES1[%s]" % str(e)
- except ValueError:
- error_str = "PBES1[Invalid]"
-
- if not found:
- try:
- p8_private_key = PBES2.decrypt(p8_private_key, passphrase)
- found = True
- except PbesError as e:
- error_str += ",PBES2[%s]" % str(e)
- except ValueError:
- error_str += ",PBES2[Invalid]"
-
- if not found:
- raise ValueError("Error decoding PKCS#8 (%s)" % error_str)
-
- pk_info = DerSequence().decode(p8_private_key, nr_elements=(2, 3, 4, 5))
- if len(pk_info) == 2 and not passphrase:
- raise ValueError("Not a valid clear PKCS#8 structure "
- "(maybe it is encrypted?)")
-
- # RFC5208, PKCS#8, version is v1(0)
- #
- # PrivateKeyInfo ::= SEQUENCE {
- # version Version,
- # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
- # privateKey PrivateKey,
- # attributes [0] IMPLICIT Attributes OPTIONAL
- # }
- #
- # RFC5915, Asymmetric Key Package, version is v2(1)
- #
- # OneAsymmetricKey ::= SEQUENCE {
- # version Version,
- # privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
- # privateKey PrivateKey,
- # attributes [0] Attributes OPTIONAL,
- # ...,
- # [[2: publicKey [1] PublicKey OPTIONAL ]],
- # ...
- # }
-
- if pk_info[0] == 0:
- if len(pk_info) not in (3, 4):
- raise ValueError("Not a valid PrivateKeyInfo SEQUENCE")
- elif pk_info[0] == 1:
- if len(pk_info) not in (3, 4, 5):
- raise ValueError("Not a valid PrivateKeyInfo SEQUENCE")
- else:
- raise ValueError("Not a valid PrivateKeyInfo SEQUENCE")
-
- algo = DerSequence().decode(pk_info[1], nr_elements=(1, 2))
- algo_oid = DerObjectId().decode(algo[0]).value
- if len(algo) == 1:
- algo_params = None
- else:
- try:
- DerNull().decode(algo[1])
- algo_params = None
- except:
- algo_params = algo[1]
-
- # PrivateKey ::= OCTET STRING
- private_key = DerOctetString().decode(pk_info[2]).payload
-
- # We ignore attributes and (for v2 only) publickey
-
- return (algo_oid, private_key, algo_params)
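
The wrap/unwrap pair above is symmetric, so a short round-trip illustrates it. The RSA key, passphrase, and protection choice below are example values — only the rsaEncryption OID 1.2.840.113549.1.1.1, which the docstring itself cites, and the wrap/unwrap signatures come from the code above:

# Hedged round-trip sketch for the PKCS#8 helpers defined above.
from Crypto.PublicKey import RSA
from Crypto.IO import PKCS8

rsa_key = RSA.generate(2048)
blob = PKCS8.wrap(rsa_key.export_key(format="DER"),  # PKCS#1 DER private key
                  "1.2.840.113549.1.1.1",            # rsaEncryption OID
                  passphrase=b"letmein")             # triggers PBES2 encryption
algo_oid, private_der, params = PKCS8.unwrap(blob, passphrase=b"letmein")
assert algo_oid == "1.2.840.113549.1.1.1" and params is None
restored = RSA.import_key(private_der)
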
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MicImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MicImagePlugin.py
deleted file mode 100644
index 801318930d515426a186a7524f25ef7c342dec7a..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MicImagePlugin.py
+++ /dev/null
@@ -1,103 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# Microsoft Image Composer support for PIL
-#
-# Notes:
-# uses TiffImagePlugin.py to read the actual image streams
-#
-# History:
-# 97-01-20 fl Created
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1997.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import olefile
-
-from . import Image, TiffImagePlugin
-
-#
-# --------------------------------------------------------------------
-
-
-def _accept(prefix):
- return prefix[:8] == olefile.MAGIC
-
-
-##
-# Image plugin for Microsoft's Image Composer file format.
-
-
-class MicImageFile(TiffImagePlugin.TiffImageFile):
- format = "MIC"
- format_description = "Microsoft Image Composer"
- _close_exclusive_fp_after_loading = False
-
- def _open(self):
- # read the OLE directory and see if this is a likely
- # to be a Microsoft Image Composer file
-
- try:
- self.ole = olefile.OleFileIO(self.fp)
- except OSError as e:
- msg = "not an MIC file; invalid OLE file"
- raise SyntaxError(msg) from e
-
- # find ACI subfiles with Image members (maybe not the
- # best way to identify MIC files, but what the... ;-)
-
- self.images = []
- for path in self.ole.listdir():
- if path[1:] and path[0][-4:] == ".ACI" and path[1] == "Image":
- self.images.append(path)
-
- # if we didn't find any images, this is probably not
- # an MIC file.
- if not self.images:
- msg = "not an MIC file; no image entries"
- raise SyntaxError(msg)
-
- self.frame = None
- self._n_frames = len(self.images)
- self.is_animated = self._n_frames > 1
-
- self.seek(0)
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
- try:
- filename = self.images[frame]
- except IndexError as e:
- msg = "no such frame"
- raise EOFError(msg) from e
-
- self.fp = self.ole.openstream(filename)
-
- TiffImagePlugin.TiffImageFile._open(self)
-
- self.frame = frame
-
- def tell(self):
- return self.frame
-
- def close(self):
- self.ole.close()
- super().close()
-
- def __exit__(self, *args):
- self.ole.close()
- super().__exit__()
-
-
-#
-# --------------------------------------------------------------------
-
-Image.register_open(MicImageFile.format, MicImageFile, _accept)
-
-Image.register_extension(MicImageFile.format, ".mic")
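
Because the plugin registers itself via Image.register_open and register_extension at the bottom, callers never touch MicImageFile directly; opening a .mic path through PIL is enough. A hedged sketch follows — the file name is a placeholder:

# Hypothetical consumer of the MIC plugin defined above; "composition.mic"
# is a placeholder path. Frames are walked with the standard seek()/EOFError
# protocol that the seek() override above implements.
from PIL import Image

with Image.open("composition.mic") as im:   # dispatched to MicImageFile via _accept
    print(im.format)                        # "MIC"
    frame = 0
    while True:
        im.convert("RGB").save(f"frame_{frame}.png")
        frame += 1
        try:
            im.seek(frame)
        except EOFError:
            break
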
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/display.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/display.py
deleted file mode 100644
index f2bb99bad25753b259c8f8f42f7fc7567af8a7ed..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/display.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import json
-import pkgutil
-import textwrap
-from typing import Callable, Dict, Optional, Tuple, Any, Union
-import uuid
-
-from ._vegafusion_data import compile_with_vegafusion, using_vegafusion
-from .plugin_registry import PluginRegistry, PluginEnabler
-from .mimebundle import spec_to_mimebundle
-from .schemapi import validate_jsonschema
-
-
-# ==============================================================================
-# Renderer registry
-# ==============================================================================
-# MimeBundleType needs to be the same as what are acceptable return values
-# for _repr_mimebundle_,
-# see https://ipython.readthedocs.io/en/stable/config/integrating.html#MyObject._repr_mimebundle_
-MimeBundleDataType = Dict[str, Any]
-MimeBundleMetaDataType = Dict[str, Any]
-MimeBundleType = Union[
- MimeBundleDataType, Tuple[MimeBundleDataType, MimeBundleMetaDataType]
-]
-RendererType = Callable[..., MimeBundleType]
-# Subtype of MimeBundleType as more specific in the values of the dictionaries
-DefaultRendererReturnType = Tuple[
- Dict[str, Union[str, dict]], Dict[str, Dict[str, Any]]
-]
-
-
-class RendererRegistry(PluginRegistry[RendererType]):
- entrypoint_err_messages = {
- "notebook": textwrap.dedent(
- """
- To use the 'notebook' renderer, you must install the vega package
- and the associated Jupyter extension.
- See https://altair-viz.github.io/getting_started/installation.html
- for more information.
- """
- ),
- "altair_viewer": textwrap.dedent(
- """
- To use the 'altair_viewer' renderer, you must install the altair_viewer
- package; see http://github.com/altair-viz/altair_viewer/
- for more information.
- """
- ),
- }
-
- def set_embed_options(
- self,
- defaultStyle: Optional[Union[bool, str]] = None,
- renderer: Optional[str] = None,
- width: Optional[int] = None,
- height: Optional[int] = None,
- padding: Optional[int] = None,
- scaleFactor: Optional[float] = None,
- actions: Optional[Union[bool, Dict[str, bool]]] = None,
- **kwargs,
- ) -> PluginEnabler:
- """Set options for embeddings of Vega & Vega-Lite charts.
-
- Options are fully documented at https://github.com/vega/vega-embed.
- Similar to the `enable()` method, this can be used as either
- a persistent global switch, or as a temporary local setting using
- a context manager (i.e. a `with` statement).
-
- Parameters
- ----------
- defaultStyle : bool or string
- Specify a default stylesheet for embed actions.
- renderer : string
- The renderer to use for the view. One of "canvas" (default) or "svg"
- width : integer
- The view width in pixels
- height : integer
- The view height in pixels
- padding : integer
- The view padding in pixels
- scaleFactor : number
- The number by which to multiply the width and height (default 1)
- of an exported PNG or SVG image.
- actions : bool or dict
- Determines if action links ("Export as PNG/SVG", "View Source",
- "View Vega" (only for Vega-Lite), "Open in Vega Editor") are
- included with the embedded view. If the value is true, all action
- links will be shown and none if the value is false. This property
- can take a key-value mapping object that maps keys (export, source,
- compiled, editor) to boolean values for determining if
- each action link should be shown.
- **kwargs :
- Additional options are passed directly to embed options.
- """
- options: Dict[str, Optional[Union[bool, str, float, Dict[str, bool]]]] = {
- "defaultStyle": defaultStyle,
- "renderer": renderer,
- "width": width,
- "height": height,
- "padding": padding,
- "scaleFactor": scaleFactor,
- "actions": actions,
- }
- kwargs.update({key: val for key, val in options.items() if val is not None})
- return self.enable(None, embed_options=kwargs)
-
-
-# ==============================================================================
-# VegaLite v1/v2 renderer logic
-# ==============================================================================
-
-
-class Displayable:
- """A base display class for VegaLite v1/v2.
-
- This class takes a VegaLite v1/v2 spec and does the following:
-
- 1. Optionally validates the spec against a schema.
- 2. Uses the RendererPlugin to grab a renderer and call it when the
- IPython/Jupyter display method (_repr_mimebundle_) is called.
-
- The spec passed to this class must be fully schema compliant and already
- have the data portion of the spec fully processed and ready to serialize.
- In practice, this means, the data portion of the spec should have been passed
- through appropriate data model transformers.
- """
-
- renderers: Optional[RendererRegistry] = None
- schema_path = ("altair", "")
-
- def __init__(self, spec: dict, validate: bool = False) -> None:
- self.spec = spec
- self.validate = validate
- self._validate()
-
- def _validate(self) -> None:
- """Validate the spec against the schema."""
- data = pkgutil.get_data(*self.schema_path)
- assert data is not None
- schema_dict: dict = json.loads(data.decode("utf-8"))
- validate_jsonschema(
- self.spec,
- schema_dict,
- )
-
- def _repr_mimebundle_(
- self, include: Any = None, exclude: Any = None
- ) -> MimeBundleType:
- """Return a MIME bundle for display in Jupyter frontends."""
- if self.renderers is not None:
- renderer_func = self.renderers.get()
- assert renderer_func is not None
- return renderer_func(self.spec)
- else:
- return {}
-
-
-def default_renderer_base(
- spec: dict, mime_type: str, str_repr: str, **options
-) -> DefaultRendererReturnType:
- """A default renderer for Vega or VegaLite that works for modern frontends.
-
- This renderer works with modern frontends (JupyterLab, nteract) that know
- how to render the custom VegaLite MIME type listed above.
- """
- # Local import to avoid circular ImportError
- from altair.vegalite.v5.display import VEGA_MIME_TYPE, VEGALITE_MIME_TYPE
-
- assert isinstance(spec, dict)
- bundle: Dict[str, Union[str, dict]] = {}
- metadata: Dict[str, Dict[str, Any]] = {}
-
- if using_vegafusion():
- spec = compile_with_vegafusion(spec)
-
- # Swap mimetype from Vega-Lite to Vega.
- # If mimetype was JSON, leave it alone
- if mime_type == VEGALITE_MIME_TYPE:
- mime_type = VEGA_MIME_TYPE
-
- bundle[mime_type] = spec
- bundle["text/plain"] = str_repr
- if options:
- metadata[mime_type] = options
- return bundle, metadata
-
-
-def json_renderer_base(
- spec: dict, str_repr: str, **options
-) -> DefaultRendererReturnType:
- """A renderer that returns a MIME type of application/json.
-
- In JupyterLab/nteract this is rendered as a nice JSON tree.
- """
- return default_renderer_base(
- spec, mime_type="application/json", str_repr=str_repr, **options
- )
-
-
-class HTMLRenderer:
- """Object to render charts as HTML, with a unique output div each time"""
-
- def __init__(self, output_div: str = "altair-viz-{}", **kwargs) -> None:
- self._output_div = output_div
- self.kwargs = kwargs
-
- @property
- def output_div(self) -> str:
- return self._output_div.format(uuid.uuid4().hex)
-
- def __call__(self, spec: dict, **metadata) -> Dict[str, str]:
- kwargs = self.kwargs.copy()
- kwargs.update(metadata)
- return spec_to_mimebundle(
- spec, format="html", output_div=self.output_div, **kwargs
- )
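
In user code, the set_embed_options method above is reached through the global renderer registry (alt.renderers). A small hedged example — only the option names (actions, renderer, scaleFactor) come from the method signature; the chart itself is a throwaway illustration:

# Illustrative use of RendererRegistry.set_embed_options shown above.
import altair as alt
import pandas as pd

alt.renderers.set_embed_options(actions=False, renderer="svg", scaleFactor=2)

df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 1, 6]})
chart = alt.Chart(df).mark_line().encode(x="x", y="y")
chart  # in a notebook cell, this renders with the embed options applied
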
diff --git a/spaces/jone/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/instruments_solo/violin/sr=44100,chn=2.sh b/spaces/jone/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/instruments_solo/violin/sr=44100,chn=2.sh
deleted file mode 100644
index a19ffa39548062d491ba43eebf8bbcba729da422..0000000000000000000000000000000000000000
--- a/spaces/jone/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/instruments_solo/violin/sr=44100,chn=2.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-INSTRUMENTS_SOLO_DATASET_DIR=${1:-"./datasets/instruments_solo"} # The first argument is dataset directory.
-WORKSPACE=${2:-"./workspaces/bytesep"} # The second argument is workspace directory.
-
-echo "INSTRUMENTS_SOLO_DATASET_DIR=${INSTRUMENTS_SOLO_DATASET_DIR}"
-echo "WORKSPACE=${WORKSPACE}"
-
-# Users can change the following settings.
-SAMPLE_RATE=44100
-CHANNELS=2
-
-INSTRUMENT="violin"
-
-# Paths
-SUB_DATASET_DIR="${INSTRUMENTS_SOLO_DATASET_DIR}/${INSTRUMENT}_solo/v0.1"
-
-HDF5S_DIR="${WORKSPACE}/hdf5s/instruments_solo/${INSTRUMENT}/sr=${SAMPLE_RATE}_chn=${CHANNELS}/train"
-
-python3 bytesep/dataset_creation/pack_audios_to_hdf5s/instruments_solo.py \
- --dataset_dir=$SUB_DATASET_DIR \
- --split="train" \
- --source_type=$INSTRUMENT \
- --hdf5s_dir=$HDF5S_DIR \
- --sample_rate=$SAMPLE_RATE \
- --channels=$CHANNELS
\ No newline at end of file
diff --git a/spaces/julien-c/sveltekit-demo/src/global.d.ts b/spaces/julien-c/sveltekit-demo/src/global.d.ts
deleted file mode 100644
index 63908c66cfd4acc4a5abbcc02180ca44c7e3787e..0000000000000000000000000000000000000000
--- a/spaces/julien-c/sveltekit-demo/src/global.d.ts
+++ /dev/null
@@ -1 +0,0 @@
-///
diff --git a/spaces/justest/gpt4free/g4f/Provider/Providers/Bard.py b/spaces/justest/gpt4free/g4f/Provider/Providers/Bard.py
deleted file mode 100644
index 4c37c4b719430031fce41ce49946f0e6ac93d155..0000000000000000000000000000000000000000
--- a/spaces/justest/gpt4free/g4f/Provider/Providers/Bard.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import os, requests, json, browser_cookie3, re, random
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://bard.google.com'
-model = ['Palm2']
-supports_stream = False
-needs_auth = True
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome(
- domain_name='.google.com')}['__Secure-1PSID']
-
- formatted = '\n'.join([
- '%s: %s' % (message['role'], message['content']) for message in messages
- ])
- prompt = f'{formatted}\nAssistant:'
-
- proxy = kwargs.get('proxy', False)
-    if not proxy:
-        print('Warning: no proxy was provided; Google Bard is blocked in many countries, so this may not work.')
-
- snlm0e = None
- conversation_id = None
- response_id = None
- choice_id = None
-
- client = requests.Session()
- client.proxies = {
- 'http': f'http://{proxy}',
- 'https': f'http://{proxy}'} if proxy else None
-
- client.headers = {
- 'authority': 'bard.google.com',
- 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8',
- 'origin': 'https://bard.google.com',
- 'referer': 'https://bard.google.com/',
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
- 'x-same-domain': '1',
- 'cookie': f'__Secure-1PSID={psid}'
- }
-
- snlm0e = re.search(r'SNlM0e\":\"(.*?)\"',
- client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e
-
- params = {
- 'bl': 'boq_assistant-bard-web-server_20230326.21_p0',
- '_reqid': random.randint(1111, 9999),
- 'rt': 'c'
- }
-
- data = {
- 'at': snlm0e,
- 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])}
-
- intents = '.'.join([
- 'assistant',
- 'lamda',
- 'BardFrontendService'
- ])
-
- response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate',
- data=data, params=params)
-
- chat_data = json.loads(response.content.splitlines()[3])[0][2]
- if chat_data:
- json_chat_data = json.loads(chat_data)
-
- yield json_chat_data[0][0]
-
- else:
- yield 'error'
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
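
The provider above yields text from Bard's private StreamGenerate endpoint. A hedged example of invoking it — the proxy address is a placeholder, and a Chrome profile logged in to Google is required because the __Secure-1PSID cookie is read via browser_cookie3:

# Illustrative call into _create_completion defined above. The message format
# follows the role/content dicts the function expects; the proxy address is a
# placeholder and the call will fail without a logged-in Chrome session.
messages = [{"role": "user", "content": "Give me one sentence about the history of tea."}]
for chunk in _create_completion(model="Palm2", messages=messages, stream=False,
                                proxy="127.0.0.1:8080"):
    print(chunk)
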
diff --git a/spaces/kainy/rvc_okiba_TTS/config.py b/spaces/kainy/rvc_okiba_TTS/config.py
deleted file mode 100644
index 4038dad0ac30ba03b6271499f4e37bbc745a2032..0000000000000000000000000000000000000000
--- a/spaces/kainy/rvc_okiba_TTS/config.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import argparse
-import sys
-import torch
-from multiprocessing import cpu_count
-
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.iscolab,
- self.noparallel,
- self.noautoopen,
- ) = self.arg_parse()
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- )
-
- # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("Found GPU", self.gpu_name)
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- elif self.has_mps():
- print("No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- else:
- print("No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
-            # Settings for ~6 GB of GPU memory
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
-            # Settings for ~5 GB of GPU memory
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
- if self.gpu_mem != None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
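
Config above resolves the device, precision, and inference window sizes once at start-up. A hedged sketch of how downstream code would typically consume it — attribute names come from the class, everything else is illustrative, and since __init__ parses sys.argv this is meant to run as a plain script without unrelated CLI flags:

# Hypothetical consumer of the Config class defined above. The print
# statements are illustrative only; run without extra command-line arguments
# because the constructor invokes argparse.
config = Config()
print("device:", config.device, "| fp16:", config.is_half, "| cpu cores:", config.n_cpu)
print("window sizes (x_pad, x_query, x_center, x_max):",
      config.x_pad, config.x_query, config.x_center, config.x_max)
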
diff --git a/spaces/kamayali/anything-v4.0/app.py b/spaces/kamayali/anything-v4.0/app.py
deleted file mode 100644
index 146d4144fcc64ad8a5b69e399e22ae65a0a85c4f..0000000000000000000000000000000000000000
--- a/spaces/kamayali/anything-v4.0/app.py
+++ /dev/null
@@ -1,137 +0,0 @@
-from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
-import gradio as gr
-import torch
-from PIL import Image
-
-model_id = 'andite/anything-v4.0'
-prefix = ''
-
-scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(
- model_id,
- torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
- scheduler=scheduler)
-
-if torch.cuda.is_available():
- pipe = pipe.to("cuda")
- pipe_i2i = pipe_i2i.to("cuda")
-
-def error_str(error, title="Error"):
- return f"""#### {title}
- {error}""" if error else ""
-
-def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False):
-
- generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None
- prompt = f"{prefix} {prompt}" if auto_prefix else prompt
-
- try:
- if img is not None:
- return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None
- else:
- return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None
- except Exception as e:
- return None, error_str(e)
-
-def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator):
-
- result = pipe(
- prompt,
- negative_prompt = neg_prompt,
- num_inference_steps = int(steps),
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator):
-
- ratio = min(height / img.height, width / img.width)
- img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS)
- result = pipe_i2i(
- prompt,
- negative_prompt = neg_prompt,
- init_image = img,
- num_inference_steps = int(steps),
- strength = strength,
- guidance_scale = guidance,
- width = width,
- height = height,
- generator = generator)
-
- return result.images[0]
-
-css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
-"""
-with gr.Blocks(css=css) as demo:
- gr.HTML(
- f"""
-        <div class="main-div">
-          <div><h1>Anything V4.0</h1></div>
-          <p>
-            Demo for Anything V4.0 Stable Diffusion model.
-            {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-          </p>
-          Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
-        </div>
- """)
-
-demo.queue(concurrency_count=1)
-demo.launch()
diff --git a/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_text.py b/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_text.py
deleted file mode 100644
index fea5ebfc05d466c7cb5711b5ac10e2ea102ddc45..0000000000000000000000000000000000000000
--- a/spaces/kcagle/AutoGPT/tests/unit/test_browse_scrape_text.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Generated by CodiumAI
-
-import requests
-
-from autogpt.commands.web_requests import scrape_text
-
-"""
-Code Analysis
-
-Objective:
-The objective of the "scrape_text" function is to scrape the text content from
-a given URL and return it as a string, after removing any unwanted HTML tags and scripts.
-
-Inputs:
-- url: a string representing the URL of the webpage to be scraped.
-
-Flow:
-1. Send a GET request to the given URL using the requests library and the user agent header from the config file.
-2. Check if the response contains an HTTP error. If it does, return an error message.
-3. Use BeautifulSoup to parse the HTML content of the response and extract all script and style tags.
-4. Get the text content of the remaining HTML using the get_text() method of BeautifulSoup.
-5. Split the text into lines and then into chunks, removing any extra whitespace.
-6. Join the chunks into a single string with newline characters between them.
-7. Return the cleaned text.
-
-Outputs:
-- A string representing the cleaned text content of the webpage.
-
-Additional aspects:
-- The function uses the requests library and BeautifulSoup to handle the HTTP request and HTML parsing, respectively.
-- The function removes script and style tags from the HTML to avoid including unwanted content in the text output.
-- The function uses a generator expression to split the text into lines and chunks, which can improve performance for large amounts of text.
-"""
-
-
-class TestScrapeText:
- # Tests that scrape_text() returns the expected text when given a valid URL.
- def test_scrape_text_with_valid_url(self, mocker):
- # Mock the requests.get() method to return a response with expected text
- expected_text = "This is some sample text"
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = f"
{expected_text}
"
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a valid URL and assert that it returns the expected text
- url = "http://www.example.com"
- assert scrape_text(url) == expected_text
-
- # Tests that the function returns an error message when an invalid or unreachable url is provided.
- def test_invalid_url(self, mocker):
- # Mock the requests.get() method to raise an exception
- mocker.patch(
- "requests.Session.get", side_effect=requests.exceptions.RequestException
- )
-
- # Call the function with an invalid URL and assert that it returns an error message
- url = "http://www.invalidurl.com"
- error_message = scrape_text(url)
- assert "Error:" in error_message
-
- # Tests that the function returns an empty string when the html page contains no text to be scraped.
- def test_no_text(self, mocker):
- # Mock the requests.get() method to return a response with no text
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = ""
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a valid URL and assert that it returns an empty string
- url = "http://www.example.com"
- assert scrape_text(url) == ""
-
- # Tests that the function returns an error message when the response status code is an http error (>=400).
- def test_http_error(self, mocker):
- # Mock the requests.get() method to return a response with a 404 status code
- mocker.patch("requests.Session.get", return_value=mocker.Mock(status_code=404))
-
- # Call the function with a URL
- result = scrape_text("https://www.example.com")
-
- # Check that the function returns an error message
- assert result == "Error: HTTP 404 error"
-
- # Tests that scrape_text() properly handles HTML tags.
- def test_scrape_text_with_html_tags(self, mocker):
- # Create a mock response object with HTML containing tags
- html = "
This is bold text.
"
- mock_response = mocker.Mock()
- mock_response.status_code = 200
- mock_response.text = html
- mocker.patch("requests.Session.get", return_value=mock_response)
-
- # Call the function with a URL
- result = scrape_text("https://www.example.com")
-
- # Check that the function properly handles HTML tags
- assert result == "This is bold text."
diff --git a/spaces/keras-io/adamatch-domain-adaption/README.md b/spaces/keras-io/adamatch-domain-adaption/README.md
deleted file mode 100644
index 5f1ebc1a33695f1d27f5192aedc768a7e2783e8a..0000000000000000000000000000000000000000
--- a/spaces/keras-io/adamatch-domain-adaption/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AdaMatch Semi-Supervised Domain Adaption
-emoji: ⚗️
-colorFrom: green
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.0.17
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/keras-io/structured-data-classification-grn-vsn/README.md b/spaces/keras-io/structured-data-classification-grn-vsn/README.md
deleted file mode 100644
index 935d23a6ad1d790a949e8228edba93e51d71436a..0000000000000000000000000000000000000000
--- a/spaces/keras-io/structured-data-classification-grn-vsn/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Structured Data Classification with GRN-VSN
-emoji: 🐨
-colorFrom: gray
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kernelguardian/llama2action/Dockerfile b/spaces/kernelguardian/llama2action/Dockerfile
deleted file mode 100644
index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000
--- a/spaces/kernelguardian/llama2action/Dockerfile
+++ /dev/null
@@ -1,2 +0,0 @@
-FROM huggingface/autotrain-advanced:latest
-CMD autotrain app --port 7860
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/mel_processing.py b/spaces/kevinwang676/ChatGLM2-SadTalker/mel_processing.py
deleted file mode 100644
index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/mel_processing.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + '_' + str(spec.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
- if torch.min(y) < -1.:
- print('min value is ', torch.min(y))
- if torch.max(y) > 1.:
- print('max value is ', torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + '_' + str(y.device)
- fmax_dtype_device = str(fmax) + '_' + dtype_device
- wnsize_dtype_device = str(win_size) + '_' + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
- center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
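
A hypothetical call into the two helpers above; the waveform and STFT parameters below are placeholders rather than values taken from this project's configs:

    import torch

    # one second of fake mono audio in [-1, 1], shape (batch, samples)
    y = torch.randn(1, 22050).clamp(-1.0, 1.0)

    spec = spectrogram_torch(y, n_fft=1024, sampling_rate=22050,
                             hop_size=256, win_size=1024, center=False)
    mel = mel_spectrogram_torch(y, n_fft=1024, num_mels=80, sampling_rate=22050,
                                hop_size=256, win_size=1024, fmin=0.0, fmax=None,
                                center=False)
    print(spec.shape, mel.shape)  # (1, 513, frames) and (1, 80, frames)
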
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/modules/discriminator.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/modules/discriminator.py
deleted file mode 100644
index d4459b07cb075c9f9d345f9b3dffc02cd859313b..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/modules/discriminator.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from torch import nn
-import torch.nn.functional as F
-from facerender.modules.util import kp2gaussian
-import torch
-
-
-class DownBlock2d(nn.Module):
- """
- Simple block for processing video (encoder).
- """
-
- def __init__(self, in_features, out_features, norm=False, kernel_size=4, pool=False, sn=False):
- super(DownBlock2d, self).__init__()
- self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size)
-
- if sn:
- self.conv = nn.utils.spectral_norm(self.conv)
-
- if norm:
- self.norm = nn.InstanceNorm2d(out_features, affine=True)
- else:
- self.norm = None
- self.pool = pool
-
- def forward(self, x):
- out = x
- out = self.conv(out)
- if self.norm:
- out = self.norm(out)
- out = F.leaky_relu(out, 0.2)
- if self.pool:
- out = F.avg_pool2d(out, (2, 2))
- return out
-
-
-class Discriminator(nn.Module):
- """
- Discriminator similar to Pix2Pix
- """
-
- def __init__(self, num_channels=3, block_expansion=64, num_blocks=4, max_features=512,
- sn=False, **kwargs):
- super(Discriminator, self).__init__()
-
- down_blocks = []
- for i in range(num_blocks):
- down_blocks.append(
- DownBlock2d(num_channels if i == 0 else min(max_features, block_expansion * (2 ** i)),
- min(max_features, block_expansion * (2 ** (i + 1))),
- norm=(i != 0), kernel_size=4, pool=(i != num_blocks - 1), sn=sn))
-
- self.down_blocks = nn.ModuleList(down_blocks)
- self.conv = nn.Conv2d(self.down_blocks[-1].conv.out_channels, out_channels=1, kernel_size=1)
- if sn:
- self.conv = nn.utils.spectral_norm(self.conv)
-
- def forward(self, x):
- feature_maps = []
- out = x
-
- for down_block in self.down_blocks:
- feature_maps.append(down_block(out))
- out = feature_maps[-1]
- prediction_map = self.conv(out)
-
- return feature_maps, prediction_map
-
-
-class MultiScaleDiscriminator(nn.Module):
- """
- Multi-scale (scale) discriminator
- """
-
- def __init__(self, scales=(), **kwargs):
- super(MultiScaleDiscriminator, self).__init__()
- self.scales = scales
- discs = {}
- for scale in scales:
- discs[str(scale).replace('.', '-')] = Discriminator(**kwargs)
- self.discs = nn.ModuleDict(discs)
-
- def forward(self, x):
- out_dict = {}
- for scale, disc in self.discs.items():
- scale = str(scale).replace('-', '.')
- key = 'prediction_' + scale
- feature_maps, prediction_map = disc(x[key])
- out_dict['feature_maps_' + scale] = feature_maps
- out_dict['prediction_map_' + scale] = prediction_map
- return out_dict
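
A hypothetical instantiation of the classes above; the scale, channel, and resolution values are placeholders:

    import torch

    disc = MultiScaleDiscriminator(scales=(1,), num_channels=3, block_expansion=32,
                                   num_blocks=4, max_features=512, sn=True)
    # the forward pass expects a dict keyed by 'prediction_<scale>'
    x = {"prediction_1": torch.randn(2, 3, 256, 256)}
    out = disc(x)
    print(out["prediction_map_1"].shape)  # patch-wise real/fake logits
    print(len(out["feature_maps_1"]))     # one feature map per DownBlock2d
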
diff --git a/spaces/kevinwang676/SadTalker/src/facerender/animate.py b/spaces/kevinwang676/SadTalker/src/facerender/animate.py
deleted file mode 100644
index 781f5a3318a086049cc6b74393073ddda7001d5e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/SadTalker/src/facerender/animate.py
+++ /dev/null
@@ -1,257 +0,0 @@
-import os
-import cv2
-import yaml
-import numpy as np
-import warnings
-from skimage import img_as_ubyte
-import safetensors
-import safetensors.torch
-warnings.filterwarnings('ignore')
-
-
-import imageio
-import torch
-import torchvision
-
-
-from src.facerender.modules.keypoint_detector import HEEstimator, KPDetector
-from src.facerender.modules.mapping import MappingNet
-from src.facerender.modules.generator import OcclusionAwareGenerator, OcclusionAwareSPADEGenerator
-from src.facerender.modules.make_animation import make_animation
-
-from pydub import AudioSegment
-from src.utils.face_enhancer import enhancer_generator_with_len, enhancer_list
-from src.utils.paste_pic import paste_pic
-from src.utils.videoio import save_video_with_watermark
-
-try:
- import webui # in webui
- in_webui = True
-except:
- in_webui = False
-
-class AnimateFromCoeff():
-
- def __init__(self, sadtalker_path, device):
-
- with open(sadtalker_path['facerender_yaml']) as f:
- config = yaml.safe_load(f)
-
- generator = OcclusionAwareSPADEGenerator(**config['model_params']['generator_params'],
- **config['model_params']['common_params'])
- kp_extractor = KPDetector(**config['model_params']['kp_detector_params'],
- **config['model_params']['common_params'])
- he_estimator = HEEstimator(**config['model_params']['he_estimator_params'],
- **config['model_params']['common_params'])
- mapping = MappingNet(**config['model_params']['mapping_params'])
-
- generator.to(device)
- kp_extractor.to(device)
- he_estimator.to(device)
- mapping.to(device)
- for param in generator.parameters():
- param.requires_grad = False
- for param in kp_extractor.parameters():
- param.requires_grad = False
- for param in he_estimator.parameters():
- param.requires_grad = False
- for param in mapping.parameters():
- param.requires_grad = False
-
- if sadtalker_path is not None:
- if 'checkpoint' in sadtalker_path: # use safe tensor
- self.load_cpk_facevid2vid_safetensor(sadtalker_path['checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=None)
- else:
- self.load_cpk_facevid2vid(sadtalker_path['free_view_checkpoint'], kp_detector=kp_extractor, generator=generator, he_estimator=he_estimator)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- if sadtalker_path['mappingnet_checkpoint'] is not None:
- self.load_cpk_mapping(sadtalker_path['mappingnet_checkpoint'], mapping=mapping)
- else:
- raise AttributeError("Checkpoint should be specified for video head pose estimator.")
-
- self.kp_extractor = kp_extractor
- self.generator = generator
- self.he_estimator = he_estimator
- self.mapping = mapping
-
- self.kp_extractor.eval()
- self.generator.eval()
- self.he_estimator.eval()
- self.mapping.eval()
-
- self.device = device
-
- def load_cpk_facevid2vid_safetensor(self, checkpoint_path, generator=None,
- kp_detector=None, he_estimator=None,
- device="cpu"):
-
- checkpoint = safetensors.torch.load_file(checkpoint_path)
-
- if generator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'generator' in k:
- x_generator[k.replace('generator.', '')] = v
- generator.load_state_dict(x_generator)
- if kp_detector is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'kp_extractor' in k:
- x_generator[k.replace('kp_extractor.', '')] = v
- kp_detector.load_state_dict(x_generator)
- if he_estimator is not None:
- x_generator = {}
- for k,v in checkpoint.items():
- if 'he_estimator' in k:
- x_generator[k.replace('he_estimator.', '')] = v
- he_estimator.load_state_dict(x_generator)
-
- return None
-
- def load_cpk_facevid2vid(self, checkpoint_path, generator=None, discriminator=None,
- kp_detector=None, he_estimator=None, optimizer_generator=None,
- optimizer_discriminator=None, optimizer_kp_detector=None,
- optimizer_he_estimator=None, device="cpu"):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if generator is not None:
- generator.load_state_dict(checkpoint['generator'])
- if kp_detector is not None:
- kp_detector.load_state_dict(checkpoint['kp_detector'])
- if he_estimator is not None:
- he_estimator.load_state_dict(checkpoint['he_estimator'])
- if discriminator is not None:
- try:
- discriminator.load_state_dict(checkpoint['discriminator'])
- except:
-                print ('No discriminator in the state-dict. Discriminator will be randomly initialized')
- if optimizer_generator is not None:
- optimizer_generator.load_state_dict(checkpoint['optimizer_generator'])
- if optimizer_discriminator is not None:
- try:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
- except RuntimeError as e:
-                print ('No discriminator optimizer in the state-dict. Optimizer will not be initialized')
- if optimizer_kp_detector is not None:
- optimizer_kp_detector.load_state_dict(checkpoint['optimizer_kp_detector'])
- if optimizer_he_estimator is not None:
- optimizer_he_estimator.load_state_dict(checkpoint['optimizer_he_estimator'])
-
- return checkpoint['epoch']
-
- def load_cpk_mapping(self, checkpoint_path, mapping=None, discriminator=None,
- optimizer_mapping=None, optimizer_discriminator=None, device='cpu'):
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
- if mapping is not None:
- mapping.load_state_dict(checkpoint['mapping'])
- if discriminator is not None:
- discriminator.load_state_dict(checkpoint['discriminator'])
- if optimizer_mapping is not None:
- optimizer_mapping.load_state_dict(checkpoint['optimizer_mapping'])
- if optimizer_discriminator is not None:
- optimizer_discriminator.load_state_dict(checkpoint['optimizer_discriminator'])
-
- return checkpoint['epoch']
-
- def generate(self, x, video_save_dir, pic_path, crop_info, enhancer=None, background_enhancer=None, preprocess='crop', img_size=256):
-
- source_image=x['source_image'].type(torch.FloatTensor)
- source_semantics=x['source_semantics'].type(torch.FloatTensor)
- target_semantics=x['target_semantics_list'].type(torch.FloatTensor)
- source_image=source_image.to(self.device)
- source_semantics=source_semantics.to(self.device)
- target_semantics=target_semantics.to(self.device)
- if 'yaw_c_seq' in x:
- yaw_c_seq = x['yaw_c_seq'].type(torch.FloatTensor)
- yaw_c_seq = x['yaw_c_seq'].to(self.device)
- else:
- yaw_c_seq = None
- if 'pitch_c_seq' in x:
- pitch_c_seq = x['pitch_c_seq'].type(torch.FloatTensor)
- pitch_c_seq = x['pitch_c_seq'].to(self.device)
- else:
- pitch_c_seq = None
- if 'roll_c_seq' in x:
- roll_c_seq = x['roll_c_seq'].type(torch.FloatTensor)
- roll_c_seq = x['roll_c_seq'].to(self.device)
- else:
- roll_c_seq = None
-
- frame_num = x['frame_num']
-
- predictions_video = make_animation(source_image, source_semantics, target_semantics,
- self.generator, self.kp_extractor, self.he_estimator, self.mapping,
- yaw_c_seq, pitch_c_seq, roll_c_seq, use_exp = True)
-
- predictions_video = predictions_video.reshape((-1,)+predictions_video.shape[2:])
- predictions_video = predictions_video[:frame_num]
-
- video = []
- for idx in range(predictions_video.shape[0]):
- image = predictions_video[idx]
- image = np.transpose(image.data.cpu().numpy(), [1, 2, 0]).astype(np.float32)
- video.append(image)
- result = img_as_ubyte(video)
-
-        ### the generated video is 256x256, so resize it back while keeping the aspect ratio
- original_size = crop_info[0]
- if original_size:
- result = [ cv2.resize(result_i,(img_size, int(img_size * original_size[1]/original_size[0]) )) for result_i in result ]
-
- video_name = x['video_name'] + '.mp4'
- path = os.path.join(video_save_dir, 'temp_'+video_name)
-
- imageio.mimsave(path, result, fps=float(25))
-
- av_path = os.path.join(video_save_dir, video_name)
- return_path = av_path
-
- audio_path = x['audio_path']
- audio_name = os.path.splitext(os.path.split(audio_path)[-1])[0]
- new_audio_path = os.path.join(video_save_dir, audio_name+'.wav')
- start_time = 0
- # cog will not keep the .mp3 filename
- sound = AudioSegment.from_file(audio_path)
- frames = frame_num
- end_time = start_time + frames*1/25*1000
- word1=sound.set_frame_rate(16000)
- word = word1[start_time:end_time]
- word.export(new_audio_path, format="wav")
-
- save_video_with_watermark(path, new_audio_path, av_path, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name}')
-
- if 'full' in preprocess.lower():
- # only add watermark to the full image.
- video_name_full = x['video_name'] + '_full.mp4'
- full_video_path = os.path.join(video_save_dir, video_name_full)
- return_path = full_video_path
- paste_pic(path, pic_path, crop_info, new_audio_path, full_video_path, extended_crop= True if 'ext' in preprocess.lower() else False)
- print(f'The generated video is named {video_save_dir}/{video_name_full}')
- else:
- full_video_path = av_path
-
-        #### paste back, then run the enhancers
- if enhancer:
- video_name_enhancer = x['video_name'] + '_enhanced.mp4'
- enhanced_path = os.path.join(video_save_dir, 'temp_'+video_name_enhancer)
- av_path_enhancer = os.path.join(video_save_dir, video_name_enhancer)
- return_path = av_path_enhancer
-
- try:
- enhanced_images_gen_with_len = enhancer_generator_with_len(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
- except:
- enhanced_images_gen_with_len = enhancer_list(full_video_path, method=enhancer, bg_upsampler=background_enhancer)
- imageio.mimsave(enhanced_path, enhanced_images_gen_with_len, fps=float(25))
-
- save_video_with_watermark(enhanced_path, new_audio_path, av_path_enhancer, watermark= False)
- print(f'The generated video is named {video_save_dir}/{video_name_enhancer}')
- os.remove(enhanced_path)
-
- os.remove(path)
- os.remove(new_audio_path)
-
- return return_path
-
diff --git a/spaces/kevinwang676/VITS2-Mandarin/text/__init__.py b/spaces/kevinwang676/VITS2-Mandarin/text/__init__.py
deleted file mode 100644
index 48ae82f3e40ecd1bf17a7de78d87790327af3362..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VITS2-Mandarin/text/__init__.py
+++ /dev/null
@@ -1,56 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence
-
-
-def cleaned_text_to_sequence(cleaned_text):
-  '''Converts a string of already-cleaned text to a sequence of IDs corresponding to the symbols in the text.
-    Args:
-      cleaned_text: string of cleaned text to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
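
A hypothetical round trip through the helpers above; the cleaner name is a placeholder and must match a function defined in this project's text.cleaners module:

    ids = text_to_sequence("hello world", ["english_cleaners"])
    print(ids)                    # symbol IDs, with unknown symbols skipped
    print(sequence_to_text(ids))  # back to the cleaned string
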
diff --git a/spaces/kira4424/VITS-fast-fine-tuning/finetune_speaker_v2.py b/spaces/kira4424/VITS-fast-fine-tuning/finetune_speaker_v2.py
deleted file mode 100644
index 85fa044c2fa8e05da688cf937963fc9f592f9f6c..0000000000000000000000000000000000000000
--- a/spaces/kira4424/VITS-fast-fine-tuning/finetune_speaker_v2.py
+++ /dev/null
@@ -1,321 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-
-import librosa
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-
-
-torch.backends.cudnn.benchmark = True
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '8000'
-
- hps = utils.get_hparams()
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- symbols = hps['symbols']
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- # Use gloo backend on Windows for Pytorch
- dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data, symbols)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32,300,400,500,600,700,800,900,1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- # train_loader = DataLoader(train_dataset, batch_size=hps.train.batch_size, num_workers=2, shuffle=False, pin_memory=True,
- # collate_fn=collate_fn)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data, symbols)
- eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False,
- batch_size=hps.train.batch_size, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
-
- # load existing model
- _, _, _, _ = utils.load_checkpoint("./pretrained_models/G_0.pth", net_g, None, drop_speaker_emb=hps.drop_speaker_embed)
- _, _, _, _ = utils.load_checkpoint("./pretrained_models/D_0.pth", net_d, None)
- epoch_str = 1
- global_step = 0
-    # make all generator and discriminator parameters trainable (speaker-embedding-only freezing is left commented out below)
- for p in net_g.parameters():
- p.requires_grad = True
- for p in net_d.parameters():
- p.requires_grad = True
- # for p in net_d.parameters():
- # p.requires_grad = False
- # net_g.emb_g.weight.requires_grad = True
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- # optim_d = None
- net_g = DDP(net_g, device_ids=[rank])
- net_d = DDP(net_d, device_ids=[rank])
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay)
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank==0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d = nets
- optim_g, optim_d = optims
- scheduler_g, scheduler_d = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- # train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(tqdm(train_loader)):
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask,\
- (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank==0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_g": grad_norm_g}
- scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
-
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_g, None, hps.train.learning_rate, epoch,
- os.path.join(hps.model_dir, "G_latest.pth".format(global_step)))
- # utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-4000))
- # old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-400))
- if os.path.exists(old_g):
- os.remove(old_g)
- # if os.path.exists(old_d):
- # os.remove(old_d)
- global_step += 1
- if epoch > hps.max_epochs:
- print("Maximum epoch reached, closing training...")
- exit()
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader):
- x, x_lengths = x.cuda(0), x_lengths.cuda(0)
- spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
- y, y_lengths = y.cuda(0), y_lengths.cuda(0)
- speakers = speakers.cuda(0)
-
- # remove else
- x = x[:1]
- x_lengths = x_lengths[:1]
- spec = spec[:1]
- spec_lengths = spec_lengths[:1]
- y = y[:1]
- y_lengths = y_lengths[:1]
- speakers = speakers[:1]
- break
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000)
- y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict = {
- "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- }
- audio_dict = {
- "gen/audio": y_hat[0,:,:y_hat_lengths[0]]
- }
- if global_step == 0:
- image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/criss/unsupervised_mt/eval.sh b/spaces/koajoel/PolyFormer/fairseq/examples/criss/unsupervised_mt/eval.sh
deleted file mode 100644
index 03b773ed5a522eb82186fea8ffbb6c557e14b6d3..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/criss/unsupervised_mt/eval.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-SRC=si_LK
-TGT=en_XX
-MODEL=criss_checkpoints/criss.3rd.pt
-
-MULTIBLEU=mosesdecoder/scripts/generic/multi-bleu.perl
-MOSES=mosesdecoder
-REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl
-NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl
-REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl
-TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl
-GEN_TMP_DIR=gen_tmp
-LANG_DICT=criss_checkpoints/lang_dict.txt
-
-if [ ! -d "mosesdecoder" ]; then
- git clone https://github.com/moses-smt/mosesdecoder
-fi
-mkdir -p $GEN_TMP_DIR
-fairseq-generate data_tmp/${SRC}-${TGT}-flores \
- --task translation_multi_simple_epoch \
- --max-tokens 2000 \
- --path ${MODEL} \
- --skip-invalid-size-inputs-valid-test \
- --beam 5 --lenpen 1.0 --gen-subset test \
- --remove-bpe=sentencepiece \
- --source-lang ${SRC} --target-lang ${TGT} \
- --decoder-langtok --lang-pairs 'en_XX-ar_AR,en_XX-de_DE,en_XX-es_XX,en_XX-fr_XX,en_XX-hi_IN,en_XX-it_IT,en_XX-ja_XX,en_XX-ko_KR,en_XX-nl_XX,en_XX-ru_RU,en_XX-zh_CN,en_XX-tr_TR,en_XX-vi_VN,en_XX-ro_RO,en_XX-my_MM,en_XX-ne_NP,en_XX-si_LK,en_XX-cs_CZ,en_XX-lt_LT,en_XX-kk_KZ,en_XX-gu_IN,en_XX-fi_FI,en_XX-et_EE,en_XX-lv_LV,ar_AR-en_XX,cs_CZ-en_XX,de_DE-en_XX,es_XX-en_XX,et_EE-en_XX,fi_FI-en_XX,fr_XX-en_XX,gu_IN-en_XX,hi_IN-en_XX,it_IT-en_XX,ja_XX-en_XX,kk_KZ-en_XX,ko_KR-en_XX,lt_LT-en_XX,lv_LV-en_XX,my_MM-en_XX,ne_NP-en_XX,nl_XX-en_XX,ro_RO-en_XX,ru_RU-en_XX,si_LK-en_XX,tr_TR-en_XX,vi_VN-en_XX,zh_CN-en_XX,ar_AR-es_XX,es_XX-ar_AR,ar_AR-hi_IN,hi_IN-ar_AR,ar_AR-zh_CN,zh_CN-ar_AR,cs_CZ-es_XX,es_XX-cs_CZ,cs_CZ-hi_IN,hi_IN-cs_CZ,cs_CZ-zh_CN,zh_CN-cs_CZ,de_DE-es_XX,es_XX-de_DE,de_DE-hi_IN,hi_IN-de_DE,de_DE-zh_CN,zh_CN-de_DE,es_XX-hi_IN,hi_IN-es_XX,es_XX-zh_CN,zh_CN-es_XX,et_EE-es_XX,es_XX-et_EE,et_EE-hi_IN,hi_IN-et_EE,et_EE-zh_CN,zh_CN-et_EE,fi_FI-es_XX,es_XX-fi_FI,fi_FI-hi_IN,hi_IN-fi_FI,fi_FI-zh_CN,zh_CN-fi_FI,fr_XX-es_XX,es_XX-fr_XX,fr_XX-hi_IN,hi_IN-fr_XX,fr_XX-zh_CN,zh_CN-fr_XX,gu_IN-es_XX,es_XX-gu_IN,gu_IN-hi_IN,hi_IN-gu_IN,gu_IN-zh_CN,zh_CN-gu_IN,hi_IN-zh_CN,zh_CN-hi_IN,it_IT-es_XX,es_XX-it_IT,it_IT-hi_IN,hi_IN-it_IT,it_IT-zh_CN,zh_CN-it_IT,ja_XX-es_XX,es_XX-ja_XX,ja_XX-hi_IN,hi_IN-ja_XX,ja_XX-zh_CN,zh_CN-ja_XX,kk_KZ-es_XX,es_XX-kk_KZ,kk_KZ-hi_IN,hi_IN-kk_KZ,kk_KZ-zh_CN,zh_CN-kk_KZ,ko_KR-es_XX,es_XX-ko_KR,ko_KR-hi_IN,hi_IN-ko_KR,ko_KR-zh_CN,zh_CN-ko_KR,lt_LT-es_XX,es_XX-lt_LT,lt_LT-hi_IN,hi_IN-lt_LT,lt_LT-zh_CN,zh_CN-lt_LT,lv_LV-es_XX,es_XX-lv_LV,lv_LV-hi_IN,hi_IN-lv_LV,lv_LV-zh_CN,zh_CN-lv_LV,my_MM-es_XX,es_XX-my_MM,my_MM-hi_IN,hi_IN-my_MM,my_MM-zh_CN,zh_CN-my_MM,ne_NP-es_XX,es_XX-ne_NP,ne_NP-hi_IN,hi_IN-ne_NP,ne_NP-zh_CN,zh_CN-ne_NP,nl_XX-es_XX,es_XX-nl_XX,nl_XX-hi_IN,hi_IN-nl_XX,nl_XX-zh_CN,zh_CN-nl_XX,ro_RO-es_XX,es_XX-ro_RO,ro_RO-hi_IN,hi_IN-ro_RO,ro_RO-zh_CN,zh_CN-ro_RO,ru_RU-es_XX,es_XX-ru_RU,ru_RU-hi_IN,hi_IN-ru_RU,ru_RU-zh_CN,zh_CN-ru_RU,si_LK-es_XX,es_XX-si_LK,si_LK-hi_IN,hi_IN-si_LK,si_LK-zh_CN,zh_CN-si_LK,tr_TR-es_XX,es_XX-tr_TR,tr_TR-hi_IN,hi_IN-tr_TR,tr_TR-zh_CN,zh_CN-tr_TR,vi_VN-es_XX,es_XX-vi_VN,vi_VN-hi_IN,hi_IN-vi_VN,vi_VN-zh_CN,zh_CN-vi_VN' \
- --lang-dict ${LANG_DICT} --lang-tok-style 'mbart' --sampling-method 'temperature' --sampling-temperature '1.0' > $GEN_TMP_DIR/${SRC}_${TGT}.gen
-cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^T-" | cut -f2 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.hyp
-cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^H-" | cut -f3 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.ref
-${MULTIBLEU} $GEN_TMP_DIR/${SRC}_${TGT}.ref < $GEN_TMP_DIR/${SRC}_${TGT}.hyp
diff --git a/spaces/kwangjong/food-classifier-MobileNetV3/app.py b/spaces/kwangjong/food-classifier-MobileNetV3/app.py
deleted file mode 100644
index 4b3da9a795a184ada0daa1d08ca3e260c0379cba..0000000000000000000000000000000000000000
--- a/spaces/kwangjong/food-classifier-MobileNetV3/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import gradio as gr
-import numpy as np
-from PIL import Image
-import tensorflow as tf
-import logging
-
-
-#load label
-labels = open("labels.txt", "r")
-labels = labels.read().splitlines()
-
-# load model
-model = tf.keras.models.load_model('mobilenet_v3_large_final.h5')
-
-def predict(img):
-
- img = np.expand_dims(img, axis=0)/255
- pred = model.predict(img)
- return {labels[i]: float(pred[0][i]) for i in range(len(labels))}
-
-title = "Shazam for Food"
-description = "A food classifier trained on MobileNetV3Large."
-article="
"
-examples = ['img/waffle.jpg', "img/lasagna.jpg", "img/taco.jpg", "img/bibimbap.jpg", "img/pad-thai.jpg"]
-interpretation='default'
-enable_queue=True
-
-gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(224,224)),outputs=gr.outputs.Label(num_top_classes=5),title=title,description=description,article=article,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch()
\ No newline at end of file
diff --git a/spaces/kxqt/Expedit-SAM/segment_anything/utils/onnx.py b/spaces/kxqt/Expedit-SAM/segment_anything/utils/onnx.py
deleted file mode 100644
index 4297b31291e036700d6ad0b818afb7dd72da3054..0000000000000000000000000000000000000000
--- a/spaces/kxqt/Expedit-SAM/segment_anything/utils/onnx.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from typing import Tuple
-
-from ..modeling import Sam
-from .amg import calculate_stability_score
-
-
-class SamOnnxModel(nn.Module):
- """
- This model should not be called directly, but is used in ONNX export.
- It combines the prompt encoder, mask decoder, and mask postprocessing of Sam,
- with some functions modified to enable model tracing. Also supports extra
-    options controlling what information is returned. See the ONNX export script for details.
- """
-
- def __init__(
- self,
- model: Sam,
- return_single_mask: bool,
- use_stability_score: bool = False,
- return_extra_metrics: bool = False,
- ) -> None:
- super().__init__()
- self.mask_decoder = model.mask_decoder
- self.model = model
- self.img_size = model.image_encoder.img_size
- self.return_single_mask = return_single_mask
- self.use_stability_score = use_stability_score
- self.stability_score_offset = 1.0
- self.return_extra_metrics = return_extra_metrics
-
- @staticmethod
- def resize_longest_image_size(
- input_image_size: torch.Tensor, longest_side: int
- ) -> torch.Tensor:
- input_image_size = input_image_size.to(torch.float32)
- scale = longest_side / torch.max(input_image_size)
- transformed_size = scale * input_image_size
- transformed_size = torch.floor(transformed_size + 0.5).to(torch.int64)
- return transformed_size
-
- def _embed_points(self, point_coords: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor:
- point_coords = point_coords + 0.5
- point_coords = point_coords / self.img_size
- point_embedding = self.model.prompt_encoder.pe_layer._pe_encoding(point_coords)
- point_labels = point_labels.unsqueeze(-1).expand_as(point_embedding)
-
- point_embedding = point_embedding * (point_labels != -1)
- point_embedding = point_embedding + self.model.prompt_encoder.not_a_point_embed.weight * (
- point_labels == -1
- )
-
- for i in range(self.model.prompt_encoder.num_point_embeddings):
- point_embedding = point_embedding + self.model.prompt_encoder.point_embeddings[
- i
- ].weight * (point_labels == i)
-
- return point_embedding
-
- def _embed_masks(self, input_mask: torch.Tensor, has_mask_input: torch.Tensor) -> torch.Tensor:
- mask_embedding = has_mask_input * self.model.prompt_encoder.mask_downscaling(input_mask)
- mask_embedding = mask_embedding + (
- 1 - has_mask_input
- ) * self.model.prompt_encoder.no_mask_embed.weight.reshape(1, -1, 1, 1)
- return mask_embedding
-
- def mask_postprocessing(self, masks: torch.Tensor, orig_im_size: torch.Tensor) -> torch.Tensor:
- masks = F.interpolate(
- masks,
- size=(self.img_size, self.img_size),
- mode="bilinear",
- align_corners=False,
- )
-
- prepadded_size = self.resize_longest_image_size(orig_im_size, self.img_size)
- masks = masks[..., : int(prepadded_size[0]), : int(prepadded_size[1])]
-
- orig_im_size = orig_im_size.to(torch.int64)
- h, w = orig_im_size[0], orig_im_size[1]
- masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False)
- return masks
-
- def select_masks(
- self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- # Determine if we should return the multiclick mask or not from the number of points.
- # The reweighting is used to avoid control flow.
- score_reweight = torch.tensor(
- [[1000] + [0] * (self.model.mask_decoder.num_mask_tokens - 1)]
- ).to(iou_preds.device)
- score = iou_preds + (num_points - 2.5) * score_reweight
- best_idx = torch.argmax(score, dim=1)
- masks = masks[torch.arange(masks.shape[0]), best_idx, :, :].unsqueeze(1)
- iou_preds = iou_preds[torch.arange(masks.shape[0]), best_idx].unsqueeze(1)
-
- return masks, iou_preds
-
- @torch.no_grad()
- def forward(
- self,
- image_embeddings: torch.Tensor,
- point_coords: torch.Tensor,
- point_labels: torch.Tensor,
- mask_input: torch.Tensor,
- has_mask_input: torch.Tensor,
- orig_im_size: torch.Tensor,
- ):
- sparse_embedding = self._embed_points(point_coords, point_labels)
- dense_embedding = self._embed_masks(mask_input, has_mask_input)
-
- masks, scores = self.model.mask_decoder.predict_masks(
- image_embeddings=image_embeddings,
- image_pe=self.model.prompt_encoder.get_dense_pe(),
- sparse_prompt_embeddings=sparse_embedding,
- dense_prompt_embeddings=dense_embedding,
- )
-
- if self.use_stability_score:
- scores = calculate_stability_score(
- masks, self.model.mask_threshold, self.stability_score_offset
- )
-
- if self.return_single_mask:
- masks, scores = self.select_masks(masks, scores, point_coords.shape[1])
-
- upscaled_masks = self.mask_postprocessing(masks, orig_im_size)
-
- if self.return_extra_metrics:
- stability_scores = calculate_stability_score(
- upscaled_masks, self.model.mask_threshold, self.stability_score_offset
- )
- areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1)
- return upscaled_masks, scores, stability_scores, areas, masks
-
- return upscaled_masks, scores, masks
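
To illustrate how the wrapper above is typically used, here is a sketch of an ONNX export call. The checkpoint filename, model size, and dummy tensor shapes are placeholders (the 1x256x64x64 embedding matches SAM's image-encoder output for a vit_b model), so treat this as an assumption-laden example rather than the project's own export script:

    import torch
    from segment_anything import sam_model_registry

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    onnx_model = SamOnnxModel(sam, return_single_mask=True)

    dummy_inputs = {
        "image_embeddings": torch.randn(1, 256, 64, 64),
        "point_coords": torch.randint(0, 1024, (1, 2, 2), dtype=torch.float),
        "point_labels": torch.ones(1, 2, dtype=torch.float),
        "mask_input": torch.randn(1, 1, 256, 256),
        "has_mask_input": torch.tensor([1.0]),
        "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float),
    }
    torch.onnx.export(
        onnx_model,
        tuple(dummy_inputs.values()),
        "sam_decoder.onnx",
        input_names=list(dummy_inputs.keys()),
        output_names=["masks", "iou_predictions", "low_res_masks"],
        opset_version=17,
    )
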
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_async/http_proxy.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_async/http_proxy.py
deleted file mode 100644
index 3dd1cb4fe34f1d21f08e9c199108f0dc219f124b..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_async/http_proxy.py
+++ /dev/null
@@ -1,350 +0,0 @@
-import logging
-import ssl
-from base64 import b64encode
-from typing import Iterable, List, Mapping, Optional, Sequence, Tuple, Union
-
-from .._exceptions import ProxyError
-from .._models import (
- URL,
- Origin,
- Request,
- Response,
- enforce_bytes,
- enforce_headers,
- enforce_url,
-)
-from .._ssl import default_ssl_context
-from .._synchronization import AsyncLock
-from .._trace import Trace
-from ..backends.base import SOCKET_OPTION, AsyncNetworkBackend
-from .connection import AsyncHTTPConnection
-from .connection_pool import AsyncConnectionPool
-from .http11 import AsyncHTTP11Connection
-from .interfaces import AsyncConnectionInterface
-
-HeadersAsSequence = Sequence[Tuple[Union[bytes, str], Union[bytes, str]]]
-HeadersAsMapping = Mapping[Union[bytes, str], Union[bytes, str]]
-
-
-logger = logging.getLogger("httpcore.proxy")
-
-
-def merge_headers(
- default_headers: Optional[Sequence[Tuple[bytes, bytes]]] = None,
- override_headers: Optional[Sequence[Tuple[bytes, bytes]]] = None,
-) -> List[Tuple[bytes, bytes]]:
- """
- Append default_headers and override_headers, de-duplicating if a key exists
- in both cases.
- """
- default_headers = [] if default_headers is None else list(default_headers)
- override_headers = [] if override_headers is None else list(override_headers)
- has_override = set(key.lower() for key, value in override_headers)
- default_headers = [
- (key, value)
- for key, value in default_headers
- if key.lower() not in has_override
- ]
- return default_headers + override_headers
-
-
-def build_auth_header(username: bytes, password: bytes) -> bytes:
- userpass = username + b":" + password
- return b"Basic " + b64encode(userpass)
-
-
-class AsyncHTTPProxy(AsyncConnectionPool):
- """
- A connection pool that sends requests via an HTTP proxy.
- """
-
- def __init__(
- self,
- proxy_url: Union[URL, bytes, str],
- proxy_auth: Optional[Tuple[Union[bytes, str], Union[bytes, str]]] = None,
- proxy_headers: Union[HeadersAsMapping, HeadersAsSequence, None] = None,
- ssl_context: Optional[ssl.SSLContext] = None,
- max_connections: Optional[int] = 10,
- max_keepalive_connections: Optional[int] = None,
- keepalive_expiry: Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- retries: int = 0,
- local_address: Optional[str] = None,
- uds: Optional[str] = None,
- network_backend: Optional[AsyncNetworkBackend] = None,
- socket_options: Optional[Iterable[SOCKET_OPTION]] = None,
- ) -> None:
- """
- A connection pool for making HTTP requests.
-
- Parameters:
- proxy_url: The URL to use when connecting to the proxy server.
- For example `"http://127.0.0.1:8080/"`.
- proxy_auth: Any proxy authentication as a two-tuple of
- (username, password). May be either bytes or ascii-only str.
- proxy_headers: Any HTTP headers to use for the proxy requests.
- For example `{"Proxy-Authorization": "Basic :"}`.
- ssl_context: An SSL context to use for verifying connections.
- If not specified, the default `httpcore.default_ssl_context()`
- will be used.
- max_connections: The maximum number of concurrent HTTP connections that
- the pool should allow. Any attempt to send a request on a pool that
- would exceed this amount will block until a connection is available.
- max_keepalive_connections: The maximum number of idle HTTP connections
- that will be maintained in the pool.
- keepalive_expiry: The duration in seconds that an idle HTTP connection
- may be maintained for before being expired from the pool.
- http1: A boolean indicating if HTTP/1.1 requests should be supported
- by the connection pool. Defaults to True.
- http2: A boolean indicating if HTTP/2 requests should be supported by
- the connection pool. Defaults to False.
- retries: The maximum number of retries when trying to establish
- a connection.
- local_address: Local address to connect from. Can also be used to
- connect using a particular address family. Using
- `local_address="0.0.0.0"` will connect using an `AF_INET` address
- (IPv4), while using `local_address="::"` will connect using an
- `AF_INET6` address (IPv6).
- uds: Path to a Unix Domain Socket to use instead of TCP sockets.
- network_backend: A backend instance to use for handling network I/O.
- """
- super().__init__(
- ssl_context=ssl_context,
- max_connections=max_connections,
- max_keepalive_connections=max_keepalive_connections,
- keepalive_expiry=keepalive_expiry,
- http1=http1,
- http2=http2,
- network_backend=network_backend,
- retries=retries,
- local_address=local_address,
- uds=uds,
- socket_options=socket_options,
- )
- self._ssl_context = ssl_context
- self._proxy_url = enforce_url(proxy_url, name="proxy_url")
- self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
- if proxy_auth is not None:
- username = enforce_bytes(proxy_auth[0], name="proxy_auth")
- password = enforce_bytes(proxy_auth[1], name="proxy_auth")
- authorization = build_auth_header(username, password)
- self._proxy_headers = [
- (b"Proxy-Authorization", authorization)
- ] + self._proxy_headers
-
- def create_connection(self, origin: Origin) -> AsyncConnectionInterface:
- if origin.scheme == b"http":
- return AsyncForwardHTTPConnection(
- proxy_origin=self._proxy_url.origin,
- proxy_headers=self._proxy_headers,
- remote_origin=origin,
- keepalive_expiry=self._keepalive_expiry,
- network_backend=self._network_backend,
- )
- return AsyncTunnelHTTPConnection(
- proxy_origin=self._proxy_url.origin,
- proxy_headers=self._proxy_headers,
- remote_origin=origin,
- ssl_context=self._ssl_context,
- keepalive_expiry=self._keepalive_expiry,
- http1=self._http1,
- http2=self._http2,
- network_backend=self._network_backend,
- )
-
-
-class AsyncForwardHTTPConnection(AsyncConnectionInterface):
- def __init__(
- self,
- proxy_origin: Origin,
- remote_origin: Origin,
- proxy_headers: Union[HeadersAsMapping, HeadersAsSequence, None] = None,
- keepalive_expiry: Optional[float] = None,
- network_backend: Optional[AsyncNetworkBackend] = None,
- socket_options: Optional[Iterable[SOCKET_OPTION]] = None,
- ) -> None:
- self._connection = AsyncHTTPConnection(
- origin=proxy_origin,
- keepalive_expiry=keepalive_expiry,
- network_backend=network_backend,
- socket_options=socket_options,
- )
- self._proxy_origin = proxy_origin
- self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
- self._remote_origin = remote_origin
-
- async def handle_async_request(self, request: Request) -> Response:
- headers = merge_headers(self._proxy_headers, request.headers)
- url = URL(
- scheme=self._proxy_origin.scheme,
- host=self._proxy_origin.host,
- port=self._proxy_origin.port,
- target=bytes(request.url),
- )
- proxy_request = Request(
- method=request.method,
- url=url,
- headers=headers,
- content=request.stream,
- extensions=request.extensions,
- )
- return await self._connection.handle_async_request(proxy_request)
-
- def can_handle_request(self, origin: Origin) -> bool:
- return origin == self._remote_origin
-
- async def aclose(self) -> None:
- await self._connection.aclose()
-
- def info(self) -> str:
- return self._connection.info()
-
- def is_available(self) -> bool:
- return self._connection.is_available()
-
- def has_expired(self) -> bool:
- return self._connection.has_expired()
-
- def is_idle(self) -> bool:
- return self._connection.is_idle()
-
- def is_closed(self) -> bool:
- return self._connection.is_closed()
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} [{self.info()}]>"
-
-
-class AsyncTunnelHTTPConnection(AsyncConnectionInterface):
- def __init__(
- self,
- proxy_origin: Origin,
- remote_origin: Origin,
- ssl_context: Optional[ssl.SSLContext] = None,
- proxy_headers: Optional[Sequence[Tuple[bytes, bytes]]] = None,
- keepalive_expiry: Optional[float] = None,
- http1: bool = True,
- http2: bool = False,
- network_backend: Optional[AsyncNetworkBackend] = None,
- socket_options: Optional[Iterable[SOCKET_OPTION]] = None,
- ) -> None:
- self._connection: AsyncConnectionInterface = AsyncHTTPConnection(
- origin=proxy_origin,
- keepalive_expiry=keepalive_expiry,
- network_backend=network_backend,
- socket_options=socket_options,
- )
- self._proxy_origin = proxy_origin
- self._remote_origin = remote_origin
- self._ssl_context = ssl_context
- self._proxy_headers = enforce_headers(proxy_headers, name="proxy_headers")
- self._keepalive_expiry = keepalive_expiry
- self._http1 = http1
- self._http2 = http2
- self._connect_lock = AsyncLock()
- self._connected = False
-
- async def handle_async_request(self, request: Request) -> Response:
- timeouts = request.extensions.get("timeout", {})
- timeout = timeouts.get("connect", None)
-
- async with self._connect_lock:
- if not self._connected:
- target = b"%b:%d" % (self._remote_origin.host, self._remote_origin.port)
-
- connect_url = URL(
- scheme=self._proxy_origin.scheme,
- host=self._proxy_origin.host,
- port=self._proxy_origin.port,
- target=target,
- )
- connect_headers = merge_headers(
- [(b"Host", target), (b"Accept", b"*/*")], self._proxy_headers
- )
- connect_request = Request(
- method=b"CONNECT",
- url=connect_url,
- headers=connect_headers,
- extensions=request.extensions,
- )
- connect_response = await self._connection.handle_async_request(
- connect_request
- )
-
- if connect_response.status < 200 or connect_response.status > 299:
- reason_bytes = connect_response.extensions.get("reason_phrase", b"")
- reason_str = reason_bytes.decode("ascii", errors="ignore")
- msg = "%d %s" % (connect_response.status, reason_str)
- await self._connection.aclose()
- raise ProxyError(msg)
-
- stream = connect_response.extensions["network_stream"]
-
- # Upgrade the stream to SSL
- ssl_context = (
- default_ssl_context()
- if self._ssl_context is None
- else self._ssl_context
- )
- alpn_protocols = ["http/1.1", "h2"] if self._http2 else ["http/1.1"]
- ssl_context.set_alpn_protocols(alpn_protocols)
-
- kwargs = {
- "ssl_context": ssl_context,
- "server_hostname": self._remote_origin.host.decode("ascii"),
- "timeout": timeout,
- }
- async with Trace("start_tls", logger, request, kwargs) as trace:
- stream = await stream.start_tls(**kwargs)
- trace.return_value = stream
-
- # Determine if we should be using HTTP/1.1 or HTTP/2
- ssl_object = stream.get_extra_info("ssl_object")
- http2_negotiated = (
- ssl_object is not None
- and ssl_object.selected_alpn_protocol() == "h2"
- )
-
- # Create the HTTP/1.1 or HTTP/2 connection
- if http2_negotiated or (self._http2 and not self._http1):
- from .http2 import AsyncHTTP2Connection
-
- self._connection = AsyncHTTP2Connection(
- origin=self._remote_origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
- else:
- self._connection = AsyncHTTP11Connection(
- origin=self._remote_origin,
- stream=stream,
- keepalive_expiry=self._keepalive_expiry,
- )
-
- self._connected = True
- return await self._connection.handle_async_request(request)
-
- def can_handle_request(self, origin: Origin) -> bool:
- return origin == self._remote_origin
-
- async def aclose(self) -> None:
- await self._connection.aclose()
-
- def info(self) -> str:
- return self._connection.info()
-
- def is_available(self) -> bool:
- return self._connection.is_available()
-
- def has_expired(self) -> bool:
- return self._connection.has_expired()
-
- def is_idle(self) -> bool:
- return self._connection.is_idle()
-
- def is_closed(self) -> bool:
- return self._connection.is_closed()
-
- def __repr__(self) -> str:
- return f"<{self.__class__.__name__} [{self.info()}]>"
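The two connection classes above cover the two standard proxying modes: plain `http://` origins are forwarded through the proxy using an absolute-form request target, while `https://` origins are first tunnelled with a `CONNECT` request and then upgraded to TLS. A rough usage sketch against the pool that creates them, assuming this module is a vendored copy of httpcore's async proxy support (the proxy address and credentials below are placeholders):

```python
import asyncio
import httpcore

async def main() -> None:
    async with httpcore.AsyncHTTPProxy(
        proxy_url="http://127.0.0.1:8080/",   # placeholder proxy address
        proxy_auth=("user", "secret"),        # adds a Proxy-Authorization header
    ) as proxy:
        # http origin -> handled by AsyncForwardHTTPConnection
        plain = await proxy.request("GET", "http://example.com/")
        # https origin -> handled by AsyncTunnelHTTPConnection (CONNECT, then TLS)
        secure = await proxy.request("GET", "https://example.com/")
        print(plain.status, secure.status)

asyncio.run(main())
```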
diff --git a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/bias_act.cpp b/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/bias_act.cpp
deleted file mode 100644
index 3adaeee2ae44e96655d354c2bdfb81de8ebfe6c6..0000000000000000000000000000000000000000
--- a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/bias_act.cpp
+++ /dev/null
@@ -1,99 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "bias_act.h"
-
-//------------------------------------------------------------------------
-
-static bool has_same_layout(torch::Tensor x, torch::Tensor y)
-{
- if (x.dim() != y.dim())
- return false;
- for (int64_t i = 0; i < x.dim(); i++)
- {
- if (x.size(i) != y.size(i))
- return false;
- if (x.size(i) >= 2 && x.stride(i) != y.stride(i))
- return false;
- }
- return true;
-}
-
-//------------------------------------------------------------------------
-
-static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp)
-{
- // Validate arguments.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x");
- TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x");
- TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x");
- TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x");
- TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
- TORCH_CHECK(b.dim() == 1, "b must have rank 1");
- TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds");
- TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements");
- TORCH_CHECK(grad >= 0, "grad must be non-negative");
-
- // Validate layout.
- TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense");
- TORCH_CHECK(b.is_contiguous(), "b must be contiguous");
- TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x");
- TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x");
- TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x");
-
- // Create output tensor.
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
- torch::Tensor y = torch::empty_like(x);
- TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x");
-
- // Initialize CUDA kernel parameters.
- bias_act_kernel_params p;
- p.x = x.data_ptr();
- p.b = (b.numel()) ? b.data_ptr() : NULL;
- p.xref = (xref.numel()) ? xref.data_ptr() : NULL;
- p.yref = (yref.numel()) ? yref.data_ptr() : NULL;
- p.dy = (dy.numel()) ? dy.data_ptr() : NULL;
- p.y = y.data_ptr();
- p.grad = grad;
- p.act = act;
- p.alpha = alpha;
- p.gain = gain;
- p.clamp = clamp;
- p.sizeX = (int)x.numel();
- p.sizeB = (int)b.numel();
- p.stepB = (b.numel()) ? (int)x.stride(dim) : 1;
-
- // Choose CUDA kernel.
- void* kernel;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
- {
- kernel = choose_bias_act_kernel<scalar_t>(p);
- });
- TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func");
-
- // Launch CUDA kernel.
- p.loopX = 4;
- int blockSize = 4 * 32;
- int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1;
- void* args[] = {&p};
- AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
- return y;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("bias_act", &bias_act);
-}
-
-//------------------------------------------------------------------------
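The file above only binds a single `bias_act` entry point via pybind11; the CUDA kernels live in companion `bias_act.cu` / `bias_act.h` sources that define `bias_act_kernel_params` and `choose_bias_act_kernel`. A hedged sketch of how an extension like this is typically compiled and called from Python with `torch.utils.cpp_extension.load` (file locations and the meaning of the activation/clamp codes are assumptions, not taken from this diff):

```python
import torch
from torch.utils.cpp_extension import load

# Build the extension; bias_act.cu is the assumed companion CUDA source.
plugin = load(name="bias_act_plugin",
              sources=["bias_act.cpp", "bias_act.cu"],
              verbose=True)

x = torch.randn(8, 16, device="cuda")
b = torch.zeros(16, device="cuda")      # one bias value per channel along dim=1
empty = torch.empty(0, device="cuda")   # xref / yref / dy are unused in forward mode

# Argument order mirrors the C++ signature bound above:
#   bias_act(x, b, xref, yref, dy, grad, dim, act, alpha, gain, clamp)
# act=1 and clamp=-1 follow NVIDIA's reference conventions (assumed here:
# 1 = linear activation, negative clamp = clamping disabled).
y = plugin.bias_act(x, b, empty, empty, empty, 0, 1, 1, 0.0, 1.0, -1.0)
print(y.shape)
```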
diff --git a/spaces/ldhldh/demo/README.md b/spaces/ldhldh/demo/README.md
deleted file mode 100644
index 81b1999f61e60597f9a13b625b623126aa6d1de1..0000000000000000000000000000000000000000
--- a/spaces/ldhldh/demo/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: 🤗 KoRWKV-1.5B 🔥Streaming🔥
-emoji: 💻
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: ldhldh/polyglot_ko_1.3B_PEFT_demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Graitec Arche OMD 2009 Fr.47.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Graitec Arche OMD 2009 Fr.47.md
deleted file mode 100644
index 0204801937d50c3117149545d0fa067ef1e6f1c8..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Graitec Arche OMD 2009 Fr.47.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
What You Need to Know About Graitec Arche OMD 2009 Fr.47
-
If you are a concrete structure engineer, you might have heard of Graitec Arche OMD 2009 Fr.47, software that simplifies and optimizes the design and construction stages of a concrete structure project. It is a powerful, user-friendly package that helps you create and manage concrete structure models, perform structural analysis and design, generate detailed drawings and reports, and export your data to other software such as Revit or AutoCAD.
In this article, we will give you an overview of the features and benefits of Graitec Arche OMD 2009 Fr.47, and how you can use it to improve your workflow and productivity.
-
Features of Graitec Arche OMD 2009 Fr.47
-
Graitec Arche OMD 2009 Fr.47 is a comprehensive package that covers all aspects of concrete structure design, from modeling to documentation. Here are some of its main features:
-
-
Modeling: You can create your concrete structure model using a graphical interface that allows you to draw elements such as beams, columns, slabs, walls, foundations, openings, stairs, etc. You can also import your model from other software such as Revit or AutoCAD.
-
Analysis: You can perform static and dynamic analysis of your concrete structure model using the integrated finite element solver that takes into account the material properties, loads, supports, and boundary conditions. You can also perform seismic analysis and fire resistance analysis.
-
Design: You can design your concrete structure elements according to various codes and standards such as Eurocode 2, ACI 318, BS 8110, etc. You can also optimize your design by checking the reinforcement ratios, deflections, cracks, stresses, etc.
-
Documentation: You can generate detailed drawings and reports of your concrete structure model that include dimensions, annotations, symbols, schedules, quantities, etc. You can also customize your drawings and reports according to your preferences and standards.
-
Data exchange: You can export your concrete structure model and data to other software such as Revit or AutoCAD using various formats such as IFC, DWG, DXF, etc. You can also import data from other software such as Excel or Word.
-
-
Benefits of Graitec Arche OMD 2009 Fr.47
-
Graitec Arche OMD 2009 Fr.47 is a software that offers many benefits for concrete structure engineers who want to improve their workflow and productivity. Here are some of the benefits of Graitec Arche OMD 2009 Fr.47:
-
-
-
Efficiency: You can save time and resources by using a single software that covers all the stages of concrete structure design, from modeling to documentation. You can also avoid errors and inconsistencies by using a unified data model that ensures accuracy and coherence.
-
Flexibility: You can adapt your concrete structure model and data to various scenarios and requirements by using the parametric modeling capabilities that allow you to modify your model easily and quickly. You can also use the customization options that allow you to tailor your drawings and reports to your needs and standards.
-
Compatibility: You can collaborate with other professionals and stakeholders by using the data exchange features that allow you to import and export your model and data to other software such as Revit or AutoCAD. You can also use the interoperability features that allow you to integrate your model and data with other Graitec software such as Advance Steel or Advance Design.
-
-
Conclusion
-
Graitec Arche OMD 2009 Fr.47 simplifies and optimizes the design and construction stages of a concrete structure project. It is a powerful, user-friendly tool that helps you create and manage concrete structure models, perform structural analysis and design, generate detailed drawings and reports, and export your data to other software such as Revit or AutoCAD.
-
If you want to learn more about Graitec Arche OMD 2009 Fr.47, you can download it from here, or contact us for more information.
-
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Hunting Simulator Game For PC Full Version PORTABLE.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Hunting Simulator Game For PC Full Version PORTABLE.md
deleted file mode 100644
index 11296907a5e27c262a4a28430afa685a599692ca..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Hunting Simulator Game For PC Full Version PORTABLE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
One of the first of these models, BERT, is trained by taking sentences, splitting them into individual words, randomly hiding some of them, and predicting what the hidden words are. After doing this millions of times, BERT has “read” enough Shakespeare to predict how this phrase usually ends:
-
-
-
This page is hooked up to a version of BERT trained on Wikipedia and books.¹ Try clicking on different words to see how they’d be filled in, or typing in another sentence to see what else BERT has picked up on.
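As a non-interactive stand-in for the widget this page embeds, the same masked-word prediction can be reproduced with the Hugging Face fill-mask pipeline and the checkpoint named in the footnote (this snippet is illustrative, not part of the original page):

```python
from transformers import pipeline

# Same checkpoint the footnote names; this is a large download on first use.
unmasker = pipeline("fill-mask", model="bert-large-uncased-whole-word-masking")

# Top predictions for the hidden word, with their probabilities.
for pred in unmasker("The lady doth protest too [MASK], methinks."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```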
-
-
-
Cattle or Clothes?
-
Besides Hamlet’s existential dread, the text BERT was trained on also contains more patterns:
-
-
-
Cattle and horses aren’t top purchase predictions in every state, though! In New York, some of the most likely words are clothes, books and art:
-
-
-
There are more than 30,000 words, punctuation marks and word fragments in BERT’s vocabulary. Every time BERT fills in a hidden word, it assigns each of them a probability. By looking at how slightly different sentences shift those probabilities, we can get a glimpse at how purchasing patterns in different places are understood.
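One rough way to reproduce that probing offline is to score a few candidate words at the masked position under two different prompts and compare their probabilities directly. The prompt wording and candidate words below are illustrative assumptions, and each candidate is assumed to map to a single token in BERT's vocabulary:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-large-uncased-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

def candidate_probs(prompt, candidates):
    # prompt must contain the literal "[MASK]" token
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    ids = tokenizer.convert_tokens_to_ids(candidates)  # falls back to [UNK] if absent
    return {w: probs[i].item() for w, i in zip(candidates, ids)}

for state in ["Texas", "New York"]:
    print(state, candidate_probs(
        f"In {state} they like to buy [MASK].", ["cattle", "clothes", "books"]))
```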
-
-
-
You can edit these sentences. Or try one of these comparisons to get started:
-
To the extent that a computer program can “know” something, what does BERT know about where you live?
-
What’s in a Name?
-
This technique can also probe what associations BERT has learned about different groups of people. For example, it predicts people named Elsie are older than people named Lauren:
-
-
-
It’s also learned that people named Jim have more typically masculine jobs than people named Jane:
-
-
-
These aren’t just spurious correlations — Elsies really are more likely to be older than Laurens. And occupations the model associates with feminine names are held by a higher percentage of women.
-
Should we be concerned about these correlations? BERT was trained to fill in blanks in Wikipedia articles and books — it does a great job at that! The problem is that the internal representations of language these models have learned are used for much more – by some measures, they’re the best way we have of getting computers to understand and manipulate text.
-
We wouldn’t hesitate to call a conversation partner or recruiter who blithely assumed that doctors are men sexist, but that’s exactly what BERT might do if heedlessly incorporated into a chatbot or HR software:
-
-
-
Adjusting for assumptions like this isn’t trivial. Why machine learning systems produce a given output still isn’t well understood – determining if a credit model built on top of BERT rejected a loan application because of gender discrimination might be quite difficult.
-
Deploying large language models at scale also risks amplifying and perpetuating today’s harmful stereotypes. When prompted with “Two Muslims walked into a…”, for example, GPT-3 typically finishes the sentence with descriptions of violence.
-
How Can We Fix This?
-
One conceptually straightforward approach: reduce unwanted correlations from the training data to mitigate model bias.
-
Last year a version of BERT called Zari was trained with an additional set of generated sentences. For every sentence with a gendered noun, like boy or aunt, another sentence that replaced the noun with its gender-partner was added to the training data: in addition to “The lady doth protest too much,” Zari was also trained on “The gentleman doth protest too much.”
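A toy sketch of that counterfactual augmentation step (this is not the actual Zari training code, and the word list is a tiny stand-in for a fuller set of gendered term pairs):

```python
# Minimal gender-swap augmentation: for every sentence containing a gendered
# noun, add a copy with the noun replaced by its partner.
GENDER_PAIRS = {"boy": "girl", "aunt": "uncle", "lady": "gentleman"}
GENDER_PAIRS.update({v: k for k, v in list(GENDER_PAIRS.items())})  # both directions

def swap_gendered_nouns(sentence: str) -> str:
    return " ".join(GENDER_PAIRS.get(w.lower(), w) for w in sentence.split())

corpus = ["The lady doth protest too much."]
augmented = list(corpus)
for sent in corpus:
    swapped = swap_gendered_nouns(sent)
    if swapped.lower() != sent.lower():
        augmented.append(swapped)

print(augmented)
# ['The lady doth protest too much.', 'The gentleman doth protest too much.']
```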
-
-
-
Unlike BERT, Zari assigns nurses and doctors an equal probability of being a “she” or a “he” after being trained on the swapped sentences. This approach hasn’t removed all the gender correlations; because names weren’t swapped, Zari’s association between masculine names and doctors has only slightly decreased from BERT’s. And the retraining doesn’t change how the model understands nonbinary gender.
-
Something similar happened with other attempts to remove gender bias from models’ representations of words. It’s possible to mathematically define bias and perform “brain surgery” on a model to remove it, but language is steeped in gender. Large models can have billions of parameters in which to learn stereotypes — slightly different measures of bias have found the retrained models only shifted the stereotypes around to be undetectable by the initial measure.
-
As with other applications of machine learning, it’s helpful to focus instead on the actual harms that could occur. Tools like AllenNLP, LMdiff and the Language Interpretability Tool make it easier to interact with language models to find where they might be falling short. Once those shortcomings are spotted, task specific mitigation measures can be simpler to apply than modifying the entire model.
-
It’s also possible that as models grow more capable, they might be able to explain and perform some of this debiasing themselves. Instead of forcing the model to tell us the gender of “the doctor,” we could let it respond with uncertainty that’s shown to the user and controls to override assumptions.
-
Credits
-
Adam Pearce // July 2021
-
Thanks to Ben Wedin, Emily Reif, James Wexler, Fernanda Viégas, Ian Tenney, Kellie Webster, Kevin Robinson, Lucas Dixon, Ludovic Peran, Martin Wattenberg, Michael Terry, Tolga Bolukbasi, Vinodkumar Prabhakaran, Xuezhi Wang, Yannick Assogba, and Zan Armstrong for their help with this piece.
-
Footnotes
-
The BERT model used on this page is the Hugging Face version of bert-large-uncased-whole-word-masking. “BERT” also refers to a type of model architecture; hundreds of BERT models have been trained and published. The model and chart code used here are available on GitHub.
-
Notice that “1800”, “1900” and “2000” are some of the top predictions, though. People aren’t actually more likely to be born at the start of a century, but in BERT’s training corpus of books and Wikipedia articles round numbers are more common.
-
Comparing BERT and Zari in this interface requires carefully tracking tokens during a transition. The BERT Difference Plots colab has ideas for extensions to systemically look at differences between the models’ output.
-
This analysis shouldn’t stop once a model is deployed — as language and model usage shifts, it’s important to continue studying and mitigating potential harms.
-
Appendix: Differences Over Time
-
In addition to looking at how predictions for men and women are different for a given sentence, we can also chart how those differences have changed over time:
-
-
-
The convergence in more recent years suggests another potential mitigation technique: using a prefix to steer the model away from unwanted correlations while preserving its understanding of natural language.
-
Using “In $year” as the prefix is quite limited, though, as it doesn’t handle gender-neutral pronouns and potentially increases other correlations. However, it may be possible to find a better prefix that mitigates a specific type of bias with just a couple of dozen examples.
-
-
-
Closer examination of these differences in differences also shows there’s a limit to the facts we can pull out of BERT this way.
-
Below, the top row of charts shows how predicted differences in occupations between men and women change between 1908 and 2018. The rightmost chart shows the he/she difference in 1908 against the he/she difference in 2018.
-
The flat slope of the rightmost chart indicates that the he/she difference has decreased for each job by about the same amount. But in reality, shifts in occupation weren’t nearly so smooth and some occupations, like accounting, switched from being majority male to majority female.
-
-
-
This reality-prediction mismatch could be caused by lack of training data, model size or the coarseness of the probing method. There’s an immense amount of general knowledge inside of these models — with a little bit of focused training, they can even become expert trivia players.
-
More Explorables
-
\ No newline at end of file
diff --git a/spaces/merve/hidden-bias/public/hidden-bias/annotations.js b/spaces/merve/hidden-bias/public/hidden-bias/annotations.js
deleted file mode 100644
index b0fd377b443ee9bd31e7bd1d9dbacafc4e5282e3..0000000000000000000000000000000000000000
--- a/spaces/merve/hidden-bias/public/hidden-bias/annotations.js
+++ /dev/null
@@ -1,86 +0,0 @@
-window.annotations = [
- {
- "slide": 0,
- "x": 1.77,
- "y": 3.17,
- "path": "M -3,-59 A 31.215 31.215 0 1 0 -10,2",
- "text": "Joshua had a high school GPA of 3.2 and 1.8 in college",
- "textOffset": [
- -1,
- -48
- ]
- },
- {
- "slide": 0,
- "x": 2.93,
- "y": 2.08,
- "path": "M 56,61 A 45.102 45.102 0 0 0 19.000001907348633,1.0000003576278687",
- "text": "Abigail has a 2.1 in high school and 2.9 in college",
- "textOffset": [
- -5,
- 85
- ],
- "width": 18
- },
- {
- "slide": 1,
- "x": 3.7,
- "y": 2,
- "path": "M 1,41 A 209.709 209.709 0 0 1 -310,76",
- "text": "Most students have a higher GPA in high school",
- "textOffset": [
- -69,
- 11
- ],
- "width": 18
- },
- {
- "slide": 2,
- "x": 1,
- "y": 4,
- "path": "M 0 0",
- "text": "A well adjusted model will usually over predict about half the students' grades...",
- "textOffset": [
- 25,
- 50
- ],
- "width": 25
- },
- {
- "slide": 2,
- "x": 4,
- "y": 1,
- "path": "M 0 0",
- "text": "...and under predict the other half",
- "textOffset": [
- -109,
- -51
- ],
- "width": 18
- },
- {
- "slide": 5,
- "x": 2.58,
- "y": 2,
- "path": "M 54,34 A 29.707 29.707 0 0 0 11,-6",
- "text": "The model predicted both Lucas and Mia would get a 2.0, but she ended up with a higher GPA",
- "html": "The model predicted both Lucas and Mia would get a 2.0, but she ended up with a higher GPA",
- "textOffset": [
- -22,
- 44
- ],
- "width": 23
- },
- {
- "slide": 5,
- "x": 2.14,
- "y": 2,
- "path": "M 40,61 A 35.025 35.025 0 0 1 -4,7",
- "text": "",
- "textOffset": [
- -100,
- 179
- ],
- "width": 14
- }
-]
\ No newline at end of file
diff --git a/spaces/mgfrantz/pii_masking/app.py b/spaces/mgfrantz/pii_masking/app.py
deleted file mode 100644
index e95a31a37cc7fc5f0bcc5f0eb0e587438a957d38..0000000000000000000000000000000000000000
--- a/spaces/mgfrantz/pii_masking/app.py
+++ /dev/null
@@ -1,113 +0,0 @@
-from presidio_anonymizer import AnonymizerEngine
-from presidio_analyzer import AnalyzerEngine
-from presidio_anonymizer.entities import RecognizerResult, OperatorConfig
-
-from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
-import torch
-import re
-
-import gradio as gr
-
-# Initialize the engine:
-analyzer = AnalyzerEngine()
-anonymizer = AnonymizerEngine()
-
-# Create the NER pipeline
-tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER-uncased")
-tokenizer.add_tokens('<PERSON>')
-model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER-uncased")
-pipe = pipeline(model=model, tokenizer=tokenizer, task='ner')
-
-# https://microsoft.github.io/presidio/supported_entities/
-ENT_TYPES = [
-# 'PERSON',
- 'CREDIT_CARD',
- 'EMAIL_ADDRESS',
- 'IP_ADDRESS',
- 'PHONE_NUMBER'
-]
-
-def mask_names_hf(text):
- # Tokenize inputs
- inputs = tokenizer(text, return_tensors='pt', truncation=True)
- tokens = inputs.tokens()
-
- # Make inferences
- outputs = model(**inputs).logits
- predictions = torch.argmax(outputs, dim=2)
-
- # Replace tokens that are people with <PERSON>
- words = []
- for token, prediction in zip(tokens, predictions[0].numpy()):
- prediction = model.config.id2label[prediction]
- if prediction not in ('I-PER', 'B-PER'):
- words.append(token)
- elif prediction == 'B-PER':
- if words[-1] != '<PERSON>':
- words.append('<PERSON>')
- else:
- pass
- # Convert those tokens to a string
- return tokenizer.convert_tokens_to_string(words[1:-1])
-
-# def mask_names_hf(text):
-# outputs = pipe(text)
-# tokens = []
-# for token in outputs:
-# if 'PER' in token['entity']:
-# if tokens[-1] != '<PERSON>':
-# tokens.append('<PERSON>')
-# else:
-# tokens.append(token['word'])
-
-# t = tokenizer.convert_tokens_to_string(tokens)
-# return t
-
-def anonymize(text, min_len=3):
-
- # Find and replace other stuff (Presidio NER)
- ents = analyzer.analyze(text, language='en', entities=ENT_TYPES)
- results = anonymizer.anonymize(text, analyzer_results=ents)
- t = results.text
-
-# t = copy(text)
- # Find and replace names (HF NER)
- t = mask_names_hf(t)
-
- pats = re.findall('<.+?>', t)
- for p in pats:
- t = t.replace(p, p.upper().replace(' ', ''))
-
-
- t = t.replace('', '')
- return t
-
-title = "PII Masking"
-description = """
-In many applications, personally identifiable information (PII) is easy to remove from databases since a column may contain specific PII.
-Common techniques like hashing also allow the identity of these values to be preserved without exposing the contents of the value.
-
-However, it can be less straightforward to remove from unstructured text data, where PII may or may not be present.
-Further, text may contain multiple types of PII that present an increased risk of exposure when coupled together.
-For example, a name and IP address together may be used to pinpoint a specific person's location.
-Hashing the data outright is not an option since consumers of these data often prefer to work with the raw text data.
-Thus, preserving privacy in raw text data remains a challenge.
-
-This space applies both rule-based and ML-based approaches to remove names, phone numbers, emails, and IP addresses from raw text.
-This app accepts raw text and returns the same text, but with PII replaced with special tokens that preserve some characteristics of the masked entities without revealing their contents.
-"""
-
-gr.Interface(
- anonymize,
- inputs='text',
- outputs='text',
- title=title,
- description=description,
- examples=[
- "Hi, my name is Mike and my phone number is 1-234-567-9000",
- "Hi, my name is Mike and my email address is my_name@my_domain.com",
- "Hi, my name is Mike and my IP address is 127.0.0.1",
- # "Hi, my name is Mike and my credit card is 1200 3859 8281 0593"
- ]
-).launch(debug=True)
-
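For reference, the rule-based half of this pipeline can be exercised on its own with just the two Presidio engines; by default the anonymizer replaces each detected span with its entity label. A minimal sketch with illustrative output (names are untouched here because the app above handles them separately with the HF NER model):

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Hi, my name is Mike and my phone number is 1-234-567-9000"
ents = analyzer.analyze(text=text, language="en",
                        entities=["PHONE_NUMBER", "EMAIL_ADDRESS",
                                  "IP_ADDRESS", "CREDIT_CARD"])
masked = anonymizer.anonymize(text=text, analyzer_results=ents).text
print(masked)
# Illustrative output: "Hi, my name is Mike and my phone number is <PHONE_NUMBER>"
```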
diff --git a/spaces/michaelwja/maskformer-satellite-trees-gradio/app.py b/spaces/michaelwja/maskformer-satellite-trees-gradio/app.py
deleted file mode 100644
index db4d09be9e021680ff53723ff21d7de22020fa74..0000000000000000000000000000000000000000
--- a/spaces/michaelwja/maskformer-satellite-trees-gradio/app.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import glob
-import gradio as gr
-import numpy as np
-from os import environ
-import psutil  # required by get_system_memory() below
-from PIL import Image
-from torchvision import transforms as T
-from transformers import MaskFormerForInstanceSegmentation, MaskFormerImageProcessor
-
-
-example_images = sorted(glob.glob('examples/map*.jpg'))
-
-ade_mean=[0.485, 0.456, 0.406]
-ade_std=[0.229, 0.224, 0.225]
-
-test_transform = T.Compose([
- T.ToTensor(),
- T.Normalize(mean=ade_mean, std=ade_std)
-])
-
-palette = [
- [120, 120, 120], [4, 200, 4], [4, 4, 250], [6, 230, 230],
- [80, 50, 50], [120, 120, 80], [140, 140, 140], [204, 5, 255]
-]
-
-model_id = f"thiagohersan/maskformer-satellite-trees"
-vegetation_labels = ["vegetation"]
-
-# preprocessor = MaskFormerImageProcessor.from_pretrained(model_id)
-preprocessor = MaskFormerImageProcessor(
- do_resize=False,
- do_normalize=False,
- do_rescale=False,
- ignore_index=255,
- reduce_labels=False
-)
-
-hf_token = environ.get('HFTOKEN')
-model = MaskFormerForInstanceSegmentation.from_pretrained(model_id, use_auth_token=hf_token)
-
-
-def visualize_instance_seg_mask(img_in, mask, id2label, included_labels):
- img_out = np.zeros((mask.shape[0], mask.shape[1], 3))
- image_total_pixels = mask.shape[0] * mask.shape[1]
- label_ids = np.unique(mask)
-
- id2color = {id: palette[id] for id in label_ids}
- id2count = {id: 0 for id in label_ids}
-
- for i in range(img_out.shape[0]):
- for j in range(img_out.shape[1]):
- img_out[i, j, :] = id2color[mask[i, j]]
- id2count[mask[i, j]] = id2count[mask[i, j]] + 1
-
- image_res = (0.5 * img_in + 0.5 * img_out).astype(np.uint8)
-
- dataframe = [[
- f"{id2label[id]}",
- f"{(100 * id2count[id] / image_total_pixels):.2f} %",
- f"{np.sqrt(id2count[id] / image_total_pixels):.2f} m"
- ] for id in label_ids if id2label[id] in included_labels]
-
- if len(dataframe) < 1:
- dataframe = [[
- f"",
- f"{(0):.2f} %",
- f"{(0):.2f} m"
- ]]
-
- return image_res, dataframe
-
-
-def query_image(image_path):
- img = np.array(Image.open(image_path))
- img_size = (img.shape[0], img.shape[1])
- inputs = preprocessor(images=test_transform(img), return_tensors="pt")
- outputs = model(**inputs)
- results = preprocessor.post_process_semantic_segmentation(outputs=outputs, target_sizes=[img_size])[0]
- mask_img, dataframe = visualize_instance_seg_mask(img, results.numpy(), model.config.id2label, vegetation_labels)
- return mask_img, dataframe
-
-def get_system_memory():
- memory = psutil.virtual_memory()
- memory_percent = memory.percent
- memory_used = memory.used / (1024.0 ** 3)
- memory_total = memory.total / (1024.0 ** 3)
- return {"percent": f"{memory_percent}%", "used": f"{memory_used:.3f}GB", "total": f"{memory_total:.3f}GB"}
-
-demo = gr.Interface(
- title="Maskformer Satellite+Trees",
- description="Using a finetuned version of the [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade) model (created specifically to work with satellite images) to calculate percentage of pixels in an image that belong to vegetation.",
-
- fn=query_image,
- inputs=[gr.Image(type="filepath", label="Input Image")],
- outputs=[
- gr.Image(label="Vegetation"),
- gr.DataFrame(label="Info", headers=["Object Label", "Pixel Percent", "Square Length"])
- ],
- examples=example_images,
- cache_examples=True,
- allow_flagging="never",
- analytics_enabled=None
-)
-
-demo.launch(show_api=True)
diff --git a/spaces/mies8888/intfloat-multilingual-e5-large/app.py b/spaces/mies8888/intfloat-multilingual-e5-large/app.py
deleted file mode 100644
index 4cf950f8940d918ad7ee9d2cbbf426a2ec30c0e8..0000000000000000000000000000000000000000
--- a/spaces/mies8888/intfloat-multilingual-e5-large/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/intfloat/multilingual-e5-large").launch()
\ No newline at end of file
diff --git a/spaces/mimiboy/biying/Dockerfile b/spaces/mimiboy/biying/Dockerfile
deleted file mode 100644
index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000
--- a/spaces/mimiboy/biying/Dockerfile
+++ /dev/null
@@ -1,7 +0,0 @@
-FROM weaigc/bingo:latest
-
-ARG DEBIAN_FRONTEND=noninteractive
-
-ENV BING_HEADER ""
-
-CMD npm start
diff --git a/spaces/mixcard/image-captioning-ru/README.md b/spaces/mixcard/image-captioning-ru/README.md
deleted file mode 100644
index ae585d7c4484a33ae5f65630f008125a4ad88f7c..0000000000000000000000000000000000000000
--- a/spaces/mixcard/image-captioning-ru/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image Captioning Ru
-emoji: 👀
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.46.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mrneuralnet/P-DFD/dataset/faceforensics.py b/spaces/mrneuralnet/P-DFD/dataset/faceforensics.py
deleted file mode 100644
index baf9fa43f250e6e585a6f1f771e17250373a5f0a..0000000000000000000000000000000000000000
--- a/spaces/mrneuralnet/P-DFD/dataset/faceforensics.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import torch
-import numpy as np
-from os.path import join
-from dataset import AbstractDataset
-
-METHOD = ['all', 'Deepfakes', 'Face2Face', 'FaceSwap', 'NeuralTextures']
-SPLIT = ['train', 'val', 'test']
-COMP2NAME = {'c0': 'raw', 'c23': 'c23', 'c40': 'c40'}
-SOURCE_MAP = {'youtube': 2, 'Deepfakes': 3, 'Face2Face': 4, 'FaceSwap': 5, 'NeuralTextures': 6}
-
-
-class FaceForensics(AbstractDataset):
- """
- FaceForensics++ Dataset proposed in "FaceForensics++: Learning to Detect Manipulated Facial Images"
- """
-
- def __init__(self, cfg, seed=2022, transforms=None, transform=None, target_transform=None):
- # pre-check
- if cfg['split'] not in SPLIT:
- raise ValueError(f"split should be one of {SPLIT}, "
- f"but found {cfg['split']}.")
- if cfg['method'] not in METHOD:
- raise ValueError(f"method should be one of {METHOD}, "
- f"but found {cfg['method']}.")
- if cfg['compression'] not in COMP2NAME.keys():
- raise ValueError(f"compression should be one of {COMP2NAME.keys()}, "
- f"but found {cfg['compression']}.")
- super(FaceForensics, self).__init__(
- cfg, seed, transforms, transform, target_transform)
- print(f"Loading data from 'FF++ {cfg['method']}' of split '{cfg['split']}' "
- f"and compression '{cfg['compression']}'\nPlease wait patiently...")
-
- self.categories = ['original', 'fake']
- # load the path of dataset images
- indices = join(self.root, cfg['split'] + "_" + cfg['compression'] + ".pickle")
- indices = torch.load(indices)
- if cfg['method'] == "all":
- # full dataset
- self.images = [join(cfg['root'], _[0]) for _ in indices]
- self.targets = [_[1] for _ in indices]
- else:
- # specific manipulated method
- self.images = list()
- self.targets = list()
- nums = 0
- for _ in indices:
- if cfg['method'] in _[0]:
- self.images.append(join(cfg['root'], _[0]))
- self.targets.append(_[1])
- nums = len(self.targets)
- ori = list()
- for _ in indices:
- if "original_sequences" in _[0]:
- ori.append(join(cfg['root'], _[0]))
- choices = np.random.choice(ori, size=nums, replace=False)
- self.images.extend(choices)
- self.targets.extend([0] * nums)
- print("Data from 'FF++' loaded.\n")
- print(f"Dataset contains {len(self.images)} images.\n")
-
-
-if __name__ == '__main__':
- import yaml
-
- config_path = "../config/dataset/faceforensics.yml"
- with open(config_path) as config_file:
- config = yaml.load(config_file, Loader=yaml.FullLoader)
- config = config["train_cfg"]
- # config = config["test_cfg"]
-
- def run_dataset():
- dataset = FaceForensics(config)
- print(f"dataset: {len(dataset)}")
- for i, _ in enumerate(dataset):
- path, target = _
- print(f"path: {path}, target: {target}")
- if i >= 9:
- break
-
-
- def run_dataloader(display_samples=False):
- from torch.utils import data
- import matplotlib.pyplot as plt
-
- dataset = FaceForensics(config)
- dataloader = data.DataLoader(dataset, batch_size=8, shuffle=True)
- print(f"dataset: {len(dataset)}")
- for i, _ in enumerate(dataloader):
- path, targets = _
- image = dataloader.dataset.load_item(path)
- print(f"image: {image.shape}, target: {targets}")
- if display_samples:
- plt.figure()
- img = image[0].permute([1, 2, 0]).numpy()
- plt.imshow(img)
- # plt.savefig("./img_" + str(i) + ".png")
- plt.show()
- if i >= 9:
- break
-
-
- ###########################
- # run the functions below #
- ###########################
-
- # run_dataset()
- run_dataloader(False)
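The `__main__` block above reads its settings from `config/dataset/faceforensics.yml`, which is not included in this diff. Judging from the keys the class accesses, the `train_cfg` block needs roughly the following fields (the paths are placeholders, and any additional keys consumed by `AbstractDataset` are not shown):

```python
# Hypothetical contents of the train_cfg block, expressed as a Python dict.
train_cfg = {
    "root": "/path/to/FaceForensics",  # directory holding <split>_<compression>.pickle indices
    "split": "train",                  # one of SPLIT: "train" / "val" / "test"
    "method": "all",                   # "all" or a single manipulation from METHOD
    "compression": "c23",              # one of COMP2NAME: "c0" / "c23" / "c40"
}
# FaceForensics(train_cfg) would then load /path/to/FaceForensics/train_c23.pickle,
# a torch-saved sequence of (relative_image_path, label) pairs.
```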
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py
deleted file mode 100644
index 3f70e73d6a37d32e05b6cf0e87f42e13c467cd52..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py
+++ /dev/null
@@ -1,473 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import math
-import time
-
-import torch as th
-from torch import nn
-from torch.nn import functional as F
-
-from .resample import downsample2, upsample2
-from .utils import capture_init
-
-
-class BLSTM(nn.Module):
- def __init__(self, dim, layers=2, bi=True):
- super().__init__()
- klass = nn.LSTM
- self.lstm = klass(
- bidirectional=bi, num_layers=layers, hidden_size=dim, input_size=dim
- )
- self.linear = None
- if bi:
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x, hidden=None):
- x, hidden = self.lstm(x, hidden)
- if self.linear:
- x = self.linear(x)
- return x, hidden
-
-
-def rescale_conv(conv, reference):
- std = conv.weight.std().detach()
- scale = (std / reference)**0.5
- conv.weight.data /= scale
- if conv.bias is not None:
- conv.bias.data /= scale
-
-
-def rescale_module(module, reference):
- for sub in module.modules():
- if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)):
- rescale_conv(sub, reference)
-
-
-class Demucs(nn.Module):
- """
- Demucs speech enhancement model.
- Args:
- - chin (int): number of input channels.
- - chout (int): number of output channels.
- - hidden (int): number of initial hidden channels.
- - depth (int): number of layers.
- - kernel_size (int): kernel size for each layer.
- - stride (int): stride for each layer.
- - causal (bool): if false, uses BiLSTM instead of LSTM.
- - resample (int): amount of resampling to apply to the input/output.
- Can be one of 1, 2 or 4.
- - growth (float): number of channels is multiplied by this for every layer.
- - max_hidden (int): maximum number of channels. Can be useful to
- control the size/speed of the model.
- - normalize (bool): if true, normalize the input.
- - glu (bool): if true uses GLU instead of ReLU in 1x1 convolutions.
- - rescale (float): controls custom weight initialization.
- See https://arxiv.org/abs/1911.13254.
- - floor (float): stability flooring when normalizing.
-
- """
- @capture_init
- def __init__(self,
- chin=1,
- chout=1,
- hidden=48,
- depth=5,
- kernel_size=8,
- stride=4,
- causal=True,
- resample=4,
- growth=2,
- max_hidden=10_000,
- normalize=True,
- glu=True,
- rescale=0.1,
- floor=1e-3):
-
- super().__init__()
- if resample not in [1, 2, 4]:
- raise ValueError("Resample should be 1, 2 or 4.")
-
- self.chin = chin
- self.chout = chout
- self.hidden = hidden
- self.depth = depth
- self.kernel_size = kernel_size
- self.stride = stride
- self.causal = causal
- self.floor = floor
- self.resample = resample
- self.normalize = normalize
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
- activation = nn.GLU(1) if glu else nn.ReLU()
- ch_scale = 2 if glu else 1
-
- for index in range(depth):
- encode = []
- encode += [
- nn.Conv1d(chin, hidden, kernel_size, stride),
- nn.ReLU(),
- nn.Conv1d(hidden, hidden * ch_scale, 1), activation,
- ]
- self.encoder.append(nn.Sequential(*encode))
-
- decode = []
- decode += [
- nn.Conv1d(hidden, ch_scale * hidden, 1), activation,
- nn.ConvTranspose1d(hidden, chout, kernel_size, stride),
- ]
- if index > 0:
- decode.append(nn.ReLU())
- self.decoder.insert(0, nn.Sequential(*decode))
- chout = hidden
- chin = hidden
- hidden = min(int(growth * hidden), max_hidden)
-
- self.lstm = BLSTM(chin, bi=not causal)
- if rescale:
- rescale_module(self, reference=rescale)
-
- def valid_length(self, length):
- """
- Return the nearest valid length to use with the model so that
- there are no time steps left over in the convolutions, e.g. for all
- layers, size of the input - kernel_size % stride = 0.
-
- If the mixture has a valid length, the estimated sources
- will have exactly the same length.
- """
- length = math.ceil(length * self.resample)
- for _ in range(self.depth):
- length = math.ceil((length - self.kernel_size) / self.stride) + 1
- length = max(length, 1)
- for _ in range(self.depth):
- length = (length - 1) * self.stride + self.kernel_size
- length = int(math.ceil(length / self.resample))
- return int(length)
-
- @property
- def total_stride(self):
- return self.stride ** self.depth // self.resample
-
- def forward(self, mix):
- if mix.dim() == 2:
- mix = mix.unsqueeze(1)
-
- if self.normalize:
- mono = mix.mean(dim=1, keepdim=True)
- std = mono.std(dim=-1, keepdim=True)
- mix = mix / (self.floor + std)
- else:
- std = 1
- length = mix.shape[-1]
- x = mix
- x = F.pad(x, (0, self.valid_length(length) - length))
- if self.resample == 2:
- x = upsample2(x)
- elif self.resample == 4:
- x = upsample2(x)
- x = upsample2(x)
- skips = []
- for encode in self.encoder:
- x = encode(x)
- skips.append(x)
- x = x.permute(2, 0, 1)
- x, _ = self.lstm(x)
- x = x.permute(1, 2, 0)
- for decode in self.decoder:
- skip = skips.pop(-1)
- x = x + skip[..., :x.shape[-1]]
- x = decode(x)
- if self.resample == 2:
- x = downsample2(x)
- elif self.resample == 4:
- x = downsample2(x)
- x = downsample2(x)
-
- x = x[..., :length]
- return std * x
-
-
-def fast_conv(conv, x):
- """
- Faster convolution evaluation if either kernel size is 1
- or length of sequence is 1.
- """
- batch, chin, length = x.shape
- chout, chin, kernel = conv.weight.shape
- assert batch == 1
- if kernel == 1:
- x = x.view(chin, length)
- out = th.addmm(conv.bias.view(-1, 1),
- conv.weight.view(chout, chin), x)
- elif length == kernel:
- x = x.view(chin * kernel, 1)
- out = th.addmm(conv.bias.view(-1, 1),
- conv.weight.view(chout, chin * kernel), x)
- else:
- out = conv(x)
- return out.view(batch, chout, -1)
-
-
-class DemucsStreamer:
- """
- Streaming implementation for Demucs. It supports being fed with any amount
- of audio at a time. You will get back as much audio as possible at that
- point.
-
- Args:
- - demucs (Demucs): Demucs model.
- - dry (float): amount of dry (e.g. input) signal to keep. 0 is maximum
- noise removal, 1 just returns the input signal. Small values > 0
- allows to limit distortions.
- - num_frames (int): number of frames to process at once. Higher values
- will increase overall latency but improve the real time factor.
- - resample_lookahead (int): extra lookahead used for the resampling.
- - resample_buffer (int): size of the buffer of previous inputs/outputs
- kept for resampling.
- """
- def __init__(self, demucs,
- dry=0,
- num_frames=1,
- resample_lookahead=64,
- resample_buffer=256):
- device = next(iter(demucs.parameters())).device
- self.demucs = demucs
- self.lstm_state = None
- self.conv_state = None
- self.dry = dry
- self.resample_lookahead = resample_lookahead
- resample_buffer = min(demucs.total_stride, resample_buffer)
- self.resample_buffer = resample_buffer
- self.frame_length = demucs.valid_length(1) + \
- demucs.total_stride * (num_frames - 1)
- self.total_length = self.frame_length + self.resample_lookahead
- self.stride = demucs.total_stride * num_frames
- self.resample_in = th.zeros(demucs.chin, resample_buffer, device=device)
- self.resample_out = th.zeros(
- demucs.chin, resample_buffer, device=device
- )
-
- self.frames = 0
- self.total_time = 0
- self.variance = 0
- self.pending = th.zeros(demucs.chin, 0, device=device)
-
- bias = demucs.decoder[0][2].bias
- weight = demucs.decoder[0][2].weight
- chin, chout, kernel = weight.shape
- self._bias = bias.view(-1, 1).repeat(1, kernel).view(-1, 1)
- self._weight = weight.permute(1, 2, 0).contiguous()
-
- def reset_time_per_frame(self):
- self.total_time = 0
- self.frames = 0
-
- @property
- def time_per_frame(self):
- return self.total_time / self.frames
-
- def flush(self):
- """
- Flush remaining audio by padding it with zero. Call this
- when you have no more input and want to get back the last chunk of audio.
- """
- pending_length = self.pending.shape[1]
- padding = th.zeros(
- self.demucs.chin, self.total_length, device=self.pending.device
- )
- out = self.feed(padding)
- return out[:, :pending_length]
-
- def feed(self, wav):
- """
- Apply the model to mix using true real time evaluation.
- Normalization is done online as is the resampling.
- """
- begin = time.time()
- demucs = self.demucs
- resample_buffer = self.resample_buffer
- stride = self.stride
- resample = demucs.resample
-
- if wav.dim() != 2:
- raise ValueError("input wav should be two dimensional.")
- chin, _ = wav.shape
- if chin != demucs.chin:
- raise ValueError(f"Expected {demucs.chin} channels, got {chin}")
-
- self.pending = th.cat([self.pending, wav], dim=1)
- outs = []
- while self.pending.shape[1] >= self.total_length:
- self.frames += 1
- frame = self.pending[:, :self.total_length]
- dry_signal = frame[:, :stride]
- if demucs.normalize:
- mono = frame.mean(0)
- variance = (mono**2).mean()
- self.variance = variance / self.frames + \
- (1 - 1 / self.frames) * self.variance
- frame = frame / (demucs.floor + math.sqrt(self.variance))
- frame = th.cat([self.resample_in, frame], dim=-1)
- self.resample_in[:] = frame[:, stride - resample_buffer:stride]
-
- if resample == 4:
- frame = upsample2(upsample2(frame))
- elif resample == 2:
- frame = upsample2(frame)
- # remove pre sampling buffer
- frame = frame[:, resample * resample_buffer:]
- # remove extra samples after window
- frame = frame[:, :resample * self.frame_length]
-
- out, extra = self._separate_frame(frame)
- padded_out = th.cat([self.resample_out, out, extra], 1)
- self.resample_out[:] = out[:, -resample_buffer:]
- if resample == 4:
- out = downsample2(downsample2(padded_out))
- elif resample == 2:
- out = downsample2(padded_out)
- else:
- out = padded_out
-
- out = out[:, resample_buffer // resample:]
- out = out[:, :stride]
-
- if demucs.normalize:
- out *= math.sqrt(self.variance)
- out = self.dry * dry_signal + (1 - self.dry) * out
- outs.append(out)
- self.pending = self.pending[:, stride:]
-
- self.total_time += time.time() - begin
- if outs:
- out = th.cat(outs, 1)
- else:
- out = th.zeros(chin, 0, device=wav.device)
- return out
-
- def _separate_frame(self, frame):
- demucs = self.demucs
- skips = []
- next_state = []
- first = self.conv_state is None
- stride = self.stride * demucs.resample
- x = frame[None]
- for idx, encode in enumerate(demucs.encoder):
- stride //= demucs.stride
- length = x.shape[2]
- if idx == demucs.depth - 1:
- # This is slightly faster for the last conv
- x = fast_conv(encode[0], x)
- x = encode[1](x)
- x = fast_conv(encode[2], x)
- x = encode[3](x)
- else:
- if not first:
- prev = self.conv_state.pop(0)
- prev = prev[..., stride:]
- tgt = (length - demucs.kernel_size) // demucs.stride + 1
- missing = tgt - prev.shape[-1]
- offset = length - demucs.kernel_size - \
- demucs.stride * (missing - 1)
- x = x[..., offset:]
- x = encode[1](encode[0](x))
- x = fast_conv(encode[2], x)
- x = encode[3](x)
- if not first:
- x = th.cat([prev, x], -1)
- next_state.append(x)
- skips.append(x)
-
- x = x.permute(2, 0, 1)
- x, self.lstm_state = demucs.lstm(x, self.lstm_state)
- x = x.permute(1, 2, 0)
- # In the following, x contains only correct samples, i.e. the one
- # for which each time position is covered by two window of the upper
- # layer. extra contains extra samples to the right, and is used only as
- # a better padding for the online resampling.
- extra = None
- for idx, decode in enumerate(demucs.decoder):
- skip = skips.pop(-1)
- x += skip[..., :x.shape[-1]]
- x = fast_conv(decode[0], x)
- x = decode[1](x)
-
- if extra is not None:
- skip = skip[..., x.shape[-1]:]
- extra += skip[..., :extra.shape[-1]]
- extra = decode[2](decode[1](decode[0](extra)))
- x = decode[2](x)
- next_state.append(
- x[..., -demucs.stride:] - decode[2].bias.view(-1, 1)
- )
- if extra is None:
- extra = x[..., -demucs.stride:]
- else:
- extra[..., :demucs.stride] += next_state[-1]
- x = x[..., :-demucs.stride]
-
- if not first:
- prev = self.conv_state.pop(0)
- x[..., :demucs.stride] += prev
- if idx != demucs.depth - 1:
- x = decode[3](x)
- extra = decode[3](extra)
- self.conv_state = next_state
- return x[0], extra[0]
-
-
-def test():
- import argparse
- parser = argparse.ArgumentParser(
- "denoiser.demucs",
- description="Benchmark the streaming Demucs implementation, as well as "
- "checking the delta with the offline implementation.")
- parser.add_argument("--depth", default=5, type=int)
- parser.add_argument("--resample", default=4, type=int)
- parser.add_argument("--hidden", default=48, type=int)
- parser.add_argument("--sample_rate", default=16000, type=float)
- parser.add_argument("--device", default="cpu")
- parser.add_argument("-t", "--num_threads", type=int)
- parser.add_argument("-f", "--num_frames", type=int, default=1)
- args = parser.parse_args()
- if args.num_threads:
- th.set_num_threads(args.num_threads)
- sr = args.sample_rate
- sr_ms = sr / 1000
- demucs = Demucs(
- depth=args.depth, hidden=args.hidden, resample=args.resample
- ).to(args.device)
- x = th.randn(1, int(sr * 4)).to(args.device)
- out = demucs(x[None])[0]
- streamer = DemucsStreamer(demucs, num_frames=args.num_frames)
- out_rt = []
- frame_size = streamer.total_length
- with th.no_grad():
- while x.shape[1] > 0:
- out_rt.append(streamer.feed(x[:, :frame_size]))
- x = x[:, frame_size:]
- frame_size = streamer.demucs.total_stride
- out_rt.append(streamer.flush())
- out_rt = th.cat(out_rt, 1)
- model_size = sum(p.numel() for p in demucs.parameters()) * 4 / 2**20
- initial_lag = streamer.total_length / sr_ms
- tpf = 1000 * streamer.time_per_frame
- print(f"model size: {model_size:.1f}MB, ", end='')
- print(f"delta batch/streaming: {th.norm(out - out_rt) / th.norm(out):.2%}")
- print(f"initial lag: {initial_lag:.1f}ms, ", end='')
- print(f"stride: {streamer.stride * args.num_frames / sr_ms:.1f}ms")
- print(f"time per frame: {tpf:.1f}ms, ", end='')
- rtf = (1000 * streamer.time_per_frame) / (streamer.stride / sr_ms)
- print(f"RTF: {rtf:.2f}")
- print(f"Total lag with computation: {initial_lag + tpf:.1f}ms")
-
-
-if __name__ == "__main__":
- test()
diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py b/spaces/mshukor/UnIVAL/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py
deleted file mode 100644
index 02be0e7fb4213b98798c85b79e9046e9990b97fc..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py
+++ /dev/null
@@ -1,281 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-from dataclasses import dataclass, field
-from typing import List, Optional, Tuple
-
-import torch
-from fairseq import utils
-from fairseq.data import (
- Dictionary,
- TokenBlockDataset,
- data_utils,
- iterators,
-)
-from fairseq.dataclass import FairseqDataclass
-from fairseq.distributed import utils as dist_utils
-from fairseq.tasks import FairseqTask, register_task
-from omegaconf import II
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class TruncatedBPTTLMConfig(FairseqDataclass):
- data: str = field(default="???", metadata={"help": "path to data directory"})
- tokens_per_sample: int = field(
- default=1024,
- metadata={"help": "max number of tokens per sequence"},
- )
- batch_size: int = II("dataset.batch_size")
- # Some models use *max_target_positions* to know how many positional
- # embeddings to learn. We use II(...) to make it default to
- # *tokens_per_sample*, but in principle there could be more positional
- # embeddings than tokens in a single batch. This may also be irrelevant for
- # custom model implementations.
- max_target_positions: int = II("task.tokens_per_sample")
- # these will be populated automatically if not provided
- data_parallel_rank: Optional[int] = None
- data_parallel_size: Optional[int] = None
-
-
-@register_task("truncated_bptt_lm", dataclass=TruncatedBPTTLMConfig)
-class TruncatedBPTTLMTask(FairseqTask):
- def __init__(self, cfg: TruncatedBPTTLMConfig):
- super().__init__(cfg)
-
- if cfg.data_parallel_rank is None or cfg.data_parallel_size is None:
- if torch.distributed.is_initialized():
- cfg.data_parallel_rank = dist_utils.get_data_parallel_rank()
- cfg.data_parallel_size = dist_utils.get_data_parallel_world_size()
- else:
- cfg.data_parallel_rank = 0
- cfg.data_parallel_size = 1
-
- # load the dictionary
- paths = utils.split_paths(cfg.data)
- assert len(paths) > 0
- self.dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt"))
- logger.info("dictionary: {} types".format(len(self.dictionary)))
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split (e.g., train, valid, test)"""
-
- # support sharded datasets
- paths = utils.split_paths(self.cfg.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
- split_path = os.path.join(data_path, split)
-
- # each element of *data* will be a tensorized line from the original
- # text dataset, similar to ``open(split_path).readlines()``
- data = data_utils.load_indexed_dataset(
- split_path, self.dictionary, combine=combine
- )
- if data is None:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, split_path)
- )
-
- # this is similar to ``data.view(-1).split(tokens_per_sample)``
- data = TokenBlockDataset(
- data,
- data.sizes,
- block_size=self.cfg.tokens_per_sample,
- pad=None, # unused
- eos=None, # unused
- break_mode="none",
- )
-
- self.datasets[split] = TruncatedBPTTDataset(
- data=data,
- bsz_per_shard=self.cfg.batch_size,
- shard_id=self.cfg.data_parallel_rank,
- num_shards=self.cfg.data_parallel_size,
- )
-
- def dataset(self, split):
- return self.datasets[split]
-
- def get_batch_iterator(
- self, dataset, num_workers=0, epoch=1, data_buffer_size=0, **kwargs
- ):
- return iterators.EpochBatchIterator(
- dataset=dataset,
- collate_fn=self._collate_fn,
- num_workers=num_workers,
- epoch=epoch,
- buffer_size=data_buffer_size,
- # we don't use the batching functionality from EpochBatchIterator;
- # instead every item in *dataset* is a whole batch
- batch_sampler=[[i] for i in range(len(dataset))],
- disable_shuffling=True,
- )
-
- def _collate_fn(self, items: List[List[torch.Tensor]]):
- # we don't use fairseq's batching functionality, so we expect a single
- # Tensor of type List[torch.Tensor]
- assert len(items) == 1
-
- # item will have shape B x T (the last batch may have length < T)
- id, item = items[0]
- item = data_utils.collate_tokens(item, pad_idx=self.source_dictionary.pad())
- B, T = item.size()
-
- # shift item one position over and append a padding token for the target
- target = torch.nn.functional.pad(
- item[:, 1:], (0, 1, 0, 0), value=self.target_dictionary.pad()
- )
-
- # fairseq expects batches to have the following structure
- return {
- "id": torch.tensor([id]*item.size(0)),
- "net_input": {
- "src_tokens": item,
- },
- "target": target,
- "nsentences": item.size(0),
- "ntokens": item.numel(),
- }
-
- def build_dataset_for_inference(
- self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs
- ) -> torch.utils.data.Dataset:
- eos = self.source_dictionary.eos()
- dataset = TokenBlockDataset(
- src_tokens,
- src_lengths,
- block_size=None, # ignored for "eos" break mode
- pad=self.source_dictionary.pad(),
- eos=eos,
- break_mode="eos",
- )
-
- class Dataset(torch.utils.data.Dataset):
- def __getitem__(self, i):
- item = dataset[i]
- if item[-1] == eos:
- # remove eos to support generating with a prefix
- item = item[:-1]
- return (i, [item])
-
- def __len__(self):
- return len(dataset)
-
- return Dataset()
-
- def inference_step(
- self, generator, models, sample, prefix_tokens=None, constraints=None
- ):
- with torch.no_grad():
- if constraints is not None:
- raise NotImplementedError
-
- # SequenceGenerator doesn't use *src_tokens* directly, we need to
- # pass the *prefix_tokens* argument instead.
- if prefix_tokens is None and sample["net_input"]["src_tokens"].nelement():
- prefix_tokens = sample["net_input"]["src_tokens"]
-
- # begin generation with the end-of-sentence token
- bos_token = self.source_dictionary.eos()
-
- return generator.generate(
- models, sample, prefix_tokens=prefix_tokens, bos_token=bos_token
- )
-
- def eval_lm_dataloader(
- self,
- dataset,
- max_tokens: Optional[int] = 36000,
- batch_size: Optional[int] = None,
- max_positions: Optional[int] = None,
- num_shards: int = 1,
- shard_id: int = 0,
- num_workers: int = 1,
- data_buffer_size: int = 10,
- context_window: int = 0,
- ):
- if context_window > 0:
- raise NotImplementedError(
- "Transformer-XL doesn't need --context-window, try "
- "--model-overrides '{\"mem_len\":42}' instead "
- )
- return self.get_batch_iterator(
- dataset=dataset,
- max_tokens=max_tokens,
- max_sentences=batch_size,
- max_positions=max_positions,
- ignore_invalid_inputs=True,
- num_shards=num_shards,
- shard_id=shard_id,
- num_workers=num_workers,
- data_buffer_size=data_buffer_size,
- ).next_epoch_itr(shuffle=False)
-
- @property
- def source_dictionary(self):
- return self.dictionary
-
- @property
- def target_dictionary(self):
- return self.dictionary
-
-
-class TruncatedBPTTDataset(torch.utils.data.Dataset):
- def __init__(
- self,
- data: List[torch.Tensor], # ordered list of items
- bsz_per_shard, # number of items processed per GPUs per forward
- shard_id, # current GPU ID
- num_shards, # number of GPUs
- ):
- super().__init__()
- self.data = data
-
- def batchify(data, bsz):
- # Work out how cleanly we can divide the dataset into bsz parts.
- nbatch = data.size(0) // bsz
- # Trim off any extra elements that wouldn't cleanly fit (remainders).
- data = data.narrow(0, 0, nbatch * bsz)
- # Evenly divide the data across the bsz batches.
- data = data.view(bsz, -1).contiguous()
- return data
-
- # total number of sequences processed by all GPUs in each forward pass
- global_batch_size = bsz_per_shard * num_shards
-
- """
- With a 16 item dataset, bsz_per_shard=2 and num_shards=3,
- *indices* might look like:
-
- indices = [[0, 1],
- [2, 3],
- [4, 5],
- [6, 7],
- [8, 9],
- [10, 11]]
-
- The size of the TruncatedBPTTDataset instance will be 2,
- and shard 1 will see items:
-
- [(0, [data[4], data[6]]),
- (1, [data[5], data[7]])]
- """
- indices = batchify(torch.arange(len(data)), global_batch_size)
- assert indices.size(0) == global_batch_size
-
- self.my_indices = indices[
- shard_id * bsz_per_shard : (shard_id + 1) * bsz_per_shard
- ]
- assert self.my_indices.size(0) == bsz_per_shard
-
- def __len__(self):
- return self.my_indices.size(1)
-
- def __getitem__(self, i) -> Tuple[int, List[torch.Tensor]]:
- return (i, [self.data[idx] for idx in self.my_indices[:, i]])
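
To make the sharding arithmetic above concrete, here is a small self-contained sketch (not part of fairseq) that reproduces the numbers from the docstring example: 16 items, `bsz_per_shard=2`, `num_shards=3`, looking at shard 1.

```python
import torch

def batchify(data, bsz):
    # same arithmetic as TruncatedBPTTDataset.__init__ above
    nbatch = data.size(0) // bsz            # how many full columns fit
    data = data.narrow(0, 0, nbatch * bsz)  # trim the remainder
    return data.view(bsz, -1).contiguous()

bsz_per_shard, num_shards, shard_id = 2, 3, 1
indices = batchify(torch.arange(16), bsz_per_shard * num_shards)
# indices == tensor([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]])
my_indices = indices[shard_id * bsz_per_shard : (shard_id + 1) * bsz_per_shard]
print(my_indices)  # tensor([[4, 5], [6, 7]])
# so shard 1 yields items (0, [data[4], data[6]]) and (1, [data[5], data[7]])
```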
diff --git a/spaces/mshukor/UnIVAL/run_scripts/caption/eval/eval_caption_base_best.sh b/spaces/mshukor/UnIVAL/run_scripts/caption/eval/eval_caption_base_best.sh
deleted file mode 100644
index aa5450c987a123a2ccf25efe49a3810e53c6e1e7..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/run_scripts/caption/eval/eval_caption_base_best.sh
+++ /dev/null
@@ -1,103 +0,0 @@
-#!/usr/bin/env bash
-
-# The port for communication. Note that if you want to run multiple tasks on the same machine,
-# you need to specify different port numbers.
-# Number of GPUs per GPU worker
-export GPUS_PER_NODE=8
-# Number of GPU workers, for single-worker training, please set to 1
-export NUM_NODES=$SLURM_NNODES
-# The ip address of the rank-0 worker, for single-worker training, please set to localhost
-master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
-export MASTER_ADDR=$master_addr
-
-# The port for communication
-export MASTER_PORT=12350
-# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
-export RANK=$SLURM_NODEID
-
-echo "MASTER_ADDR: $MASTER_ADDR"
-echo "RANK :$RANK"
-echo "NUM_NODES :$NUM_NODES"
-echo "GPUS_PER_NODE :$GPUS_PER_NODE"
-
-export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/
-
-echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH"
-
-num_workers=0
-
-
-exp_name=eval_caption_stage_1_ofaplus_base_pretrain_s2
-
-
-
-ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival
-base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data
-base_log_dir=/work/NAT/gda2204/mshukor/logs
-
-
-
-
-bpe_dir=${ofa_dir}/utils/BPE
-user_dir=${ofa_dir}/ofa_module
-
-
-data_dir=${base_data_dir}/ofa/caption_data
-split=test # val test
-data=${data_dir}/caption_${split}.tsv # caption_val caption_test
-
-zero_shot=''
-
-
-new_base_log_dir=/lus/scratch/NAT/gda2204/SHARED/logs
-
-
-# model_name=avg_postratafusevanilla
-# path=/lus/scratch/NAT/gda2204/SHARED/logs/ofa/pretrained_models/average_models/avg_postratafusevanilla.pt
-# zero_shot='--zero-shot'
-
-model_name=unival_caption_stage_1
-path=/work/NAT/gda2204/mshukor/logs/ofa/checkpoints/caption/unival_caption_stage_1/checkpoint_best.pt
-
-
-
-result_path=${new_base_log_dir}/ofa/results/caption/eval_${model_name}_${split}
-mkdir ${result_path}
-
-selected_cols=1,4,2
-
-
-image_encoder_name=timm_resnet #vit_base_patch16_224 timm_resnet resnet
-resnet_type=resnet101
-
-
-python3 -m torch.distributed.launch \
- --nnodes=${NUM_NODES} \
- --nproc_per_node=${GPUS_PER_NODE} \
- --master_port=${MASTER_PORT} \
- --node_rank=${RANK} \
- --master_addr=${MASTER_ADDR} \
- --use_env ${ofa_dir}/evaluate.py \
- ${data} \
- --path=${path} \
- --user-dir=${user_dir} \
- --task=caption \
- --batch-size=16 \
- --log-format=simple --log-interval=10 \
- --seed=7 \
- --gen-subset=${split} \
- --results-path=${result_path} \
- --beam=5 \
- --max-len-b=22 \
- --unnormalized \
- --no-repeat-ngram-size=3 \
- --fp16 \
- --num-workers=0 \
- --patch-image-size=480 \
- ${zero_shot} \
- --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"eval_cider\":False,\"selected_cols\":\"${selected_cols}\"}"
-
-
-python ${ofa_dir}/run_scripts/caption/coco_eval.py ${result_path}/${split}_predict.json ${data_dir}/test_caption_coco_format.json
-
-
diff --git a/spaces/mueller-franzes/medfusion-app/tests/models/latent_embedders/test_vae_simple.py b/spaces/mueller-franzes/medfusion-app/tests/models/latent_embedders/test_vae_simple.py
deleted file mode 100644
index e2c8d6f3277c194a41bf8ea55e725f35de8d47eb..0000000000000000000000000000000000000000
--- a/spaces/mueller-franzes/medfusion-app/tests/models/latent_embedders/test_vae_simple.py
+++ /dev/null
@@ -1,12 +0,0 @@
-import torch
-from medical_diffusion.models.embedders.latent_embedders import VAE
-
-
-input = torch.randn((1, 3, 128, 128)) # [B, C, H, W]
-
-
-model = VAE(in_channels=3, out_channels=3, spatial_dims = 2, deep_supervision=True)
-output = model(input)
-print(output)
-
-
diff --git a/spaces/mullikine/ilambda/README.md b/spaces/mullikine/ilambda/README.md
deleted file mode 100644
index c7e34b1d76bfe8a88e9103d0c6e5f1434c359447..0000000000000000000000000000000000000000
--- a/spaces/mullikine/ilambda/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: ilambda
-emoji: ࿋
-colorFrom: black
-colorTo: white
-sdk: static
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/wow.min.js b/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/wow.min.js
deleted file mode 100644
index cbfde62ccf95257c5979fbb1fc8d8ebe25407a9c..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/genious_bgremover/carvekit/web/static/js/wow.min.js
+++ /dev/null
@@ -1,2 +0,0 @@
-/*! WOW - v0.1.9 - 2014-05-10
-* Copyright (c) 2014 Matthieu Aussaguel; Licensed MIT */(function(){var a,b,c=function(a,b){return function(){return a.apply(b,arguments)}};a=function(){function a(){}return a.prototype.extend=function(a,b){var c,d;for(c in a)d=a[c],null!=d&&(b[c]=d);return b},a.prototype.isMobile=function(a){return/Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test(a)},a}(),b=this.WeakMap||(b=function(){function a(){this.keys=[],this.values=[]}return a.prototype.get=function(a){var b,c,d,e,f;for(f=this.keys,b=d=0,e=f.length;e>d;b=++d)if(c=f[b],c===a)return this.values[b]},a.prototype.set=function(a,b){var c,d,e,f,g;for(g=this.keys,c=e=0,f=g.length;f>e;c=++e)if(d=g[c],d===a)return void(this.values[c]=b);return this.keys.push(a),this.values.push(b)},a}()),this.WOW=function(){function d(a){null==a&&(a={}),this.scrollCallback=c(this.scrollCallback,this),this.scrollHandler=c(this.scrollHandler,this),this.start=c(this.start,this),this.scrolled=!0,this.config=this.util().extend(a,this.defaults),this.animationNameCache=new b}return d.prototype.defaults={boxClass:"wow",animateClass:"animated",offset:0,mobile:!0},d.prototype.init=function(){var a;return this.element=window.document.documentElement,"interactive"===(a=document.readyState)||"complete"===a?this.start():document.addEventListener("DOMContentLoaded",this.start)},d.prototype.start=function(){var a,b,c,d;if(this.boxes=this.element.getElementsByClassName(this.config.boxClass),this.boxes.length){if(this.disabled())return this.resetStyle();for(d=this.boxes,b=0,c=d.length;c>b;b++)a=d[b],this.applyStyle(a,!0);return window.addEventListener("scroll",this.scrollHandler,!1),window.addEventListener("resize",this.scrollHandler,!1),this.interval=setInterval(this.scrollCallback,50)}},d.prototype.stop=function(){return window.removeEventListener("scroll",this.scrollHandler,!1),window.removeEventListener("resize",this.scrollHandler,!1),null!=this.interval?clearInterval(this.interval):void 0},d.prototype.show=function(a){return this.applyStyle(a),a.className=""+a.className+" "+this.config.animateClass},d.prototype.applyStyle=function(a,b){var c,d,e;return d=a.getAttribute("data-wow-duration"),c=a.getAttribute("data-wow-delay"),e=a.getAttribute("data-wow-iteration"),this.animate(function(f){return function(){return f.customStyle(a,b,d,c,e)}}(this))},d.prototype.animate=function(){return"requestAnimationFrame"in window?function(a){return window.requestAnimationFrame(a)}:function(a){return a()}}(),d.prototype.resetStyle=function(){var a,b,c,d,e;for(d=this.boxes,e=[],b=0,c=d.length;c>b;b++)a=d[b],e.push(a.setAttribute("style","visibility: visible;"));return e},d.prototype.customStyle=function(a,b,c,d,e){return b&&this.cacheAnimationName(a),a.style.visibility=b?"hidden":"visible",c&&this.vendorSet(a.style,{animationDuration:c}),d&&this.vendorSet(a.style,{animationDelay:d}),e&&this.vendorSet(a.style,{animationIterationCount:e}),this.vendorSet(a.style,{animationName:b?"none":this.cachedAnimationName(a)}),a},d.prototype.vendors=["moz","webkit"],d.prototype.vendorSet=function(a,b){var c,d,e,f;f=[];for(c in b)d=b[c],a[""+c]=d,f.push(function(){var b,f,g,h;for(g=this.vendors,h=[],b=0,f=g.length;f>b;b++)e=g[b],h.push(a[""+e+c.charAt(0).toUpperCase()+c.substr(1)]=d);return h}.call(this));return f},d.prototype.vendorCSS=function(a,b){var c,d,e,f,g,h;for(d=window.getComputedStyle(a),c=d.getPropertyCSSValue(b),h=this.vendors,f=0,g=h.length;g>f;f++)e=h[f],c=c||d.getPropertyCSSValue("-"+e+"-"+b);return c},d.prototype.animationName=function(a){var 
b;try{b=this.vendorCSS(a,"animation-name").cssText}catch(c){b=window.getComputedStyle(a).getPropertyValue("animation-name")}return"none"===b?"":b},d.prototype.cacheAnimationName=function(a){return this.animationNameCache.set(a,this.animationName(a))},d.prototype.cachedAnimationName=function(a){return this.animationNameCache.get(a)},d.prototype.scrollHandler=function(){return this.scrolled=!0},d.prototype.scrollCallback=function(){var a;return this.scrolled&&(this.scrolled=!1,this.boxes=function(){var b,c,d,e;for(d=this.boxes,e=[],b=0,c=d.length;c>b;b++)a=d[b],a&&(this.isVisible(a)?this.show(a):e.push(a));return e}.call(this),!this.boxes.length)?this.stop():void 0},d.prototype.offsetTop=function(a){for(var b;void 0===a.offsetTop;)a=a.parentNode;for(b=a.offsetTop;a=a.offsetParent;)b+=a.offsetTop;return b},d.prototype.isVisible=function(a){var b,c,d,e,f;return c=a.getAttribute("data-wow-offset")||this.config.offset,f=window.pageYOffset,e=f+this.element.clientHeight-c,d=this.offsetTop(a),b=d+a.clientHeight,e>=d&&b>=f},d.prototype.util=function(){return this._util||(this._util=new a)},d.prototype.disabled=function(){return!this.config.mobile&&this.util().isMobile(navigator.userAgent)},d}()}).call(this);
\ No newline at end of file
diff --git a/spaces/nagolinc/safetyWaifu/app.py b/spaces/nagolinc/safetyWaifu/app.py
deleted file mode 100644
index a9213065820c8ff4e60167dc5a6417f019033e95..0000000000000000000000000000000000000000
--- a/spaces/nagolinc/safetyWaifu/app.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from asyncio import constants
-import gradio as gr
-import requests
-import os
-import random
-
-def desc_to_image(desc):
-
- random.seed(desc)
- #tadneSeed=random.randint(0,2**256)
- tadneSeed=random.randint(0,2**32)
- psi=0.7
-
- print("seed",tadneSeed,psi)
-
- #iface = gr.Interface.load("spaces/hysts/TADNE")
- #print("about to die",iface,dir(iface))
-
-
- #img=iface.fns[0].fn(tadneSeed,psi)
- print("loading interface")
- tadne=gr.Interface.load("spaces/hysts/TADNE")
- print("calling interface")
- img=tadne(tadneSeed,psi,False)
- print("got img",img)
- return img
-
-demo = gr.Blocks()
-
-with demo:
- gr.Markdown("
'
- )
-
- with gr.Row():
- desc_txt = gr.Textbox(label="description",placeholder="0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984")
- output_image = gr.Image(label="portrait",type="filepath", shape=(256,256))
-
- b0 = gr.Button("Generate Waifu")
-
- b0.click(desc_to_image,desc_txt,output_image)
- #examples=examples
-
-demo.launch(enable_queue=True, debug=True)
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Entrepreneurship By Feliciano Fajardo Pdf 16.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Entrepreneurship By Feliciano Fajardo Pdf 16.md
deleted file mode 100644
index 1b405c28d9269d22961e1f5cc48bc744bb10f1fa..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Entrepreneurship By Feliciano Fajardo Pdf 16.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
Entrepreneurship by Feliciano Fajardo PDF 16: A Comprehensive Guide for Aspiring Entrepreneurs
-
Entrepreneurship is the process of creating, launching and managing a new business venture. It involves identifying opportunities, developing strategies, acquiring resources and overcoming challenges. Entrepreneurship can be rewarding, but also risky and demanding.
If you are interested in learning more about entrepreneurship, you may want to check out the book Entrepreneurship by Feliciano Fajardo PDF 16. This book is a comprehensive guide for aspiring entrepreneurs who want to start their own business or improve their existing one. It covers topics such as:
-
-
The nature and importance of entrepreneurship
-
The entrepreneurial mindset and skills
-
The entrepreneurial process and stages
-
The types and forms of entrepreneurship
-
The sources and methods of financing entrepreneurship
-
The legal and ethical aspects of entrepreneurship
-
The challenges and opportunities of entrepreneurship in the global context
-
-
The book is written in a clear and concise manner, with examples, case studies, exercises and self-assessment tools. It is suitable for students, teachers, practitioners and anyone who wants to learn more about entrepreneurship.
-
You can download the book Entrepreneurship by Feliciano Fajardo PDF 16 for free from the link below. You will need a PDF reader to open the file. The book is also available in hard copy from various online and offline retailers.
In this article, we will discuss some of the key concepts and principles of entrepreneurship that are covered in the book Entrepreneurship by Feliciano Fajardo PDF 16. We will also provide some tips and advice on how to apply them in your own entrepreneurial journey.
-
What is Entrepreneurship?
-
Entrepreneurship is the process of creating, launching and managing a new business venture. It involves identifying opportunities, developing strategies, acquiring resources and overcoming challenges. Entrepreneurship can be rewarding, but also risky and demanding.
-
Entrepreneurship can be classified into different types and forms, depending on the nature, scope and purpose of the business venture. Some of the common types and forms of entrepreneurship are:
-
-
Innovation entrepreneurship: This involves creating new products, services or processes that meet the needs or wants of customers or solve existing problems.
-
Opportunity entrepreneurship: This involves exploiting existing opportunities in the market or environment that are not yet fully utilized or satisfied by competitors.
-
Necessity entrepreneurship: This involves starting a business out of necessity or lack of alternatives, such as unemployment, poverty or discrimination.
-
Social entrepreneurship: This involves creating a business that has a social or environmental mission or impact, such as addressing social problems, improving the quality of life or protecting the environment.
-
Corporate entrepreneurship: This involves creating a new business within an existing organization or corporation, such as developing new products, markets or ventures.
-
Franchise entrepreneurship: This involves acquiring the rights to use an established brand name, product or service from another business entity, such as a franchisor.
-
-
Entrepreneurship can also be classified into different stages, depending on the level of development and growth of the business venture. Some of the common stages of entrepreneurship are:
-
-
Idea stage: This involves generating and evaluating ideas for a potential business venture.
-
Planning stage: This involves developing a business plan that outlines the goals, strategies, resources and actions for the business venture.
-
Startup stage: This involves launching and establishing the business venture in the market or environment.
-
Growth stage: This involves expanding and improving the business venture in terms of sales, customers, products, services or markets.
-
Maturity stage: This involves stabilizing and optimizing the business venture in terms of profitability, efficiency and sustainability.
-
Exit stage: This involves exiting or terminating the business venture due to various reasons, such as selling, merging, closing or retiring.
-
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lennar Digital Sylenth1 V.2.21 X64 X32 Utorrent.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lennar Digital Sylenth1 V.2.21 X64 X32 Utorrent.md
deleted file mode 100644
index dc7ae3ef63be9f9256d25c184fc2aea9f8f4020b..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Lennar Digital Sylenth1 V.2.21 X64 X32 Utorrent.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Lennar Digital Sylenth1 v.2.21 x64 x32 utorrent: A Review
If you are looking for a powerful, versatile, and easy-to-use software synthesizer, you might have heard of Sylenth1. This virtual analog VSTi synthesizer is one of the most popular and widely used plugins in the music production industry. It has been praised for its incredible sound quality, rich features, and low CPU usage.
But what exactly is Sylenth1 and what makes it so special? How can you download and install it on your Windows computer? What are some of the alternatives and competitors that you can compare it to? And what do users and experts think about it?
-
In this article, we will answer all these questions and more. We will give you an overview of Sylenth1's features and benefits, a step-by-step installation guide, a comparison with other similar software synthesizers, and a summary of reviews and ratings from various sources. By the end of this article, you will have a clear idea of whether Sylenth1 is the right plugin for you or not.
-
Features and benefits
-
Sylenth1 is a virtual analog VSTi synthesizer that takes the definitions of quality and performance to a higher level. It was built from a producer's point of view, with the aim of producing superior quality sound and music. It was also built to perform well, using only minimal amounts of CPU resources.
-
-
Sylenth1 has four alias-free unison oscillators, which generate analog shaped waveforms. Each oscillator can produce 8 unison voices in full stereo, adding up to a total of 32 voices per note. The oscillators can perform extremely well in both low and high frequency regions, without losing their sharpness, liveliness, or character.
-
Sylenth1 also has two state-of-the-art filter sections, each consisting of four filter stages with nonlinear saturation. These filters can emulate the warmth and drive of a real analog filter, with resonance control that can go beyond self-oscillation. The filters can also scream, thanks to the drive control that adds distortion to the signal.
-
Sylenth1 offers many modulation options to sculpt the sound any way you like. There are two ADSR envelopes and two LFOs that can modulate a whole set of different parameters. There are also two amplitude envelopes, velocity, keyboard track, and modulation wheel that can be used as modulation sources.
-
The final part of Sylenth1 is the master effects section, which consists of seven professional quality sound effects and an arpeggiator. The effects include distortion, phaser, chorus/flanger, equalizer, delay, reverb, and compressor. The arpeggiator has 10 different melodic modes, a built-in step sequencer, adjustable pitch, velocity, and hold settings.
-
Some of the features and benefits of Sylenth1 are:
-
-
It has an incredible sound quality that rivals hardware synths.
-
It has a rich feature set that covers all kinds of sounds, from basses to bells.
-
It has a low CPU usage that allows you to use multiple instances without slowing down your computer.
-
It has a simple and intuitive interface that makes it easy to use.
-
It has a reasonable price that offers great value for money.
-
-
Installation guide
-
If you want to try Sylenth1 out, you can download a free demo version from the official website of Lennar Digital. The demo version is fully functional, except that it will go silent every 15 minutes and some presets are disabled. To download the demo version, you need to fill out a form with your name and email address, and then you will receive a download link in your inbox.
-
If you want to buy the full version of Sylenth1, you can also do it from the official website of Lennar Digital. The full version costs €139 (approximately $165) and it comes with a lifetime free update guarantee. You can pay with PayPal or credit card, and you will receive a license code and a download link in your email.
-
Once you have downloaded the Sylenth1 installer, you need to run it on your Windows computer. The installer is compatible with both 32-bit and 64-bit systems, and it supports VST and AAX formats. The installation process is very simple and straightforward, and it will guide you through the steps. You just need to agree to the terms and conditions, choose the destination folder, select the plugin formats, and click on install.
-
After the installation is complete, you need to activate Sylenth1 with your license code. You can do this by opening Sylenth1 in your DAW (digital audio workstation) of choice, clicking on the menu button on the top right corner of the plugin window, and selecting "activate". Then, you need to enter your license code and click on "activate". You will see a confirmation message that says "Sylenth1 has been activated successfully".
-
Now you are ready to use Sylenth1 and enjoy its amazing sounds and features. You can browse through the presets by clicking on the arrows on the top left corner of the plugin window, or by using the preset browser on the bottom right corner. You can also tweak the parameters and create your own sounds by using the knobs, sliders, buttons, and menus on the plugin interface.
-
Alternatives and competitors
-
Sylenth1 is not the only software synthesizer that can produce high-quality analog sounds. There are many other alternatives and competitors that you can compare it to. Some of them are:
-
-
Massive: This is another popular and widely used software synthesizer that was developed by Native Instruments. It is a wavetable synthesizer that can generate complex and dynamic sounds with its flexible modulation options and effects. It has three oscillators, two filters, four envelopes, four LFOs, a noise generator, a feedback loop, an insert effect slot, a master effect slot, and an arpeggiator. It also has over 1300 presets that cover various genres and styles.
-
Serum: This is a software synthesizer that was created by Xfer Records. It is also a wavetable synthesizer that can create rich and detailed sounds with its advanced wavetable editor and modulation system. It has two oscillators, two sub-oscillators, two filters, four envelopes, four LFOs, a noise generator, a distortion module, an effects rack, and an arpeggiator. It also has over 450 presets that range from basses to pads.
-
Spire: This is a software synthesizer that was developed by Reveal Sound. It is a hybrid synthesizer that combines analog modeling and digital synthesis techniques. It has four multimode oscillators, two multimode filters, four envelopes, four LFOs, two step sequencers, an arpeggiator/sequencer, an effects processor, and an equalizer. It also has over 800 presets that include leads, plucks, drums, FXs, and more.
-
-
Besides these, there are many other software synthesizers that offer different features, sounds, and styles. Some of them are:
-
-
Arturia Pigments: This is a software synthesizer that was designed by Arturia. It is a hybrid synthesizer that combines wavetable, virtual analog, granular, and sampling synthesis methods. It has two sound engines, three filters, three envelopes, three LFOs, three function generators, a modulation matrix, a sequencer, an arpeggiator, and an effects section. It also has over 600 presets that span various genres and moods.
-
u-he Diva: This is a software synthesizer that was created by u-he. It is a virtual analog synthesizer that emulates the sound and behavior of various classic hardware synths. It has five oscillator models, five filter models, five envelope models, two LFO models, a modulation matrix, an effects section, and an arpeggiator. It also has over 1200 presets that cover a wide range of sounds and styles.
-
Omnisphere: This is a software synthesizer that was developed by Spectrasonics. It is a powerful and versatile synthesizer that can create any kind of sound imaginable. It has four layers of synthesis, each with its own oscillator, filter, envelope, LFO, mod matrix, FX rack, and arpeggiator. It also has over 14,000 sounds that include samples of acoustic instruments, electronic sounds, and exotic sources.
-
-
These are some of the other software synthesizers that you can explore and experiment with. Each one of them has its own strengths and weaknesses, and it ultimately depends on your personal preference and taste which one you choose.
-
Reviews and ratings
-
Sylenth1 has received many positive reviews and ratings from users and experts alike. It has been praised for its sound quality, features, performance, usability, and value. Here are some of the reviews and ratings from various sources:
-
| Source | Rating | Review |
| --- | --- | --- |
| MusicRadar | 5/5 stars | "Sylenth1 is one of the most widely used soft synths on the market today - and with good reason. Its excellent sound quality and easy-to-use interface have won over many fans since its release in 2007. ... Sylenth1 is a synth that can do it all - from huge leads to deep basses to complex pads to delicate plucks." |
| Plugin Boutique | 4.8/5 stars | "Sylenth1 is a classic synth that has stood the test of time. It sounds amazing and is very easy to use. It has a huge library of presets that cover all kinds of genres and styles. It is also very CPU-friendly and stable." |
| Amazon | 4.6/5 stars | "Sylenth1 is a great synth for beginners and advanced users alike. It has a simple layout that makes it easy to navigate and tweak the parameters. It also has a great sound quality that rivals hardware synths. It is definitely worth the money." |
| KVR Audio | 4.5/5 stars | "Sylenth1 is a synth that I use almost every day. It has a warm and rich sound that fits well in any mix. It has a lot of features and modulation options that allow me to create unique and expressive sounds. It is also very light on CPU and reliable." |
| Trustpilot | 4.4/5 stars | "Sylenth1 is a fantastic synth that I highly recommend to anyone who loves music production. It has an incredible sound quality that can compete with any hardware synth. It has a huge variety of presets that can inspire you to create your own sounds. It is also very easy to use and install." |
However, there are also some negative reviews and ratings that point out some of the drawbacks and limitations of Sylenth1. Some of them are:
-
-
It has a dated and boring interface that does not match the modern standards of design and aesthetics.
-
It has a lack of updates and new features that make it seem stagnant and outdated compared to other software synthesizers.
-
It has a limited sound palette that does not offer much diversity and originality in terms of sound design and synthesis.
-
It has a poor customer service and support that does not respond quickly or effectively to the issues and queries of the users.
-
It has a high price tag that does not justify the value and quality of the product.
-
-
This is some of the negative feedback from different sources. However, it is not necessarily representative of the majority of the users and experts who have used and reviewed Sylenth1, and it reflects subjective, personal opinions and preferences that vary from person to person.
-
Conclusion
-
Sylenth1 is a virtual analog VSTi synthesizer that has been around for more than a decade. It is one of the most popular and widely used plugins in the music production industry, thanks to its incredible sound quality, rich features, and low CPU usage. It is also easy to use, with a simple and intuitive interface that makes it accessible to beginners and advanced users alike.
-
Sylenth1 can produce a wide range of sounds, from basses to bells, from leads to pads, from plucks to FXs. It can also emulate the warmth and drive of a real analog filter, with resonance control that can go beyond self-oscillation. It can also modulate various parameters with its envelopes, LFOs, step sequencers, and modulation wheel. It can also add professional quality effects and arpeggios to enhance the sound further.
-
Sylenth1 is not without its flaws, though. It has a dated and boring interface that does not match the modern standards of design and aesthetics. It has a lack of updates and new features that make it seem stagnant and outdated compared to other software synthesizers. It has a limited sound palette that does not offer much diversity and originality in terms of sound design and synthesis. It has a poor customer service and support that does not respond quickly or effectively to the issues and queries of the users. It has a high price tag that does not justify the value and quality of the product.
-
However, these drawbacks do not outweigh the benefits and advantages of Sylenth1. It is still a great synth that can deliver amazing results in any genre and style of music. It is still a synth that can compete with any hardware synth in terms of sound quality and performance. It is still a synth that can satisfy any music producer who loves analog sounds and synthesis.
-
If you are interested in Sylenth1, you can download a free demo version from the official website of Lennar Digital. You can also buy the full version for €139 (approximately $165) from the same website. You will receive a lifetime free update guarantee, as well as access to over 2500 presets that are included in the package.
-
Sylenth1 is a synth that you should definitely try out if you are looking for a powerful, versatile, and easy-to-use software synthesizer. You will not regret it.
-
FAQs
-
Here are some frequently asked questions and answers about Sylenth1:
-
-
What are the system requirements for Sylenth1?
-
Sylenth1 works on Windows XP (32/64 bit), Vista (32/64 bit), Windows 7 (32/64 bit), Windows 8 (32/64 bit), Windows 10 (32/64 bit). It requires an Intel Pentium III or AMD Athlon processor or higher (SSE capable), 128 MB RAM or more, VSTi or AAX compatible host software.
-
How many instances of Sylenth1 can I run on my computer?
-
This depends on your CPU speed, RAM size, buffer size, sample rate, bit depth, number of voices used per instance, number of effects used per instance, etc. However, Sylenth1 is very CPU-friendly and you can run multiple instances without slowing down your computer significantly.
-
How can I get more presets for Sylenth1?
-
You can get more presets by downloading or buying preset banks from third-party sound designers and loading them through the preset browser. You can also exchange presets with other users and producers who use Sylenth1.
-
How can I update Sylenth1 to the latest version?
-
You can update Sylenth1 to the latest version by downloading the latest installer from the official website of Lennar Digital. You can also check for updates by clicking on the menu button on the top right corner of the plugin window, and selecting "check for updates". You will be notified if there is a new version available, and you can download and install it from there.
-
How can I contact Lennar Digital for support or feedback?
-
You can contact Lennar Digital for support or feedback by sending an email to support@lennardigital.com. You can also visit their website and fill out a contact form with your name, email address, subject, and message. You can also follow them on Facebook, Twitter, YouTube, and Instagram for news and updates.
-
-
\ No newline at end of file
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Motores Macmillan Pdf Download _TOP_.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Motores Macmillan Pdf Download _TOP_.md
deleted file mode 100644
index 938fa7afbe528d1a3c5e87771060a756274249c3..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Motores Macmillan Pdf Download _TOP_.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
This book is part of Macmillan Education's intermediate vocational training collection for vehicle electromechanics; Macmillan Education is a leading publisher in the education sector. The Motores Macmillan PDF book is written by Secundino Escudero, Jesús González, Juan Luis Rivas and Alejandro Suárez, professionals with extensive experience in the field of automotive mechanics.
The Motores Macmillan PDF book consists of seven teaching units, ranging from an introduction to engines and their classification to the cylinder head and its components. Each unit includes clear, detailed explanations, illustrations, diagrams, tables, examples, activities and self-assessments to make the concepts easier to learn and understand.
Rotary engines: engines in which the moving parts generate a continuous circular motion. Examples include turbine engines and Wankel engines, which have a triangular rotor spinning inside an oval chamber.
-
-
\ No newline at end of file
diff --git a/spaces/niizam/sovits-models/vdecoder/__init__.py b/spaces/niizam/sovits-models/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/pascal_voc.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/pascal_voc.py
deleted file mode 100644
index 46f8536ad26f4d47a53a95bed62548d8aff5047e..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/data/datasets/pascal_voc.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import numpy as np
-import os
-import xml.etree.ElementTree as ET
-from typing import List, Tuple, Union
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.structures import BoxMode
-from detectron2.utils.file_io import PathManager
-
-__all__ = ["load_voc_instances", "register_pascal_voc"]
-
-
-# fmt: off
-CLASS_NAMES = (
- "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
- "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
- "pottedplant", "sheep", "sofa", "train", "tvmonitor"
-)
-# fmt: on
-
-
-def load_voc_instances(dirname: str, split: str, class_names: Union[List[str], Tuple[str, ...]]):
- """
- Load Pascal VOC detection annotations to Detectron2 format.
-
- Args:
- dirname: Contain "Annotations", "ImageSets", "JPEGImages"
- split (str): one of "train", "test", "val", "trainval"
- class_names: list or tuple of class names
- """
- with PathManager.open(os.path.join(dirname, "ImageSets", "Main", split + ".txt")) as f:
- fileids = np.loadtxt(f, dtype=str)
-
- # Needs to read many small annotation files. Makes sense at local
- annotation_dirname = PathManager.get_local_path(os.path.join(dirname, "Annotations/"))
- dicts = []
- for fileid in fileids:
- anno_file = os.path.join(annotation_dirname, fileid + ".xml")
- jpeg_file = os.path.join(dirname, "JPEGImages", fileid + ".jpg")
-
- with PathManager.open(anno_file) as f:
- tree = ET.parse(f)
-
- r = {
- "file_name": jpeg_file,
- "image_id": fileid,
- "height": int(tree.findall("./size/height")[0].text),
- "width": int(tree.findall("./size/width")[0].text),
- }
- instances = []
-
- for obj in tree.findall("object"):
- cls = obj.find("name").text
- # We include "difficult" samples in training.
- # Based on limited experiments, they don't hurt accuracy.
- # difficult = int(obj.find("difficult").text)
- # if difficult == 1:
- # continue
- bbox = obj.find("bndbox")
- bbox = [float(bbox.find(x).text) for x in ["xmin", "ymin", "xmax", "ymax"]]
- # Original annotations are integers in the range [1, W or H]
- # Assuming they mean 1-based pixel indices (inclusive),
- # a box with annotation (xmin=1, xmax=W) covers the whole image.
- # In coordinate space this is represented by (xmin=0, xmax=W)
- bbox[0] -= 1.0
- bbox[1] -= 1.0
- instances.append(
- {"category_id": class_names.index(cls), "bbox": bbox, "bbox_mode": BoxMode.XYXY_ABS}
- )
- r["annotations"] = instances
- dicts.append(r)
- return dicts
-
-
-def register_pascal_voc(name, dirname, split, year, class_names=CLASS_NAMES):
- DatasetCatalog.register(name, lambda: load_voc_instances(dirname, split, class_names))
- MetadataCatalog.get(name).set(
- thing_classes=list(class_names), dirname=dirname, year=year, split=split
- )
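
As a quick usage sketch (the dataset name and directory below are hypothetical placeholders, not paths from this repo), the `register_pascal_voc` helper above can be exercised like this:

```python
from detectron2.data import DatasetCatalog

# hypothetical name/path for illustration only
register_pascal_voc("my_voc_2007_trainval", dirname="datasets/VOC2007",
                    split="trainval", year=2007)

dicts = DatasetCatalog.get("my_voc_2007_trainval")  # triggers load_voc_instances
print(len(dicts), dicts[0]["file_name"], len(dicts[0]["annotations"]))
```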
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/RELEASE_2020_04.md b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/RELEASE_2020_04.md
deleted file mode 100644
index 2fab6ae78e887c630ad94e71aa6e946115c61593..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/doc/RELEASE_2020_04.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# DensePose Confidence Estimation and Model Zoo Improvements
-
-* [DensePose models with confidence estimation](doc/DENSEPOSE_IUV.md#ModelZooConfidence)
-* [Panoptic FPN and DeepLabV3 head implementation](doc/DENSEPOSE_IUV.md#ModelZooDeepLabV3)
-* Test time augmentations for DensePose
-* New evaluation metric (GPSm) that yields more reliable scores
diff --git a/spaces/nomic-ai/Dahoas_full-hh-rlhf/index.html b/spaces/nomic-ai/Dahoas_full-hh-rlhf/index.html
deleted file mode 100644
index 9e565214c8f71cf0766a49dd7a9ca6deb587621d..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/Dahoas_full-hh-rlhf/index.html
+++ /dev/null
@@ -1,42 +0,0 @@
-
-
-
- Dahoas/full-hh-rlhf
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/nomic-ai/wikisql/style.css b/spaces/nomic-ai/wikisql/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/nomic-ai/wikisql/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/nt3awnou/embed-rescue-map/src/utils.py b/spaces/nt3awnou/embed-rescue-map/src/utils.py
deleted file mode 100644
index 7feeda618673f272c8935c3252402b62928cd77c..0000000000000000000000000000000000000000
--- a/spaces/nt3awnou/embed-rescue-map/src/utils.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import folium
-import pandas as pd
-from folium import plugins
-from src.map_utils import legend_macro
-
-
-EPICENTER_LOCATION = [31.12210171476489, -8.42945837915193]
-BORDER_COLOR = "black"
-
-def parse_gg_sheet(url):
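-    # Google Sheets "edit" URLs are rewritten to their CSV export form,
-    # e.g. .../edit#gid=0 -> .../export?format=csv&gid=0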
- url = url.replace("edit#gid=", "export?format=csv&gid=")
- df = pd.read_csv(url, on_bad_lines="warn")
- return df
-
-
-def is_request_in_list(request, selection_list):
- if isinstance(request, float): # Check if the input is a float (like NaN)
- return False
- if "," in request:
- all_requests = [r.strip() for r in request.split(",")]
- else:
- all_requests = [request]
- return any([r in selection_list for r in all_requests])
-
-
-def marker_request(request):
- # in case of multiple requests we use the first one for the marker's icon
- # requests are already sorted by priority from the form
- try:
- displayed_request = request.split(',')[0]
- except:
- displayed_request = request
- return displayed_request
-
-
-def add_latlng_col(df, process_column):
- """Add a latlng column to the dataframe"""
- df = df.assign(latlng=df.iloc[:, process_column].apply(parse_latlng))
- return df
-
-# parse latlng (column 4) to [lat, lng]
-import re
-def parse_latlng(latlng):
- if pd.isna(latlng):
- return None
- # lat, lng = latlng.split(",")
- # return [float(lat), float(lng)]
-
- try:
- # check if it matches (30.9529832, -7.1010705) or (30.9529832,-7.1010705)
- if re.match(r"\(\d+\.\d+,\s?-\d+\.\d+\)", latlng):
- lat, lng = latlng[1:-1].split(",")
- return [float(lat), float(lng)]
- # check of it matches 30.9529832, -7.1010705 or 30.9529832,-7.1010705
- elif re.match(r"\d+\.\d+,\s?-\d+\.\d+", latlng):
- lat, lng = latlng.split(",")
- return [float(lat), float(lng)]
- # check if it matches 30,9529832, -7,1010705 or 30,9529832,-7,1010705, match1=30,9529832 and match2=-7,1010705
- elif re.match(r"\d+,\d+,\s?-\d+,\d+", latlng):
- d1, d2, d3, d4 = latlng.split(",")
- return [float(".".join([d1, d2])), float(".".join([d3, d4]))]
- except Exception as e:
- print(f"Error parsing latlng: {latlng} Reason: {e}")
- return None
- print(f"Error parsing latlng: {latlng}")
- return None
-
-def add_epicentre_to_map(fg):
- # Removed the spinner to not confuse the users as the map is already loaded
- icon_epicentre = folium.plugins.BeautifyIcon(
- icon='star',
- border_color='#b3334f',
- background_color='#b3334f',
- text_color='white'
- )
-
- fg.add_child(folium.Marker(location=EPICENTER_LOCATION,
- # popup="Epicenter مركز الزلزال",
- tooltip="Epicenter مركز الزلزال",
- icon=icon_epicentre))
-
-
-
-def add_danger_distances_to_map(map_obj):
- Danger_Distances_group = folium.FeatureGroup(name='Danger distances - earthquake magnitude 7 | مسافات الخطر - قوة الزلازل 7').add_to(map_obj)
-
- zones = [
- {"radius": 100000, "fill_opacity": 0.1, "weight": 1, "fill_color": "yellow", "tooltip": "50 to 100 km - Moderate risk area | منطقة خطر معتدلة"},
- {"radius": 50000, "fill_opacity": 0.1, "weight": 1, "fill_color": "orange", "tooltip": "30 to 50 km - High risk zone | منطقة عالية المخاطر"},
- {"radius": 30000, "fill_opacity": 0.2, "weight": 1, "fill_color": "#FF0000", "tooltip": "10 to 30 km - Very high risk zone | منطقة شديدة الخطورة"},
- {"radius": 10000, "fill_opacity": 0.2, "weight": 0.2, "fill_color": "#8B0000", "tooltip": "0 to 10km - direct impact zone | منطقة التأثير المباشر"}
- ]
-
- for zone in zones:
- folium.Circle(
- location=EPICENTER_LOCATION,
- radius=zone["radius"],
- color=BORDER_COLOR,
- weight=zone["weight"],
- fill_opacity=zone["fill_opacity"],
- opacity=zone["fill_opacity"], # Assuming border opacity should match fill_opacity
- fill_color=zone["fill_color"],
- # tooltip=zone["tooltip"],
- ).add_to(Danger_Distances_group)
-
-
-def init_map():
- m = folium.Map(
- location=[31.228674, -7.992047],
- zoom_start=8.5,
- min_zoom=8.5,
- max_lat=35.628674,
- min_lat=29.628674,
- max_lon=-4.992047,
- min_lon=-10.992047,
- max_bounds=True,
- )
- # Add a search bar to the map
- geocoder = plugins.Geocoder(
- collapsed=False,
- position="topright",
- placeholder="Search | البحث",
- )
- m.add_child(geocoder)
-
- # Add Fullscreen button to the map
- fullscreen = plugins.Fullscreen(
- position="topright",
- title="Expand me | تكبير الخريطة",
- title_cancel="Exit me | تصغير الخريطة",
- force_separate_button=True,
- )
- m.add_child(fullscreen)
-
-    # Base map: Maroc Map raster tiles (not a Mapbox satellite layer)
- tileurl = "https://marocmap.ikiker.com/maroc/{z}/{x}/{y}.png"
- folium.TileLayer(
- tiles=tileurl,
- attr="Maroc Map",
- name="Maroc Map",
- overlay=False,
- control=False,
- ).add_to(m)
-
- # Add danger zones
- add_epicentre_to_map(m)
- add_danger_distances_to_map(m)
-
- # Add a LayerControl to the map to toggle between layers (Satellite View and Default One)
- folium.LayerControl().add_to(m)
-
- # Add detect location button
- plugins.LocateControl(
- position="topleft",
- drawCircle=False,
- flyTo=True,
- strings={"title": "My location | موقعي", "popup": "My location | موقعي"},
- ).add_to(m)
-
- # Macro to add legend
- m.get_root().add_child(legend_macro)
- return m
diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates_generic.h b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates_generic.h
deleted file mode 100644
index 691efb1f822e7f1e4862a99ef5ccb495fbc000d8..0000000000000000000000000000000000000000
--- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/compute/gru_gates_generic.h
+++ /dev/null
@@ -1,97 +0,0 @@
-/*
- * Copyright 2021 Google LLC
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#ifndef LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_GENERIC_H_
-#define LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_GENERIC_H_
-
-#include "sparse_matmul/compute/ar_inputs.h"
-#include "sparse_matmul/numerics/fast_transcendentals.h"
-
-namespace csrblocksparse {
-
-constexpr int kGenericSIMDWidth = 4;
-
-// TODO(b/188702959): Rename arguments to match gru_gates.h.
-template <typename GRUStateType, typename GRUMatMulOutType, typename QR_W_Type,
-          typename SampleType, ARInputsMode kInputsMode, bool SplitGates>
-void GoThroughGates(int start, int end, const QR_W_Type* qr_ptr,
-                    const GRUMatMulOutType* gru_gates_ptr,
-                    const GRUMatMulOutType* gru_gates_other_ptr,
-                    const GRUMatMulOutType* conditioning_ptr,
-                    GRUStateType* gru_h_ptr, const QR_W_Type* w_hat,
-                    int proj_size, const SampleType* coarse_at_sminus1,
-                    const SampleType* fine_at_sminus1,
-                    const SampleType* coarse_at_s = nullptr) {
-  float qr_cell = 0.0f, reset, update, cell;
-  for (int i = start; i < end; ++i) {
-    if (kInputsMode == ARInputsMode::k0ARInputs) {
-      reset = static_cast<float>(gru_gates_ptr[i]);
-      update = static_cast<float>(gru_gates_ptr[proj_size + i]);
-    } else {
-      float qr_c_reset = static_cast<float>(qr_ptr[2 * i + 0]);
-      float qr_f_reset = static_cast<float>(qr_ptr[2 * i + 1]);
-      float qr_c_update = static_cast<float>(qr_ptr[2 * proj_size + 2 * i + 0]);
-      float qr_f_update = static_cast<float>(qr_ptr[2 * proj_size + 2 * i + 1]);
-      float qr_c_cell = static_cast<float>(qr_ptr[4 * proj_size + 2 * i + 0]);
-      float qr_f_cell = static_cast<float>(qr_ptr[4 * proj_size + 2 * i + 1]);
-      float w_hat_i_reset = 0.0f;
-      float w_hat_i_update = 0.0f;
-      float w_hat_i_cell = 0.0f;
-      if (kInputsMode == ARInputsMode::k3ARInputs) {
-        w_hat_i_reset = static_cast<float>(w_hat[i]);
-        w_hat_i_update = static_cast<float>(w_hat[proj_size + i]);
-        w_hat_i_cell = static_cast<float>(w_hat[2 * proj_size + i]);
-      }
-      float coarse = static_cast<float>(coarse_at_sminus1[0]);
-      float fine = static_cast<float>(fine_at_sminus1[0]);
-      reset = qr_c_reset * coarse + qr_f_reset * fine;
-      update = qr_c_update * coarse + qr_f_update * fine;
-      qr_cell = qr_c_cell * coarse + qr_f_cell * fine;
-      if (kInputsMode == ARInputsMode::k3ARInputs) {
-        float coarse = static_cast<float>(coarse_at_s[0]);
-        reset += w_hat_i_reset * coarse;
-        update += w_hat_i_update * coarse;
-        qr_cell += w_hat_i_cell * coarse;
-      }
-      reset += static_cast<float>(gru_gates_ptr[i]);
-      update += static_cast<float>(gru_gates_ptr[proj_size + i]);
-    }
-    cell = static_cast<float>(gru_gates_ptr[2 * proj_size + i]);
-    if (SplitGates) {
-      reset += static_cast<float>(gru_gates_other_ptr[i]);
-      update += static_cast<float>(gru_gates_other_ptr[proj_size + i]);
-      cell += static_cast<float>(gru_gates_other_ptr[2 * proj_size + i]);
-    }
-    float reset_conditioning = static_cast<float>(conditioning_ptr[i]);
-    float update_conditioning =
-        static_cast<float>(conditioning_ptr[proj_size + i]);
-    float cell_conditioning =
-        static_cast<float>(conditioning_ptr[2 * proj_size + i]);
-    reset = fast_sigmoid(reset + reset_conditioning);
-    update = fast_sigmoid(update + update_conditioning);
-    float hbar = fast_tanh(qr_cell + reset * cell + cell_conditioning);
-    int h_index = i;
-    float prev_h = static_cast<float>(gru_h_ptr[h_index]);
-    float diff = prev_h - hbar;
-    float new_h = hbar + diff * update;
-    gru_h_ptr[h_index] = static_cast<GRUStateType>(new_h);
-  }
-}
-
-} // namespace csrblocksparse
-
-#endif // LYRA_CODEC_SPARSE_MATMUL_COMPUTE_GRU_GATES_GENERIC_H_
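
For reference, the per-element arithmetic inside `GoThroughGates` is the standard GRU update. Written out in my own notation, with a_r, a_u, a_h standing for the recurrent matmul outputs, c_r, c_u, c_h for the conditioning terms, and q for the autoregressive `qr_cell` contribution:

```latex
r_t = \sigma(a_r + c_r), \qquad
u_t = \sigma(a_u + c_u), \qquad
\bar{h}_t = \tanh(q + r_t \, a_h + c_h), \qquad
h_t = u_t \, h_{t-1} + (1 - u_t) \, \bar{h}_t
```

The code evaluates the last line as `new_h = hbar + (prev_h - hbar) * update`, which is the same expression rearranged.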
diff --git a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/log.py b/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/log.py
deleted file mode 100644
index 4485cc9215ecf5c03f2e3c0998a0dd9df9bb61fd..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/log.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import logging
-
-class LogSystem(object):
- handlerList = []
- showOnCmd = True
- loggingLevel = logging.INFO
- loggingFile = None
- def __init__(self):
- self.logger = logging.getLogger('itchat')
- self.logger.addHandler(logging.NullHandler())
- self.logger.setLevel(self.loggingLevel)
- self.cmdHandler = logging.StreamHandler()
- self.fileHandler = None
- self.logger.addHandler(self.cmdHandler)
- def set_logging(self, showOnCmd=True, loggingFile=None,
- loggingLevel=logging.INFO):
- if showOnCmd != self.showOnCmd:
- if showOnCmd:
- self.logger.addHandler(self.cmdHandler)
- else:
- self.logger.removeHandler(self.cmdHandler)
- self.showOnCmd = showOnCmd
- if loggingFile != self.loggingFile:
- if self.loggingFile is not None: # clear old fileHandler
- self.logger.removeHandler(self.fileHandler)
- self.fileHandler.close()
- if loggingFile is not None: # add new fileHandler
- self.fileHandler = logging.FileHandler(loggingFile)
- self.logger.addHandler(self.fileHandler)
- self.loggingFile = loggingFile
- if loggingLevel != self.loggingLevel:
- self.logger.setLevel(loggingLevel)
- self.loggingLevel = loggingLevel
-
-ls = LogSystem()
-set_logging = ls.set_logging
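
A typical (hypothetical) way this exported helper is used, assuming the vendored package is importable as `itchat`:

```python
import logging
from itchat.log import set_logging  # assumed import path for this vendored copy

# log to the console and to a file, at DEBUG verbosity
set_logging(showOnCmd=True, loggingFile="itchat.log", loggingLevel=logging.DEBUG)
```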
diff --git a/spaces/owsgfwnlgjuz/bsrgan/app.py b/spaces/owsgfwnlgjuz/bsrgan/app.py
deleted file mode 100644
index a9a7f608e38ab4da8bcaad0190ab8a47abeebc0b..0000000000000000000000000000000000000000
--- a/spaces/owsgfwnlgjuz/bsrgan/app.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import gradio as gr
-import torch
-from bsrgan import BSRGAN
-
-# Images
-torch.hub.download_url_to_file('https://raw.githubusercontent.com/kadirnar/bsrgan-pip/main/data/images/butterfly.png', 'butterfly.jpg')
-
-def bsrgan_inference(
- image: gr.inputs.Image = None,
- model_path: gr.inputs.Dropdown = 'kadirnar/bsrgan',
-):
- """
- BSRGAN inference function
- Args:
- image: Input image
- model_path: Path to the model
- Returns:
- Rendered image
- """
- device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
- model = BSRGAN(model_path, device=device, hf_model=True)
- pred = model.predict(img_path=image)
- return pred
-
-
-inputs = [
- gr.inputs.Image(type="filepath", label="Input Image"),
- gr.inputs.Dropdown(
- label="Model",
- choices=[
- "kadirnar/bsrgan",
- "kadirnar/BSRGANx2",
- "kadirnar/RRDB_PSNR_x4",
- "kadirnar/RRDB_ESRGAN_x4",
- "kadirnar/DF2K",
- "kadirnar/DPED",
- "kadirnar/DF2K_JPEG",
- ],
- default="kadirnar/bsrgan",
- ),
-]
-
-outputs = gr.outputs.Image(type="filepath", label="Output Image")
-title = "BSRGAN: Designing a Practical Degradation Model for Deep Blind Image Super-Resolution."
-description = "BSRGAN for Deep Blind Image Super-Resolution model aims to design a practical degradation model for deep blind image super-resolution by considering the deterioration of image quality over time. It uses deep learning methods to predict the deterioration of image quality and to assist in the re-creation of images at higher resolution using these predictions."
-examples = [["butterfly.jpg", "kadirnar/bsrgan"]]
-
-demo_app = gr.Interface(
- fn=bsrgan_inference,
- inputs=inputs,
- outputs=outputs,
- title=title,
- description=description,
- examples=examples,
- cache_examples=True,
- live=True,
- theme='huggingface',
-)
-demo_app.launch(debug=True, enable_queue=True)
\ No newline at end of file
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/run_onnx_controlnet.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/run_onnx_controlnet.py
deleted file mode 100644
index 6ccd7847c775c839aa174565ac1d021c867d0b79..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/community/run_onnx_controlnet.py
+++ /dev/null
@@ -1,909 +0,0 @@
-import argparse
-import inspect
-import os
-import time
-import warnings
-from typing import Any, Callable, Dict, List, Optional, Union
-
-import numpy as np
-import PIL.Image
-import torch
-from PIL import Image
-from transformers import CLIPTokenizer
-
-from diffusers import OnnxRuntimeModel, StableDiffusionImg2ImgPipeline, UniPCMultistepScheduler
-from diffusers.image_processor import VaeImageProcessor
-from diffusers.pipelines.pipeline_utils import DiffusionPipeline
-from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.schedulers import KarrasDiffusionSchedulers
-from diffusers.utils import (
- deprecate,
- logging,
- replace_example_docstring,
-)
-from diffusers.utils.torch_utils import randn_tensor
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> # !pip install opencv-python transformers accelerate
- >>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
- >>> from diffusers.utils import load_image
- >>> import numpy as np
- >>> import torch
-
- >>> import cv2
- >>> from PIL import Image
-
- >>> # download an image
- >>> image = load_image(
- ... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
- ... )
- >>> np_image = np.array(image)
-
- >>> # get canny image
- >>> np_image = cv2.Canny(np_image, 100, 200)
- >>> np_image = np_image[:, :, None]
- >>> np_image = np.concatenate([np_image, np_image, np_image], axis=2)
- >>> canny_image = Image.fromarray(np_image)
-
- >>> # load control net and stable diffusion v1-5
- >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
- >>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
- ... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
- ... )
-
- >>> # speed up diffusion process with faster scheduler and memory optimization
- >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
- >>> pipe.enable_model_cpu_offload()
-
- >>> # generate image
- >>> generator = torch.manual_seed(0)
- >>> image = pipe(
- ... "futuristic-looking woman",
- ... num_inference_steps=20,
- ... generator=generator,
- ... image=image,
- ... control_image=canny_image,
- ... ).images[0]
- ```
-"""
-
-
-def prepare_image(image):
- if isinstance(image, torch.Tensor):
- # Batch single image
- if image.ndim == 3:
- image = image.unsqueeze(0)
-
- image = image.to(dtype=torch.float32)
- else:
- # preprocess image
- if isinstance(image, (PIL.Image.Image, np.ndarray)):
- image = [image]
-
- if isinstance(image, list) and isinstance(image[0], PIL.Image.Image):
- image = [np.array(i.convert("RGB"))[None, :] for i in image]
- image = np.concatenate(image, axis=0)
- elif isinstance(image, list) and isinstance(image[0], np.ndarray):
- image = np.concatenate([i[None, :] for i in image], axis=0)
-
- image = image.transpose(0, 3, 1, 2)
- image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0
-
- return image
-
-
-class OnnxStableDiffusionControlNetImg2ImgPipeline(DiffusionPipeline):
- vae_encoder: OnnxRuntimeModel
- vae_decoder: OnnxRuntimeModel
- text_encoder: OnnxRuntimeModel
- tokenizer: CLIPTokenizer
- unet: OnnxRuntimeModel
- scheduler: KarrasDiffusionSchedulers
-
- def __init__(
- self,
- vae_encoder: OnnxRuntimeModel,
- vae_decoder: OnnxRuntimeModel,
- text_encoder: OnnxRuntimeModel,
- tokenizer: CLIPTokenizer,
- unet: OnnxRuntimeModel,
- scheduler: KarrasDiffusionSchedulers,
- ):
- super().__init__()
-
- self.register_modules(
- vae_encoder=vae_encoder,
- vae_decoder=vae_decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- )
- self.vae_scale_factor = 2 ** (4 - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True)
- self.control_image_processor = VaeImageProcessor(
- vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True, do_normalize=False
- )
-
- def _encode_prompt(
- self,
- prompt: Union[str, List[str]],
- num_images_per_prompt: Optional[int],
- do_classifier_free_guidance: bool,
- negative_prompt: Optional[str],
- prompt_embeds: Optional[np.ndarray] = None,
- negative_prompt_embeds: Optional[np.ndarray] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`):
- prompt to be encoded
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`np.ndarray`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- """
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- if prompt_embeds is None:
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="np",
- )
- text_input_ids = text_inputs.input_ids
- untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="np").input_ids
-
- if not np.array_equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(
- untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
- )
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
-
- prompt_embeds = np.repeat(prompt_embeds, num_images_per_prompt, axis=0)
-
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance and negative_prompt_embeds is None:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt] * batch_size
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- max_length = prompt_embeds.shape[1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="np",
- )
- negative_prompt_embeds = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0]
-
- if do_classifier_free_guidance:
- negative_prompt_embeds = np.repeat(negative_prompt_embeds, num_images_per_prompt, axis=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = np.concatenate([negative_prompt_embeds, prompt_embeds])
-
- return prompt_embeds
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
- def decode_latents(self, latents):
- warnings.warn(
- "The decode_latents method is deprecated and will be removed in a future version. Please"
- " use VaeImageProcessor instead",
- FutureWarning,
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents, return_dict=False)[0]
- image = (image / 2 + 0.5).clamp(0, 1)
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_inputs(
- self,
- num_controlnet,
- prompt,
- image,
- callback_steps,
- negative_prompt=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- controlnet_conditioning_scale=1.0,
- control_guidance_start=0.0,
- control_guidance_end=1.0,
- ):
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- # Check `image`
- if num_controlnet == 1:
- self.check_image(image, prompt, prompt_embeds)
- elif num_controlnet > 1:
- if not isinstance(image, list):
- raise TypeError("For multiple controlnets: `image` must be type `list`")
-
- # When `image` is a nested list:
- # (e.g. [[canny_image_1, pose_image_1], [canny_image_2, pose_image_2]])
- elif any(isinstance(i, list) for i in image):
- raise ValueError("A single batch of multiple conditionings are supported at the moment.")
- elif len(image) != num_controlnet:
- raise ValueError(
- f"For multiple controlnets: `image` must have the same length as the number of controlnets, but got {len(image)} images and {num_controlnet} ControlNets."
- )
-
- for image_ in image:
- self.check_image(image_, prompt, prompt_embeds)
- else:
- assert False
-
- # Check `controlnet_conditioning_scale`
- if num_controlnet == 1:
- if not isinstance(controlnet_conditioning_scale, float):
- raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
- elif num_controlnet > 1:
- if isinstance(controlnet_conditioning_scale, list):
- if any(isinstance(i, list) for i in controlnet_conditioning_scale):
- raise ValueError("A single batch of multiple conditionings are supported at the moment.")
- elif (
- isinstance(controlnet_conditioning_scale, list)
- and len(controlnet_conditioning_scale) != num_controlnet
- ):
- raise ValueError(
- "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have"
- " the same length as the number of controlnets"
- )
- else:
- assert False
-
- if len(control_guidance_start) != len(control_guidance_end):
- raise ValueError(
- f"`control_guidance_start` has {len(control_guidance_start)} elements, but `control_guidance_end` has {len(control_guidance_end)} elements. Make sure to provide the same number of elements to each list."
- )
-
- if num_controlnet > 1:
- if len(control_guidance_start) != num_controlnet:
- raise ValueError(
- f"`control_guidance_start`: {control_guidance_start} has {len(control_guidance_start)} elements but there are {num_controlnet} controlnets available. Make sure to provide {num_controlnet}."
- )
-
- for start, end in zip(control_guidance_start, control_guidance_end):
- if start >= end:
- raise ValueError(
- f"control guidance start: {start} cannot be larger or equal to control guidance end: {end}."
- )
- if start < 0.0:
- raise ValueError(f"control guidance start: {start} can't be smaller than 0.")
- if end > 1.0:
- raise ValueError(f"control guidance end: {end} can't be larger than 1.0.")
-
- # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.check_image
- def check_image(self, image, prompt, prompt_embeds):
- image_is_pil = isinstance(image, PIL.Image.Image)
- image_is_tensor = isinstance(image, torch.Tensor)
- image_is_np = isinstance(image, np.ndarray)
- image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image)
- image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor)
- image_is_np_list = isinstance(image, list) and isinstance(image[0], np.ndarray)
-
- if (
- not image_is_pil
- and not image_is_tensor
- and not image_is_np
- and not image_is_pil_list
- and not image_is_tensor_list
- and not image_is_np_list
- ):
- raise TypeError(
- f"image must be passed and be one of PIL image, numpy array, torch tensor, list of PIL images, list of numpy arrays or list of torch tensors, but is {type(image)}"
- )
-
- if image_is_pil:
- image_batch_size = 1
- else:
- image_batch_size = len(image)
-
- if prompt is not None and isinstance(prompt, str):
- prompt_batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- prompt_batch_size = len(prompt)
- elif prompt_embeds is not None:
- prompt_batch_size = prompt_embeds.shape[0]
-
- if image_batch_size != 1 and image_batch_size != prompt_batch_size:
- raise ValueError(
- f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
- )
-
- # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.prepare_image
- def prepare_control_image(
- self,
- image,
- width,
- height,
- batch_size,
- num_images_per_prompt,
- device,
- dtype,
- do_classifier_free_guidance=False,
- guess_mode=False,
- ):
- image = self.control_image_processor.preprocess(image, height=height, width=width).to(dtype=torch.float32)
- image_batch_size = image.shape[0]
-
- if image_batch_size == 1:
- repeat_by = batch_size
- else:
- # image batch size is the same as prompt batch size
- repeat_by = num_images_per_prompt
-
- image = image.repeat_interleave(repeat_by, dim=0)
-
- image = image.to(device=device, dtype=dtype)
-
- if do_classifier_free_guidance and not guess_mode:
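-            # Duplicate the conditioning image so the unconditional and conditional
-            # halves of the classifier-free guidance batch see the same control input.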
- image = torch.cat([image] * 2)
-
- return image
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep, 0)
- timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :]
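-        # Illustrative example (first-order scheduler): with num_inference_steps=50
-        # and strength=0.8, init_timestep = 40 and t_start = 10, so denoising runs
-        # over the last 40 timesteps and the input image is only noised up to that
-        # level before being refined.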
-
- return timesteps, num_inference_steps - t_start
-
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
- if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
- )
-
- image = image.to(device=device, dtype=dtype)
-
- batch_size = batch_size * num_images_per_prompt
-
- if image.shape[1] == 4:
- init_latents = image
-
- else:
- _image = image.cpu().detach().numpy()
- init_latents = self.vae_encoder(sample=_image)[0]
- init_latents = torch.from_numpy(init_latents).to(device=device, dtype=dtype)
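-            # 0.18215 is the Stable Diffusion v1 VAE scaling factor; it maps the
-            # encoder output into the latent range the UNet was trained on.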
- init_latents = 0.18215 * init_latents
-
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0:
- # expand init_latents for batch_size
- deprecation_message = (
- f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial"
- " images (`image`). Initial images are now duplicating to match the number of text prompts. Note"
- " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update"
- " your script to pass as many initial images as text prompts to suppress this warning."
- )
- deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False)
- additional_image_per_prompt = batch_size // init_latents.shape[0]
- init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0)
- elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
- raise ValueError(
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
- )
- else:
- init_latents = torch.cat([init_latents], dim=0)
-
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
- latents = init_latents
-
- return latents
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- num_controlnet: int,
- fp16: bool = True,
- prompt: Union[str, List[str]] = None,
- image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- np.ndarray,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- List[np.ndarray],
- ] = None,
- control_image: Union[
- torch.FloatTensor,
- PIL.Image.Image,
- np.ndarray,
- List[torch.FloatTensor],
- List[PIL.Image.Image],
- List[np.ndarray],
- ] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- strength: float = 0.8,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- controlnet_conditioning_scale: Union[float, List[float]] = 0.8,
- guess_mode: bool = False,
- control_guidance_start: Union[float, List[float]] = 0.0,
- control_guidance_end: Union[float, List[float]] = 1.0,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- instead.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,:
- `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
-                The initial image to be used as the starting point for the image generation process. Image latents can
-                also be passed as `image`; if latents are passed directly, they will not be encoded again.
- control_image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,:
- `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
-                The ControlNet input condition. ControlNet uses this input condition to generate guidance for the UNet. If
-                the type is specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can
-                also be accepted as an image. The dimensions of the output image default to `image`'s dimensions. If
- height and/or width are passed, `image` is resized according to them. If multiple ControlNets are
- specified in init, images must be passed as a list such that each element of the list can be correctly
- batched for input to a single controlnet.
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
- plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
- controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0):
- The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added
- to the residual in the original unet. If multiple ControlNets are specified in init, you can set the
- corresponding scale as a list. Note that by default, we use a smaller conditioning scale for inpainting
- than for [`~StableDiffusionControlNetPipeline.__call__`].
- guess_mode (`bool`, *optional*, defaults to `False`):
-                In this mode, the ControlNet encoder will try its best to recognize the content of the input image even if
-                you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- control_guidance_start (`float` or `List[float]`, *optional*, defaults to 0.0):
- The percentage of total steps at which the controlnet starts applying.
- control_guidance_end (`float` or `List[float]`, *optional*, defaults to 1.0):
- The percentage of total steps at which the controlnet stops applying.
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
-            [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is a list with the generated images, and the second element is a
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content, according to the `safety_checker`.
- """
- if fp16:
- torch_dtype = torch.float16
- np_dtype = np.float16
- else:
- torch_dtype = torch.float32
- np_dtype = np.float32
-
- # align format for control guidance
- if not isinstance(control_guidance_start, list) and isinstance(control_guidance_end, list):
- control_guidance_start = len(control_guidance_end) * [control_guidance_start]
- elif not isinstance(control_guidance_end, list) and isinstance(control_guidance_start, list):
- control_guidance_end = len(control_guidance_start) * [control_guidance_end]
- elif not isinstance(control_guidance_start, list) and not isinstance(control_guidance_end, list):
- mult = num_controlnet
- control_guidance_start, control_guidance_end = mult * [control_guidance_start], mult * [
- control_guidance_end
- ]
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- num_controlnet,
- prompt,
- control_image,
- callback_steps,
- negative_prompt,
- prompt_embeds,
- negative_prompt_embeds,
- controlnet_conditioning_scale,
- control_guidance_start,
- control_guidance_end,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if num_controlnet > 1 and isinstance(controlnet_conditioning_scale, float):
- controlnet_conditioning_scale = [controlnet_conditioning_scale] * num_controlnet
-
- # 3. Encode input prompt
- prompt_embeds = self._encode_prompt(
- prompt,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- )
- # 4. Prepare image
- image = self.image_processor.preprocess(image).to(dtype=torch.float32)
-
- # 5. Prepare controlnet_conditioning_image
- if num_controlnet == 1:
- control_image = self.prepare_control_image(
- image=control_image,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=torch_dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- guess_mode=guess_mode,
- )
- elif num_controlnet > 1:
- control_images = []
-
- for control_image_ in control_image:
- control_image_ = self.prepare_control_image(
- image=control_image_,
- width=width,
- height=height,
- batch_size=batch_size * num_images_per_prompt,
- num_images_per_prompt=num_images_per_prompt,
- device=device,
- dtype=torch_dtype,
- do_classifier_free_guidance=do_classifier_free_guidance,
- guess_mode=guess_mode,
- )
-
- control_images.append(control_image_)
-
- control_image = control_images
- else:
- assert False
-
- # 5. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
-
- # 6. Prepare latent variables
- latents = self.prepare_latents(
- image,
- latent_timestep,
- batch_size,
- num_images_per_prompt,
- torch_dtype,
- device,
- generator,
- )
-
- # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7.1 Create tensor stating which controlnets to keep
- controlnet_keep = []
- for i in range(len(timesteps)):
- keeps = [
- 1.0 - float(i / len(timesteps) < s or (i + 1) / len(timesteps) > e)
- for s, e in zip(control_guidance_start, control_guidance_end)
- ]
- controlnet_keep.append(keeps[0] if num_controlnet == 1 else keeps)
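-        # Illustrative example: with 20 timesteps, control_guidance_start=0.3 and
-        # control_guidance_end=0.9, keeps is 0.0 for the first 6 and the last 2
-        # steps and 1.0 in between, so the ControlNet residual is only added during
-        # the middle of the denoising schedule.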
-
- # 8. Denoising loop
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- if isinstance(controlnet_keep[i], list):
- cond_scale = [c * s for c, s in zip(controlnet_conditioning_scale, controlnet_keep[i])]
- else:
- controlnet_cond_scale = controlnet_conditioning_scale
- if isinstance(controlnet_cond_scale, list):
- controlnet_cond_scale = controlnet_cond_scale[0]
- cond_scale = controlnet_cond_scale * controlnet_keep[i]
-
- # predict the noise residual
- _latent_model_input = latent_model_input.cpu().detach().numpy()
- _prompt_embeds = np.array(prompt_embeds, dtype=np_dtype)
- _t = np.array([t.cpu().detach().numpy()], dtype=np_dtype)
-
- if num_controlnet == 1:
- control_images = np.array([control_image], dtype=np_dtype)
- else:
- control_images = []
- for _control_img in control_image:
- _control_img = _control_img.cpu().detach().numpy()
- control_images.append(_control_img)
- control_images = np.array(control_images, dtype=np_dtype)
-
- control_scales = np.array(cond_scale, dtype=np_dtype)
- control_scales = np.resize(control_scales, (num_controlnet, 1))
-
- noise_pred = self.unet(
- sample=_latent_model_input,
- timestep=_t,
- encoder_hidden_states=_prompt_embeds,
- controlnet_conds=control_images,
- conditioning_scales=control_scales,
- )[0]
- noise_pred = torch.from_numpy(noise_pred).to(device)
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
-        if output_type != "latent":
- _latents = latents.cpu().detach().numpy() / 0.18215
- _latents = np.array(_latents, dtype=np_dtype)
- image = self.vae_decoder(latent_sample=_latents)[0]
- image = torch.from_numpy(image).to(device, dtype=torch.float32)
- has_nsfw_concept = None
- else:
- image = latents
- has_nsfw_concept = None
-
- if has_nsfw_concept is None:
- do_denormalize = [True] * image.shape[0]
- else:
- do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
-
- image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
-
- if not return_dict:
- return (image, has_nsfw_concept)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--sd_model",
- type=str,
- required=True,
- help="Path to the `diffusers` checkpoint to convert (either a local directory or on the Hub).",
- )
-
- parser.add_argument(
- "--onnx_model_dir",
- type=str,
- required=True,
- help="Path to the ONNX directory",
- )
-
- parser.add_argument("--qr_img_path", type=str, required=True, help="Path to the qr code image")
-
- args = parser.parse_args()
-
- qr_image = Image.open(args.qr_img_path)
- qr_image = qr_image.resize((512, 512))
-
- # init stable diffusion pipeline
- pipeline = StableDiffusionImg2ImgPipeline.from_pretrained(args.sd_model)
- pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config)
-
- provider = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- onnx_pipeline = OnnxStableDiffusionControlNetImg2ImgPipeline(
- vae_encoder=OnnxRuntimeModel.from_pretrained(
- os.path.join(args.onnx_model_dir, "vae_encoder"), provider=provider
- ),
- vae_decoder=OnnxRuntimeModel.from_pretrained(
- os.path.join(args.onnx_model_dir, "vae_decoder"), provider=provider
- ),
- text_encoder=OnnxRuntimeModel.from_pretrained(
- os.path.join(args.onnx_model_dir, "text_encoder"), provider=provider
- ),
- tokenizer=pipeline.tokenizer,
- unet=OnnxRuntimeModel.from_pretrained(os.path.join(args.onnx_model_dir, "unet"), provider=provider),
- scheduler=pipeline.scheduler,
- )
- onnx_pipeline = onnx_pipeline.to("cuda")
-
- prompt = "a cute cat fly to the moon"
- negative_prompt = "paintings, sketches, worst quality, low quality, normal quality, lowres, normal quality, monochrome, grayscale, skin spots, acnes, skin blemishes, age spot, glans, nsfw, nipples, necklace, worst quality, low quality, watermark, username, signature, multiple breasts, lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad feet, single color, ugly, duplicate, morbid, mutilated, tranny, trans, trannsexual, hermaphrodite, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, disfigured, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, bad body perspect"
-
- for i in range(10):
- start_time = time.time()
- image = onnx_pipeline(
- num_controlnet=2,
- prompt=prompt,
- negative_prompt=negative_prompt,
- image=qr_image,
- control_image=[qr_image, qr_image],
- width=512,
- height=512,
- strength=0.75,
- num_inference_steps=20,
- num_images_per_prompt=1,
- controlnet_conditioning_scale=[0.8, 0.8],
- control_guidance_start=[0.3, 0.3],
- control_guidance_end=[0.9, 0.9],
- ).images[0]
- print(time.time() - start_time)
- image.save("output_qr_code.png")
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/instruct_pix2pix/README_sdxl.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/instruct_pix2pix/README_sdxl.md
deleted file mode 100644
index b8c2ffdc817526ca88a05f21117fff82ba31a9c0..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/instruct_pix2pix/README_sdxl.md
+++ /dev/null
@@ -1,197 +0,0 @@
-# InstructPix2Pix SDXL training example
-
-***This is based on the original InstructPix2Pix training example.***
-
-[Stable Diffusion XL](https://huggingface.co/papers/2307.01952) (or SDXL) is the latest image generation model that is tailored towards more photorealistic outputs with more detailed imagery and composition compared to previous SD models. It leverages a three times larger UNet backbone. The increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder.
-
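-For orientation, here is a minimal sketch (not part of the training script) that loads the SDXL base checkpoint and shows that it ships with two text encoders. It assumes a recent `diffusers` release with SDXL support and that you already have access to the gated weights:
-
-```python
-import torch
-from diffusers import StableDiffusionXLPipeline
-
-# Load the SDXL base pipeline; it pairs a larger UNet with two text encoders.
-pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
-)
-print(type(pipe.text_encoder).__name__, type(pipe.text_encoder_2).__name__)
-```
-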
-The `train_instruct_pix2pix_sdxl.py` script shows how to implement the training procedure and adapt it for Stable Diffusion XL.
-
-***Disclaimer: Even though `train_instruct_pix2pix_sdxl.py` implements the InstructPix2Pix
-training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix), we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.***
-
-## Running locally with PyTorch
-
-### Installing the dependencies
-
-Refer to the original InstructPix2Pix training example for installing the dependencies.
-
-You will also need to get access to SDXL by filling out the [form](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
-
-### Toy example
-
-As mentioned before, we'll use a [small toy dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) for training. The dataset
-is a smaller version of the [original dataset](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) used in the InstructPix2Pix paper.
-
-Configure environment variables such as the dataset identifier and the Stable Diffusion
-checkpoint:
-
-```bash
-export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
-export DATASET_ID="fusing/instructpix2pix-1000-samples"
-```
-
-Now, we can launch training:
-
-```bash
-accelerate launch train_instruct_pix2pix_sdxl.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --dataset_name=$DATASET_ID \
- --enable_xformers_memory_efficient_attention \
- --resolution=256 --random_flip \
- --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
- --max_train_steps=15000 \
- --checkpointing_steps=5000 --checkpoints_total_limit=1 \
- --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
- --conditioning_dropout_prob=0.05 \
- --seed=42 \
- --push_to_hub
-```
-
-Additionally, we support performing validation inference to monitor training progress
-with Weights and Biases. You can enable this feature with `report_to="wandb"`:
-
-```bash
-accelerate launch train_instruct_pix2pix_sdxl.py \
- --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
- --dataset_name=$DATASET_ID \
- --use_ema \
- --enable_xformers_memory_efficient_attention \
- --resolution=512 --random_flip \
- --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
- --max_train_steps=15000 \
- --checkpointing_steps=5000 --checkpoints_total_limit=1 \
- --learning_rate=5e-05 --lr_warmup_steps=0 \
- --conditioning_dropout_prob=0.05 \
- --seed=42 \
- --val_image_url_or_path="https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" \
- --validation_prompt="make it in japan" \
- --report_to=wandb \
- --push_to_hub
-```
-
-We recommend this type of validation as it can be useful for model debugging. Note that you need `wandb` installed to use this. You can install `wandb` by running `pip install wandb`.
-
-[Here](https://wandb.ai/sayakpaul/instruct-pix2pix/runs/ctr3kovq), you can find an example training run that includes some validation samples and the training hyperparameters.
-
-***Note: In the original paper, the authors observed that even when the model is trained with an image resolution of 256x256, it generalizes well to bigger resolutions such as 512x512. This is likely because of the larger dataset they used during training.***
-
-## Training with multiple GPUs
-
-`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
-for running distributed training with `accelerate`. Here is an example command:
-
-```bash
-accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix_sdxl.py \
- --pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0 \
- --dataset_name=$DATASET_ID \
- --use_ema \
- --enable_xformers_memory_efficient_attention \
- --resolution=512 --random_flip \
- --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
- --max_train_steps=15000 \
- --checkpointing_steps=5000 --checkpoints_total_limit=1 \
- --learning_rate=5e-05 --lr_warmup_steps=0 \
- --conditioning_dropout_prob=0.05 \
- --seed=42 \
- --val_image_url_or_path="https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" \
- --validation_prompt="make it in japan" \
- --report_to=wandb \
- --push_to_hub
-```
-
-## Inference
-
-Once training is complete, we can perform inference:
-
-```python
-import PIL
-import requests
-import torch
-from diffusers import StableDiffusionXLInstructPix2PixPipeline
-
-model_id = "your_model_id" # <- replace this
-pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-generator = torch.Generator("cuda").manual_seed(0)
-
-url = "https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg"
-
-
-def download_image(url):
- image = PIL.Image.open(requests.get(url, stream=True).raw)
- image = PIL.ImageOps.exif_transpose(image)
- image = image.convert("RGB")
- return image
-
-image = download_image(url)
-prompt = "make it Japan"
-num_inference_steps = 20
-image_guidance_scale = 1.5
-guidance_scale = 10
-
-edited_image = pipe(prompt,
- image=image,
- num_inference_steps=num_inference_steps,
- image_guidance_scale=image_guidance_scale,
- guidance_scale=guidance_scale,
- generator=generator,
-).images[0]
-edited_image.save("edited_image.png")
-```
-
-We encourage you to play with the following three parameters to control
-speed and quality during inference:
-
-* `num_inference_steps`
-* `image_guidance_scale`
-* `guidance_scale`
-
-Particularly, `image_guidance_scale` and `guidance_scale` can have a profound impact
-on the generated ("edited") image (see [here](https://twitter.com/RisingSayak/status/1628392199196151808?s=20) for an example).
-
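-As a starting point, here is a small sketch of such a sweep (the values are arbitrary, and it reuses `pipe` and `image` from the inference snippet above):
-
-```python
-# Sweep both guidance knobs and save each edit for side-by-side comparison.
-for image_guidance_scale in (1.0, 1.5, 2.0):
-    for guidance_scale in (5.0, 7.5, 10.0):
-        edited = pipe(
-            "make it Japan",
-            image=image,
-            num_inference_steps=20,
-            image_guidance_scale=image_guidance_scale,
-            guidance_scale=guidance_scale,
-            generator=torch.Generator("cuda").manual_seed(0),
-        ).images[0]
-        edited.save(f"edited_ig{image_guidance_scale}_g{guidance_scale}.png")
-```
-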
-If you're looking for some interesting ways to use the InstructPix2Pix training methodology, we welcome you to check out this blog post: [Instruction-tuning Stable Diffusion with InstructPix2Pix](https://huggingface.co/blog/instruction-tuning-sd).
-
-## Compare between SD and SDXL
-
-We aim to understand the differences resulting from the use of SD-1.5 and SDXL-0.9 as pretrained models. To achieve this, we trained on the [small toy dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) using both of these pretrained models. The training script is as follows:
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5" or "stabilityai/stable-diffusion-xl-base-0.9"
-export DATASET_ID="fusing/instructpix2pix-1000-samples"
-
-accelerate launch train_instruct_pix2pix.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --dataset_name=$DATASET_ID \
- --use_ema \
- --enable_xformers_memory_efficient_attention \
- --resolution=512 --random_flip \
- --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
- --max_train_steps=15000 \
- --checkpointing_steps=5000 --checkpoints_total_limit=1 \
- --learning_rate=5e-05 --lr_warmup_steps=0 \
- --conditioning_dropout_prob=0.05 \
- --seed=42 \
- --val_image_url="https://datasets-server.huggingface.co/assets/fusing/instructpix2pix-1000-samples/--/fusing--instructpix2pix-1000-samples/train/23/input_image/image.jpg" \
- --validation_prompt="make it in Japan" \
- --report_to=wandb \
- --push_to_hub
-```
-
-We discovered that, compared to training with SD-1.5 as the pretrained model, SDXL-0.9 results in a lower training loss value (SD-1.5 yields 0.0599, SDXL scores 0.0254). Moreover, from a visual perspective, the results obtained using SDXL demonstrated fewer artifacts and richer detail. Notably, SDXL starts to preserve the structure of the original image earlier on.
-
-The following two GIFs provide intuitive visual results. We observed, for each step, what kind of results could be achieved using the same input image with "make it in Japan" as the prompt. It can be seen that SDXL starts preserving the details of the original image earlier, resulting in higher-fidelity outcomes sooner.
-
-* SD-1.5: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sd_ip2p_training_val_img_progress.gif
-
-
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-9a36a7ca.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-9a36a7ca.css
deleted file mode 100644
index cca598778f233cad73ec7066b69bb4e609c35cb2..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-9a36a7ca.css
+++ /dev/null
@@ -1 +0,0 @@
-input.svelte-q8uklq{position:absolute;top:var(--size-2);right:var(--size-2);bottom:var(--size-2);left:var(--size-2);flex:1 1 0%;transform:translate(-.1px);outline:none;border:none;background:transparent}span.svelte-q8uklq{flex:1 1 0%;outline:none;padding:var(--size-2)}.header.svelte-q8uklq{transform:translate(0);font:var(--weight-bold)}.edit.svelte-q8uklq{opacity:0;pointer-events:none}table.svelte-1jok1de.svelte-1jok1de{position:relative;overflow-y:scroll;overflow-x:scroll;-webkit-overflow-scrolling:touch;max-height:100vh;box-sizing:border-box;display:block;padding:0;margin:0;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-md);font-family:var(--font-mono);border-spacing:0;width:100%;scroll-snap-type:x proximity}table.svelte-1jok1de .svelte-1jok1de:is(thead,tfoot,tbody){display:table;table-layout:fixed;width:100%;box-sizing:border-box}tbody.svelte-1jok1de.svelte-1jok1de{overflow-x:scroll;overflow-y:hidden}table.svelte-1jok1de tbody.svelte-1jok1de{padding-top:var(--bw-svt-p-top);padding-bottom:var(--bw-svt-p-bottom)}tbody.svelte-1jok1de.svelte-1jok1de{position:relative;box-sizing:border-box;border:0px solid currentColor}tbody.svelte-1jok1de>tr:last-child{border:none}table.svelte-1jok1de td{scroll-snap-align:start}tbody.svelte-1jok1de>tr:nth-child(2n){background:var(--table-even-background-fill)}thead.svelte-1jok1de.svelte-1jok1de{position:sticky;top:0;left:0;z-index:var(--layer-1);box-shadow:var(--shadow-drop)}.button-wrap.svelte-1bvc1p0:hover svg.svelte-1bvc1p0.svelte-1bvc1p0{color:var(--color-accent)}.button-wrap.svelte-1bvc1p0 svg.svelte-1bvc1p0.svelte-1bvc1p0{margin-right:var(--size-1);margin-left:-5px}.label.svelte-1bvc1p0 p.svelte-1bvc1p0.svelte-1bvc1p0{position:relative;z-index:var(--layer-4);margin-bottom:var(--size-2);color:var(--block-label-text-color);font-size:var(--block-label-text-size)}.table-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:relative;transition:.15s;border:1px solid var(--border-color-primary);border-radius:var(--table-radius);overflow:hidden}.table-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:focus-within{outline:none;background-color:none}.dragging.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{border-color:var(--color-accent)}.no-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{white-space:nowrap}table.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:absolute;opacity:0;transition:.15s;width:var(--size-full);table-layout:auto;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-md);font-family:var(--font-mono);border-spacing:0}div.svelte-1bvc1p0:not(.no-wrap) td.svelte-1bvc1p0.svelte-1bvc1p0{overflow-wrap:anywhere}div.no-wrap.svelte-1bvc1p0 td.svelte-1bvc1p0.svelte-1bvc1p0{overflow-x:hidden}table.fixed-layout.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{table-layout:fixed}thead.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{position:sticky;top:0;left:0;z-index:var(--layer-1);box-shadow:var(--shadow-drop)}tr.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{border-bottom:1px solid var(--border-color-primary);text-align:left}tr.svelte-1bvc1p0>.svelte-1bvc1p0+.svelte-1bvc1p0{border-right-width:0px;border-left-width:1px;border-style:solid;border-color:var(--border-color-primary)}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0,td.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{--ring-color:transparent;position:relative;outline:none;box-shadow:inset 0 0 0 1px 
var(--ring-color);padding:0}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:first-child{border-top-left-radius:var(--table-radius)}th.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:last-child{border-top-right-radius:var(--table-radius)}th.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0,td.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{--ring-color:var(--color-accent)}tr.svelte-1bvc1p0:last-child td.svelte-1bvc1p0.svelte-1bvc1p0:first-child{border-bottom-left-radius:var(--table-radius)}tr.svelte-1bvc1p0:last-child td.svelte-1bvc1p0.svelte-1bvc1p0:last-child{border-bottom-right-radius:var(--table-radius)}tr.svelte-1bvc1p0 th.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-even-background-fill)}th.svelte-1bvc1p0 svg.svelte-1bvc1p0.svelte-1bvc1p0{fill:currentColor;font-size:10px}.sort-button.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;flex:none;justify-content:center;align-items:center;transition:.15s;cursor:pointer;padding:var(--size-2);color:var(--body-text-color-subdued);line-height:var(--text-sm)}.sort-button.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0:hover{color:var(--body-text-color)}.des.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{transform:scaleY(-1)}.sort-button.sorted.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{color:var(--color-accent)}.editing.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-editing)}.cell-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;align-items:center;outline:none;height:var(--size-full);min-height:var(--size-9)}.controls-wrap.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{display:flex;justify-content:flex-end;padding-top:var(--size-2)}.controls-wrap.svelte-1bvc1p0>.svelte-1bvc1p0+.svelte-1bvc1p0{margin-left:var(--size-1)}.row_odd.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--table-odd-background-fill)}.row_odd.focus.svelte-1bvc1p0.svelte-1bvc1p0.svelte-1bvc1p0{background:var(--background-fill-primary)}
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/func2subr.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/func2subr.py
deleted file mode 100644
index 2eedc0ade85e8b2ffa99d376fd20cd4eaf2772b0..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/func2subr.py
+++ /dev/null
@@ -1,327 +0,0 @@
-#!/usr/bin/env python3
-"""
-
-Rules for building C/API module with f2py2e.
-
-Copyright 1999,2000 Pearu Peterson all rights reserved,
-Pearu Peterson
-Permission to use, modify, and distribute this software is given under the
-terms of the NumPy License.
-
-NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
-$Date: 2004/11/26 11:13:06 $
-Pearu Peterson
-
-"""
-import copy
-
-from .auxfuncs import (
- getfortranname, isexternal, isfunction, isfunction_wrap, isintent_in,
- isintent_out, islogicalfunction, ismoduleroutine, isscalar,
- issubroutine, issubroutine_wrap, outmess, show
-)
-
-from ._isocbind import isoc_kindmap
-
-def var2fixfortran(vars, a, fa=None, f90mode=None):
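-    # Build a Fortran declaration for argument `a` from its entry in `vars`,
-    # e.g. "character*10 name" (fixed form) or "real(kind=8) name(n)"; `fa`
-    # optionally renames the variable and `f90mode` selects free-form
-    # (len=/kind=) syntax for the character/kind selectors.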
- if fa is None:
- fa = a
- if a not in vars:
- show(vars)
- outmess('var2fixfortran: No definition for argument "%s".\n' % a)
- return ''
- if 'typespec' not in vars[a]:
- show(vars[a])
- outmess('var2fixfortran: No typespec for argument "%s".\n' % a)
- return ''
- vardef = vars[a]['typespec']
- if vardef == 'type' and 'typename' in vars[a]:
- vardef = '%s(%s)' % (vardef, vars[a]['typename'])
- selector = {}
- lk = ''
- if 'kindselector' in vars[a]:
- selector = vars[a]['kindselector']
- lk = 'kind'
- elif 'charselector' in vars[a]:
- selector = vars[a]['charselector']
- lk = 'len'
- if '*' in selector:
- if f90mode:
- if selector['*'] in ['*', ':', '(*)']:
- vardef = '%s(len=*)' % (vardef)
- else:
- vardef = '%s(%s=%s)' % (vardef, lk, selector['*'])
- else:
- if selector['*'] in ['*', ':']:
- vardef = '%s*(%s)' % (vardef, selector['*'])
- else:
- vardef = '%s*%s' % (vardef, selector['*'])
- else:
- if 'len' in selector:
- vardef = '%s(len=%s' % (vardef, selector['len'])
- if 'kind' in selector:
- vardef = '%s,kind=%s)' % (vardef, selector['kind'])
- else:
- vardef = '%s)' % (vardef)
- elif 'kind' in selector:
- vardef = '%s(kind=%s)' % (vardef, selector['kind'])
-
- vardef = '%s %s' % (vardef, fa)
- if 'dimension' in vars[a]:
- vardef = '%s(%s)' % (vardef, ','.join(vars[a]['dimension']))
- return vardef
-
-def useiso_c_binding(rout):
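-    # Return True if any argument of the routine uses an ISO_C_BINDING kind,
-    # in which case the generated wrapper must add a "use iso_c_binding" statement.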
- useisoc = False
- for key, value in rout['vars'].items():
- kind_value = value.get('kindselector', {}).get('kind')
- if kind_value in isoc_kindmap:
- return True
- return useisoc
-
-def createfuncwrapper(rout, signature=0):
- assert isfunction(rout)
-
- extra_args = []
- vars = rout['vars']
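-    # For assumed-shape dimensions (':'), add hidden integer arguments that carry
-    # the runtime extent (shape(a, i)) so the wrapper can pass explicit sizes to
-    # the wrapped routine.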
- for a in rout['args']:
- v = rout['vars'][a]
- for i, d in enumerate(v.get('dimension', [])):
- if d == ':':
- dn = 'f2py_%s_d%s' % (a, i)
- dv = dict(typespec='integer', intent=['hide'])
- dv['='] = 'shape(%s, %s)' % (a, i)
- extra_args.append(dn)
- vars[dn] = dv
- v['dimension'][i] = dn
- rout['args'].extend(extra_args)
- need_interface = bool(extra_args)
-
- ret = ['']
-
- def add(line, ret=ret):
- ret[0] = '%s\n %s' % (ret[0], line)
- name = rout['name']
- fortranname = getfortranname(rout)
- f90mode = ismoduleroutine(rout)
- newname = '%sf2pywrap' % (name)
-
- if newname not in vars:
- vars[newname] = vars[name]
- args = [newname] + rout['args'][1:]
- else:
- args = [newname] + rout['args']
-
- l_tmpl = var2fixfortran(vars, name, '@@@NAME@@@', f90mode)
- if l_tmpl[:13] == 'character*(*)':
- if f90mode:
- l_tmpl = 'character(len=10)' + l_tmpl[13:]
- else:
- l_tmpl = 'character*10' + l_tmpl[13:]
- charselect = vars[name]['charselector']
- if charselect.get('*', '') == '(*)':
- charselect['*'] = '10'
-
- l1 = l_tmpl.replace('@@@NAME@@@', newname)
- rl = None
-
- useisoc = useiso_c_binding(rout)
- sargs = ', '.join(args)
- if f90mode:
-        # gh-23598: avoid a warning. For module routines, the wrapper can be
-        # generated with the function name still present in the argument list;
-        # it is not needed there, so strip it from the wrapper signature.
- sargs = sargs.replace(f"{name}, ", '')
- args = [arg for arg in args if arg != name]
- rout['args'] = args
- add('subroutine f2pywrap_%s_%s (%s)' %
- (rout['modulename'], name, sargs))
- if not signature:
- add('use %s, only : %s' % (rout['modulename'], fortranname))
- if useisoc:
- add('use iso_c_binding')
- else:
- add('subroutine f2pywrap%s (%s)' % (name, sargs))
- if useisoc:
- add('use iso_c_binding')
- if not need_interface:
- add('external %s' % (fortranname))
- rl = l_tmpl.replace('@@@NAME@@@', '') + ' ' + fortranname
-
- if need_interface:
- for line in rout['saved_interface'].split('\n'):
- if line.lstrip().startswith('use ') and '__user__' not in line:
- add(line)
-
- args = args[1:]
- dumped_args = []
- for a in args:
- if isexternal(vars[a]):
- add('external %s' % (a))
- dumped_args.append(a)
- for a in args:
- if a in dumped_args:
- continue
- if isscalar(vars[a]):
- add(var2fixfortran(vars, a, f90mode=f90mode))
- dumped_args.append(a)
- for a in args:
- if a in dumped_args:
- continue
- if isintent_in(vars[a]):
- add(var2fixfortran(vars, a, f90mode=f90mode))
- dumped_args.append(a)
- for a in args:
- if a in dumped_args:
- continue
- add(var2fixfortran(vars, a, f90mode=f90mode))
-
- add(l1)
- if rl is not None:
- add(rl)
-
- if need_interface:
- if f90mode:
- # f90 module already defines needed interface
- pass
- else:
- add('interface')
- add(rout['saved_interface'].lstrip())
- add('end interface')
-
- sargs = ', '.join([a for a in args if a not in extra_args])
-
- if not signature:
- if islogicalfunction(rout):
- add('%s = .not.(.not.%s(%s))' % (newname, fortranname, sargs))
- else:
- add('%s = %s(%s)' % (newname, fortranname, sargs))
- if f90mode:
- add('end subroutine f2pywrap_%s_%s' % (rout['modulename'], name))
- else:
- add('end')
- return ret[0]
-
-
-def createsubrwrapper(rout, signature=0):
- assert issubroutine(rout)
-
- extra_args = []
- vars = rout['vars']
- for a in rout['args']:
- v = rout['vars'][a]
- for i, d in enumerate(v.get('dimension', [])):
- if d == ':':
- dn = 'f2py_%s_d%s' % (a, i)
- dv = dict(typespec='integer', intent=['hide'])
- dv['='] = 'shape(%s, %s)' % (a, i)
- extra_args.append(dn)
- vars[dn] = dv
- v['dimension'][i] = dn
- rout['args'].extend(extra_args)
- need_interface = bool(extra_args)
-
- ret = ['']
-
- def add(line, ret=ret):
- ret[0] = '%s\n %s' % (ret[0], line)
- name = rout['name']
- fortranname = getfortranname(rout)
- f90mode = ismoduleroutine(rout)
-
- args = rout['args']
-
- useisoc = useiso_c_binding(rout)
- sargs = ', '.join(args)
- if f90mode:
- add('subroutine f2pywrap_%s_%s (%s)' %
- (rout['modulename'], name, sargs))
- if useisoc:
- add('use iso_c_binding')
- if not signature:
- add('use %s, only : %s' % (rout['modulename'], fortranname))
- else:
- add('subroutine f2pywrap%s (%s)' % (name, sargs))
- if useisoc:
- add('use iso_c_binding')
- if not need_interface:
- add('external %s' % (fortranname))
-
- if need_interface:
- for line in rout['saved_interface'].split('\n'):
- if line.lstrip().startswith('use ') and '__user__' not in line:
- add(line)
-
- dumped_args = []
- for a in args:
- if isexternal(vars[a]):
- add('external %s' % (a))
- dumped_args.append(a)
- for a in args:
- if a in dumped_args:
- continue
- if isscalar(vars[a]):
- add(var2fixfortran(vars, a, f90mode=f90mode))
- dumped_args.append(a)
- for a in args:
- if a in dumped_args:
- continue
- add(var2fixfortran(vars, a, f90mode=f90mode))
-
- if need_interface:
- if f90mode:
- # f90 module already defines needed interface
- pass
- else:
- add('interface')
- for line in rout['saved_interface'].split('\n'):
- if line.lstrip().startswith('use ') and '__user__' in line:
- continue
- add(line)
- add('end interface')
-
- sargs = ', '.join([a for a in args if a not in extra_args])
-
- if not signature:
- add('call %s(%s)' % (fortranname, sargs))
- if f90mode:
- add('end subroutine f2pywrap_%s_%s' % (rout['modulename'], name))
- else:
- add('end')
- return ret[0]
-
-
-def assubr(rout):
- if isfunction_wrap(rout):
- fortranname = getfortranname(rout)
- name = rout['name']
- outmess('\t\tCreating wrapper for Fortran function "%s"("%s")...\n' % (
- name, fortranname))
- rout = copy.copy(rout)
- fname = name
- rname = fname
- if 'result' in rout:
- rname = rout['result']
- rout['vars'][fname] = rout['vars'][rname]
- fvar = rout['vars'][fname]
- if not isintent_out(fvar):
- if 'intent' not in fvar:
- fvar['intent'] = []
- fvar['intent'].append('out')
- flag = 1
- for i in fvar['intent']:
- if i.startswith('out='):
- flag = 0
- break
- if flag:
- fvar['intent'].append('out=%s' % (rname))
- rout['args'][:] = [fname] + rout['args']
- return rout, createfuncwrapper(rout)
- if issubroutine_wrap(rout):
- fortranname = getfortranname(rout)
- name = rout['name']
- outmess('\t\tCreating wrapper for Fortran subroutine "%s"("%s")...\n'
- % (name, fortranname))
- rout = copy.copy(rout)
- return rout, createsubrwrapper(rout)
- return rout, ''
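A minimal standalone sketch (not part of f2py) of the assumed-size handling used by `createfuncwrapper` and `createsubrwrapper` above: every `:` dimension of a dummy argument is replaced by a hidden integer argument whose default is taken from `shape(arg, i)`. The function name and dict layout below only loosely mirror the `rout['vars']` structure and are illustrative.

```python
# Sketch of the ':' -> hidden-dimension rewrite performed by the wrappers above.
def add_hidden_dims(args, variables):
    """Replace assumed-size ':' dimensions with hidden integer arguments."""
    extra_args = []
    for arg in args:
        var = variables[arg]
        for i, dim in enumerate(var.get('dimension', [])):
            if dim == ':':
                hidden = 'f2py_%s_d%s' % (arg, i)
                variables[hidden] = {
                    'typespec': 'integer',
                    'intent': ['hide'],
                    '=': 'shape(%s, %s)' % (arg, i),
                }
                var['dimension'][i] = hidden
                extra_args.append(hidden)
    return args + extra_args

# Example: a rank-1 dummy argument 'x' with an assumed dimension.
variables = {'x': {'typespec': 'real', 'dimension': [':']}}
print(add_hidden_dims(['x'], variables))    # ['x', 'f2py_x_d0']
print(variables['x']['dimension'])          # ['f2py_x_d0']
```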
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_constructors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_constructors.py
deleted file mode 100644
index 3a076a6828a9829efe8b70af99dd3d997dc51d32..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_constructors.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas.core.arrays import TimedeltaArray
-
-
-class TestTimedeltaArrayConstructor:
- def test_only_1dim_accepted(self):
- # GH#25282
- arr = np.array([0, 1, 2, 3], dtype="m8[h]").astype("m8[ns]")
-
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 3-dim, we allow 2D to sneak in for ops purposes GH#29853
- TimedeltaArray(arr.reshape(2, 2, 1))
-
- with pytest.raises(ValueError, match="Only 1-dimensional"):
- # 0-dim
- TimedeltaArray(arr[[0]].squeeze())
-
- def test_freq_validation(self):
- # ensure that the public constructor cannot create an invalid instance
- arr = np.array([0, 0, 1], dtype=np.int64) * 3600 * 10**9
-
- msg = (
- "Inferred frequency None from passed values does not "
- "conform to passed frequency D"
- )
- with pytest.raises(ValueError, match=msg):
- TimedeltaArray(arr.view("timedelta64[ns]"), freq="D")
-
- def test_non_array_raises(self):
- with pytest.raises(ValueError, match="list"):
- TimedeltaArray([1, 2, 3])
-
- def test_other_type_raises(self):
- with pytest.raises(ValueError, match="dtype bool cannot be converted"):
- TimedeltaArray(np.array([1, 2, 3], dtype="bool"))
-
- def test_incorrect_dtype_raises(self):
- # TODO: why TypeError for 'category' but ValueError for i8?
- with pytest.raises(
- ValueError, match=r"category cannot be converted to timedelta64\[ns\]"
- ):
- TimedeltaArray(np.array([1, 2, 3], dtype="i8"), dtype="category")
-
- with pytest.raises(
- ValueError, match=r"dtype int64 cannot be converted to timedelta64\[ns\]"
- ):
- TimedeltaArray(np.array([1, 2, 3], dtype="i8"), dtype=np.dtype("int64"))
-
- def test_copy(self):
- data = np.array([1, 2, 3], dtype="m8[ns]")
- arr = TimedeltaArray(data, copy=False)
- assert arr._ndarray is data
-
- arr = TimedeltaArray(data, copy=True)
- assert arr._ndarray is not data
- assert arr._ndarray.base is not data
-
- def test_from_sequence_dtype(self):
- msg = "dtype .*object.* cannot be converted to timedelta64"
- with pytest.raises(ValueError, match=msg):
- TimedeltaArray._from_sequence([], dtype=object)
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_any_all.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_any_all.py
deleted file mode 100644
index 57a83335be849c86adcefb9188d125ee08e30a78..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_any_all.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import builtins
-
-import numpy as np
-import pytest
-
-import pandas as pd
-from pandas import (
- DataFrame,
- Index,
- Series,
- isna,
-)
-import pandas._testing as tm
-
-
-@pytest.mark.parametrize("agg_func", ["any", "all"])
-@pytest.mark.parametrize(
- "vals",
- [
- ["foo", "bar", "baz"],
- ["foo", "", ""],
- ["", "", ""],
- [1, 2, 3],
- [1, 0, 0],
- [0, 0, 0],
- [1.0, 2.0, 3.0],
- [1.0, 0.0, 0.0],
- [0.0, 0.0, 0.0],
- [True, True, True],
- [True, False, False],
- [False, False, False],
- [np.nan, np.nan, np.nan],
- ],
-)
-def test_groupby_bool_aggs(skipna, agg_func, vals):
- df = DataFrame({"key": ["a"] * 3 + ["b"] * 3, "val": vals * 2})
-
- # Figure out expectation using Python builtin
- exp = getattr(builtins, agg_func)(vals)
-
- # edge case for missing data with skipna and 'any'
- if skipna and all(isna(vals)) and agg_func == "any":
- exp = False
-
- expected = DataFrame(
- [exp] * 2, columns=["val"], index=Index(["a", "b"], name="key")
- )
- result = getattr(df.groupby("key"), agg_func)(skipna=skipna)
- tm.assert_frame_equal(result, expected)
-
-
-def test_any():
- df = DataFrame(
- [[1, 2, "foo"], [1, np.nan, "bar"], [3, np.nan, "baz"]],
- columns=["A", "B", "C"],
- )
- expected = DataFrame(
- [[True, True], [False, True]], columns=["B", "C"], index=[1, 3]
- )
- expected.index.name = "A"
- result = df.groupby("A").any()
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-def test_bool_aggs_dup_column_labels(bool_agg_func):
- # GH#21668
- df = DataFrame([[True, True]], columns=["a", "a"])
- grp_by = df.groupby([0])
- result = getattr(grp_by, bool_agg_func)()
-
- expected = df.set_axis(np.array([0]))
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-@pytest.mark.parametrize(
- "data",
- [
- [False, False, False],
- [True, True, True],
- [pd.NA, pd.NA, pd.NA],
- [False, pd.NA, False],
- [True, pd.NA, True],
- [True, pd.NA, False],
- ],
-)
-def test_masked_kleene_logic(bool_agg_func, skipna, data):
- # GH#37506
- ser = Series(data, dtype="boolean")
-
- # The result should match aggregating on the whole series. Correctness
- # there is verified in test_reductions.py::test_any_all_boolean_kleene_logic
- expected_data = getattr(ser, bool_agg_func)(skipna=skipna)
- expected = Series(expected_data, index=np.array([0]), dtype="boolean")
-
- result = ser.groupby([0, 0, 0]).agg(bool_agg_func, skipna=skipna)
- tm.assert_series_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "dtype1,dtype2,exp_col1,exp_col2",
- [
- (
- "float",
- "Float64",
- np.array([True], dtype=bool),
- pd.array([pd.NA], dtype="boolean"),
- ),
- (
- "Int64",
- "float",
- pd.array([pd.NA], dtype="boolean"),
- np.array([True], dtype=bool),
- ),
- (
- "Int64",
- "Int64",
- pd.array([pd.NA], dtype="boolean"),
- pd.array([pd.NA], dtype="boolean"),
- ),
- (
- "Float64",
- "boolean",
- pd.array([pd.NA], dtype="boolean"),
- pd.array([pd.NA], dtype="boolean"),
- ),
- ],
-)
-def test_masked_mixed_types(dtype1, dtype2, exp_col1, exp_col2):
- # GH#37506
- data = [1.0, np.nan]
- df = DataFrame(
- {"col1": pd.array(data, dtype=dtype1), "col2": pd.array(data, dtype=dtype2)}
- )
- result = df.groupby([1, 1]).agg("all", skipna=False)
-
- expected = DataFrame({"col1": exp_col1, "col2": exp_col2}, index=np.array([1]))
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-@pytest.mark.parametrize("dtype", ["Int64", "Float64", "boolean"])
-def test_masked_bool_aggs_skipna(bool_agg_func, dtype, skipna, frame_or_series):
- # GH#40585
- obj = frame_or_series([pd.NA, 1], dtype=dtype)
- expected_res = True
- if not skipna and bool_agg_func == "all":
- expected_res = pd.NA
- expected = frame_or_series([expected_res], index=np.array([1]), dtype="boolean")
-
- result = obj.groupby([1, 1]).agg(bool_agg_func, skipna=skipna)
- tm.assert_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "bool_agg_func,data,expected_res",
- [
- ("any", [pd.NA, np.nan], False),
- ("any", [pd.NA, 1, np.nan], True),
- ("all", [pd.NA, pd.NaT], True),
- ("all", [pd.NA, False, pd.NaT], False),
- ],
-)
-def test_object_type_missing_vals(bool_agg_func, data, expected_res, frame_or_series):
- # GH#37501
- obj = frame_or_series(data, dtype=object)
- result = obj.groupby([1] * len(data)).agg(bool_agg_func)
- expected = frame_or_series([expected_res], index=np.array([1]), dtype="bool")
- tm.assert_equal(result, expected)
-
-
-@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-def test_object_NA_raises_with_skipna_false(bool_agg_func):
- # GH#37501
- ser = Series([pd.NA], dtype=object)
- with pytest.raises(TypeError, match="boolean value of NA is ambiguous"):
- ser.groupby([1]).agg(bool_agg_func, skipna=False)
-
-
-@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
-def test_empty(frame_or_series, bool_agg_func):
- # GH 45231
- kwargs = {"columns": ["a"]} if frame_or_series is DataFrame else {"name": "a"}
- obj = frame_or_series(**kwargs, dtype=object)
- result = getattr(obj.groupby(obj.index), bool_agg_func)()
- expected = frame_or_series(**kwargs, dtype=bool)
- tm.assert_equal(result, expected)
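The deleted tests above exercise `DataFrameGroupBy.any`/`.all`, including Kleene logic for nullable boolean dtypes. A short, hedged illustration of the public behaviour they cover (expected values follow the test cases above, not a fresh run):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [True, False, False, False]})
print(df.groupby("key").any())   # 'a' -> True,  'b' -> False
print(df.groupby("key").all())   # 'a' -> False, 'b' -> False

# Nullable boolean columns use Kleene logic, so NA can propagate when skipna=False.
ser = pd.Series([True, pd.NA, True], dtype="boolean")
print(ser.groupby([0, 0, 0]).agg("all", skipna=False))  # the aggregated value is <NA>
```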
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_groupby_shift_diff.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_groupby_shift_diff.py
deleted file mode 100644
index bb4b9aa866ac9e2f897b6ce8ffd08cfda0c9a491..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/test_groupby_shift_diff.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import (
- DataFrame,
- NaT,
- Series,
- Timedelta,
- Timestamp,
- date_range,
-)
-import pandas._testing as tm
-
-
-def test_group_shift_with_null_key():
- # This test is designed to replicate the segfault in issue #13813.
- n_rows = 1200
-
- # Generate a moderately large dataframe with occasional missing
- # values in column `B`, and then group by [`A`, `B`]. This should
- # force `-1` in `labels` array of `g.grouper.group_info` exactly
- # at those places, where the group-by key is partially missing.
- df = DataFrame(
- [(i % 12, i % 3 if i % 3 else np.nan, i) for i in range(n_rows)],
- dtype=float,
- columns=["A", "B", "Z"],
- index=None,
- )
- g = df.groupby(["A", "B"])
-
- expected = DataFrame(
- [(i + 12 if i % 3 and i < n_rows - 12 else np.nan) for i in range(n_rows)],
- dtype=float,
- columns=["Z"],
- index=None,
- )
- result = g.shift(-1)
-
- tm.assert_frame_equal(result, expected)
-
-
-def test_group_shift_with_fill_value():
- # GH #24128
- n_rows = 24
- df = DataFrame(
- [(i % 12, i % 3, i) for i in range(n_rows)],
- dtype=float,
- columns=["A", "B", "Z"],
- index=None,
- )
- g = df.groupby(["A", "B"])
-
- expected = DataFrame(
- [(i + 12 if i < n_rows - 12 else 0) for i in range(n_rows)],
- dtype=float,
- columns=["Z"],
- index=None,
- )
- result = g.shift(-1, fill_value=0)
-
- tm.assert_frame_equal(result, expected)
-
-
-def test_group_shift_lose_timezone():
- # GH 30134
- now_dt = Timestamp.utcnow().as_unit("ns")
- df = DataFrame({"a": [1, 1], "date": now_dt})
- result = df.groupby("a").shift(0).iloc[0]
- expected = Series({"date": now_dt}, name=result.name)
- tm.assert_series_equal(result, expected)
-
-
-def test_group_diff_real_series(any_real_numpy_dtype):
- df = DataFrame(
- {"a": [1, 2, 3, 3, 2], "b": [1, 2, 3, 4, 5]},
- dtype=any_real_numpy_dtype,
- )
- result = df.groupby("a")["b"].diff()
- exp_dtype = "float"
- if any_real_numpy_dtype in ["int8", "int16", "float32"]:
- exp_dtype = "float32"
- expected = Series([np.nan, np.nan, np.nan, 1.0, 3.0], dtype=exp_dtype, name="b")
- tm.assert_series_equal(result, expected)
-
-
-def test_group_diff_real_frame(any_real_numpy_dtype):
- df = DataFrame(
- {
- "a": [1, 2, 3, 3, 2],
- "b": [1, 2, 3, 4, 5],
- "c": [1, 2, 3, 4, 6],
- },
- dtype=any_real_numpy_dtype,
- )
- result = df.groupby("a").diff()
- exp_dtype = "float"
- if any_real_numpy_dtype in ["int8", "int16", "float32"]:
- exp_dtype = "float32"
- expected = DataFrame(
- {
- "b": [np.nan, np.nan, np.nan, 1.0, 3.0],
- "c": [np.nan, np.nan, np.nan, 1.0, 4.0],
- },
- dtype=exp_dtype,
- )
- tm.assert_frame_equal(result, expected)
-
-
-@pytest.mark.parametrize(
- "data",
- [
- [
- Timestamp("2013-01-01"),
- Timestamp("2013-01-02"),
- Timestamp("2013-01-03"),
- ],
- [Timedelta("5 days"), Timedelta("6 days"), Timedelta("7 days")],
- ],
-)
-def test_group_diff_datetimelike(data):
- df = DataFrame({"a": [1, 2, 2], "b": data})
- result = df.groupby("a")["b"].diff()
- expected = Series([NaT, NaT, Timedelta("1 days")], name="b")
- tm.assert_series_equal(result, expected)
-
-
-def test_group_diff_bool():
- df = DataFrame({"a": [1, 2, 3, 3, 2], "b": [True, True, False, False, True]})
- result = df.groupby("a")["b"].diff()
- expected = Series([np.nan, np.nan, np.nan, False, False], name="b")
- tm.assert_series_equal(result, expected)
-
-
-def test_group_diff_object_raises(object_dtype):
- df = DataFrame(
- {"a": ["foo", "bar", "bar"], "b": ["baz", "foo", "foo"]}, dtype=object_dtype
- )
- with pytest.raises(TypeError, match=r"unsupported operand type\(s\) for -"):
- df.groupby("a")["b"].diff()
-
-
-def test_empty_shift_with_fill():
- # GH 41264, single-index check
- df = DataFrame(columns=["a", "b", "c"])
- shifted = df.groupby(["a"]).shift(1)
- shifted_with_fill = df.groupby(["a"]).shift(1, fill_value=0)
- tm.assert_frame_equal(shifted, shifted_with_fill)
- tm.assert_index_equal(shifted.index, shifted_with_fill.index)
-
-
-def test_multindex_empty_shift_with_fill():
- # GH 41264, multi-index check
- df = DataFrame(columns=["a", "b", "c"])
- shifted = df.groupby(["a", "b"]).shift(1)
- shifted_with_fill = df.groupby(["a", "b"]).shift(1, fill_value=0)
- tm.assert_frame_equal(shifted, shifted_with_fill)
- tm.assert_index_equal(shifted.index, shifted_with_fill.index)
-
-
-def test_shift_periods_freq():
- # GH 54093
- data = {"a": [1, 2, 3, 4, 5, 6], "b": [0, 0, 0, 1, 1, 1]}
- df = DataFrame(data, index=date_range(start="20100101", periods=6))
- result = df.groupby(df.index).shift(periods=-2, freq="D")
- expected = DataFrame(data, index=date_range(start="2009-12-30", periods=6))
- tm.assert_frame_equal(result, expected)
-
-
-def test_shift_deprecate_freq_and_fill_value():
- # GH 53832
- data = {"a": [1, 2, 3, 4, 5, 6], "b": [0, 0, 0, 1, 1, 1]}
- df = DataFrame(data, index=date_range(start="20100101", periods=6))
- msg = (
- "Passing a 'freq' together with a 'fill_value' silently ignores the fill_value"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- df.groupby(df.index).shift(periods=-2, freq="D", fill_value="1")
-
-
-def test_shift_disallow_suffix_if_periods_is_int():
- # GH#44424
- data = {"a": [1, 2, 3, 4, 5, 6], "b": [0, 0, 0, 1, 1, 1]}
- df = DataFrame(data)
- msg = "Cannot specify `suffix` if `periods` is an int."
- with pytest.raises(ValueError, match=msg):
- df.groupby("b").shift(1, suffix="fails")
-
-
-def test_group_shift_with_multiple_periods():
- # GH#44424
- df = DataFrame({"a": [1, 2, 3, 3, 2], "b": [True, True, False, False, True]})
-
- shifted_df = df.groupby("b")[["a"]].shift([0, 1])
- expected_df = DataFrame(
- {"a_0": [1, 2, 3, 3, 2], "a_1": [np.nan, 1.0, np.nan, 3.0, 2.0]}
- )
- tm.assert_frame_equal(shifted_df, expected_df)
-
- # series
- shifted_series = df.groupby("b")["a"].shift([0, 1])
- tm.assert_frame_equal(shifted_series, expected_df)
-
-
-def test_group_shift_with_multiple_periods_and_freq():
- # GH#44424
- df = DataFrame(
- {"a": [1, 2, 3, 4, 5], "b": [True, True, False, False, True]},
- index=date_range("1/1/2000", periods=5, freq="H"),
- )
- shifted_df = df.groupby("b")[["a"]].shift(
- [0, 1],
- freq="H",
- )
- expected_df = DataFrame(
- {
- "a_0": [1.0, 2.0, 3.0, 4.0, 5.0, np.nan],
- "a_1": [
- np.nan,
- 1.0,
- 2.0,
- 3.0,
- 4.0,
- 5.0,
- ],
- },
- index=date_range("1/1/2000", periods=6, freq="H"),
- )
- tm.assert_frame_equal(shifted_df, expected_df)
-
-
-def test_group_shift_with_multiple_periods_and_fill_value():
- # GH#44424
- df = DataFrame(
- {"a": [1, 2, 3, 4, 5], "b": [True, True, False, False, True]},
- )
- shifted_df = df.groupby("b")[["a"]].shift([0, 1], fill_value=-1)
- expected_df = DataFrame(
- {"a_0": [1, 2, 3, 4, 5], "a_1": [-1, 1, -1, 3, 2]},
- )
- tm.assert_frame_equal(shifted_df, expected_df)
-
-
-def test_group_shift_with_multiple_periods_and_both_fill_and_freq_deprecated():
- # GH#44424
- df = DataFrame(
- {"a": [1, 2, 3, 4, 5], "b": [True, True, False, False, True]},
- index=date_range("1/1/2000", periods=5, freq="H"),
- )
- msg = (
- "Passing a 'freq' together with a 'fill_value' silently ignores the "
- "fill_value"
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- df.groupby("b")[["a"]].shift([1, 2], fill_value=1, freq="H")
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_equals.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_equals.py
deleted file mode 100644
index a8353f301a3c39a50b2a0c5541722551ff660e30..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_equals.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import (
- Categorical,
- CategoricalIndex,
- Index,
- MultiIndex,
-)
-
-
-class TestEquals:
- def test_equals_categorical(self):
- ci1 = CategoricalIndex(["a", "b"], categories=["a", "b"], ordered=True)
- ci2 = CategoricalIndex(["a", "b"], categories=["a", "b", "c"], ordered=True)
-
- assert ci1.equals(ci1)
- assert not ci1.equals(ci2)
- assert ci1.equals(ci1.astype(object))
- assert ci1.astype(object).equals(ci1)
-
- assert (ci1 == ci1).all()
- assert not (ci1 != ci1).all()
- assert not (ci1 > ci1).all()
- assert not (ci1 < ci1).all()
- assert (ci1 <= ci1).all()
- assert (ci1 >= ci1).all()
-
- assert not (ci1 == 1).all()
- assert (ci1 == Index(["a", "b"])).all()
- assert (ci1 == ci1.values).all()
-
- # invalid comparisons
- with pytest.raises(ValueError, match="Lengths must match"):
- ci1 == Index(["a", "b", "c"])
-
- msg = "Categoricals can only be compared if 'categories' are the same"
- with pytest.raises(TypeError, match=msg):
- ci1 == ci2
- with pytest.raises(TypeError, match=msg):
- ci1 == Categorical(ci1.values, ordered=False)
- with pytest.raises(TypeError, match=msg):
- ci1 == Categorical(ci1.values, categories=list("abc"))
-
- # tests
- # make sure that we are testing for category inclusion properly
- ci = CategoricalIndex(list("aabca"), categories=["c", "a", "b"])
- assert not ci.equals(list("aabca"))
- # Same categories, but different order
- # Unordered
- assert ci.equals(CategoricalIndex(list("aabca")))
- # Ordered
- assert not ci.equals(CategoricalIndex(list("aabca"), ordered=True))
- assert ci.equals(ci.copy())
-
- ci = CategoricalIndex(list("aabca") + [np.nan], categories=["c", "a", "b"])
- assert not ci.equals(list("aabca"))
- assert not ci.equals(CategoricalIndex(list("aabca")))
- assert ci.equals(ci.copy())
-
- ci = CategoricalIndex(list("aabca") + [np.nan], categories=["c", "a", "b"])
- assert not ci.equals(list("aabca") + [np.nan])
- assert ci.equals(CategoricalIndex(list("aabca") + [np.nan]))
- assert not ci.equals(CategoricalIndex(list("aabca") + [np.nan], ordered=True))
- assert ci.equals(ci.copy())
-
- def test_equals_categorical_unordered(self):
- # https://github.com/pandas-dev/pandas/issues/16603
- a = CategoricalIndex(["A"], categories=["A", "B"])
- b = CategoricalIndex(["A"], categories=["B", "A"])
- c = CategoricalIndex(["C"], categories=["B", "A"])
- assert a.equals(b)
- assert not a.equals(c)
- assert not b.equals(c)
-
- def test_equals_non_category(self):
- # GH#37667 Case where other contains a value not among ci's
- # categories ("D") and also contains np.nan
- ci = CategoricalIndex(["A", "B", np.nan, np.nan])
- other = Index(["A", "B", "D", np.nan])
-
- assert not ci.equals(other)
-
- def test_equals_multiindex(self):
- # dont raise NotImplementedError when calling is_dtype_compat
-
- mi = MultiIndex.from_arrays([["A", "B", "C", "D"], range(4)])
- ci = mi.to_flat_index().astype("category")
-
- assert not ci.equals(mi)
-
- def test_equals_string_dtype(self, any_string_dtype):
- # GH#55364
- idx = CategoricalIndex(list("abc"), name="B")
- other = Index(["a", "b", "c"], name="B", dtype=any_string_dtype)
- assert idx.equals(other)
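A brief hedged illustration of the `CategoricalIndex.equals` semantics the deleted tests above assert (the cases are lifted directly from those tests):

```python
import pandas as pd

# Same values with the categories listed in a different order: still equal.
a = pd.CategoricalIndex(["A"], categories=["A", "B"])
b = pd.CategoricalIndex(["A"], categories=["B", "A"])
print(a.equals(b))  # True

# An ordered index is not equal to an unordered one with the same values.
ci = pd.CategoricalIndex(list("aabca"), categories=["c", "a", "b"])
print(ci.equals(pd.CategoricalIndex(list("aabca"))))                # True
print(ci.equals(pd.CategoricalIndex(list("aabca"), ordered=True)))  # False
```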
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/fallback.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/fallback.py
deleted file mode 100644
index b27acb2951539443ee54f1bfd1a291e58d08bf5f..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/msgpack/fallback.py
+++ /dev/null
@@ -1,1012 +0,0 @@
-"""Fallback pure Python implementation of msgpack"""
-from datetime import datetime as _DateTime
-import sys
-import struct
-
-
-PY2 = sys.version_info[0] == 2
-if PY2:
- int_types = (int, long)
-
- def dict_iteritems(d):
- return d.iteritems()
-
-
-else:
- int_types = int
- unicode = str
- xrange = range
-
- def dict_iteritems(d):
- return d.items()
-
-
-if sys.version_info < (3, 5):
- # Ugly hack...
- RecursionError = RuntimeError
-
- def _is_recursionerror(e):
- return (
- len(e.args) == 1
- and isinstance(e.args[0], str)
- and e.args[0].startswith("maximum recursion depth exceeded")
- )
-
-
-else:
-
- def _is_recursionerror(e):
- return True
-
-
-if hasattr(sys, "pypy_version_info"):
-    # The generic BytesIO-backed buffer is slow on PyPy; PyPy's own
-    # StringBuilder is fastest, so use it when available.
- from __pypy__ import newlist_hint
-
- try:
- from __pypy__.builders import BytesBuilder as StringBuilder
- except ImportError:
- from __pypy__.builders import StringBuilder
- USING_STRINGBUILDER = True
-
- class StringIO(object):
- def __init__(self, s=b""):
- if s:
- self.builder = StringBuilder(len(s))
- self.builder.append(s)
- else:
- self.builder = StringBuilder()
-
- def write(self, s):
- if isinstance(s, memoryview):
- s = s.tobytes()
- elif isinstance(s, bytearray):
- s = bytes(s)
- self.builder.append(s)
-
- def getvalue(self):
- return self.builder.build()
-
-
-else:
- USING_STRINGBUILDER = False
- from io import BytesIO as StringIO
-
- newlist_hint = lambda size: []
-
-
-from .exceptions import BufferFull, OutOfData, ExtraData, FormatError, StackError
-
-from .ext import ExtType, Timestamp
-
-
-EX_SKIP = 0
-EX_CONSTRUCT = 1
-EX_READ_ARRAY_HEADER = 2
-EX_READ_MAP_HEADER = 3
-
-TYPE_IMMEDIATE = 0
-TYPE_ARRAY = 1
-TYPE_MAP = 2
-TYPE_RAW = 3
-TYPE_BIN = 4
-TYPE_EXT = 5
-
-DEFAULT_RECURSE_LIMIT = 511
-
-
-def _check_type_strict(obj, t, type=type, tuple=tuple):
- if type(t) is tuple:
- return type(obj) in t
- else:
- return type(obj) is t
-
-
-def _get_data_from_buffer(obj):
- view = memoryview(obj)
- if view.itemsize != 1:
- raise ValueError("cannot unpack from multi-byte object")
- return view
-
-
-def unpackb(packed, **kwargs):
- """
- Unpack an object from `packed`.
-
- Raises ``ExtraData`` when *packed* contains extra bytes.
- Raises ``ValueError`` when *packed* is incomplete.
- Raises ``FormatError`` when *packed* is not valid msgpack.
-    Raises ``StackError`` when *packed* contains too deeply nested data.
- Other exceptions can be raised during unpacking.
-
- See :class:`Unpacker` for options.
- """
- unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
- unpacker.feed(packed)
- try:
- ret = unpacker._unpack()
- except OutOfData:
- raise ValueError("Unpack failed: incomplete input")
- except RecursionError as e:
- if _is_recursionerror(e):
- raise StackError
- raise
- if unpacker._got_extradata():
- raise ExtraData(ret, unpacker._get_extradata())
- return ret
-
-
-if sys.version_info < (2, 7, 6):
-
- def _unpack_from(f, b, o=0):
- """Explicit type cast for legacy struct.unpack_from"""
- return struct.unpack_from(f, bytes(b), o)
-
-
-else:
- _unpack_from = struct.unpack_from
-
-_NO_FORMAT_USED = ""
-_MSGPACK_HEADERS = {
- 0xC4: (1, _NO_FORMAT_USED, TYPE_BIN),
- 0xC5: (2, ">H", TYPE_BIN),
- 0xC6: (4, ">I", TYPE_BIN),
- 0xC7: (2, "Bb", TYPE_EXT),
- 0xC8: (3, ">Hb", TYPE_EXT),
- 0xC9: (5, ">Ib", TYPE_EXT),
- 0xCA: (4, ">f"),
- 0xCB: (8, ">d"),
- 0xCC: (1, _NO_FORMAT_USED),
- 0xCD: (2, ">H"),
- 0xCE: (4, ">I"),
- 0xCF: (8, ">Q"),
- 0xD0: (1, "b"),
- 0xD1: (2, ">h"),
- 0xD2: (4, ">i"),
- 0xD3: (8, ">q"),
- 0xD4: (1, "b1s", TYPE_EXT),
- 0xD5: (2, "b2s", TYPE_EXT),
- 0xD6: (4, "b4s", TYPE_EXT),
- 0xD7: (8, "b8s", TYPE_EXT),
- 0xD8: (16, "b16s", TYPE_EXT),
- 0xD9: (1, _NO_FORMAT_USED, TYPE_RAW),
- 0xDA: (2, ">H", TYPE_RAW),
- 0xDB: (4, ">I", TYPE_RAW),
- 0xDC: (2, ">H", TYPE_ARRAY),
- 0xDD: (4, ">I", TYPE_ARRAY),
- 0xDE: (2, ">H", TYPE_MAP),
- 0xDF: (4, ">I", TYPE_MAP),
-}
-
-
-class Unpacker(object):
- """Streaming unpacker.
-
- Arguments:
-
- :param file_like:
- File-like object having `.read(n)` method.
- If specified, unpacker reads serialized data from it and :meth:`feed()` is not usable.
-
- :param int read_size:
- Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)
-
- :param bool use_list:
- If true, unpack msgpack array to Python list.
- Otherwise, unpack to Python tuple. (default: True)
-
- :param bool raw:
- If true, unpack msgpack raw to Python bytes.
- Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).
-
- :param int timestamp:
- Control how timestamp type is unpacked:
-
- 0 - Timestamp
- 1 - float (Seconds from the EPOCH)
- 2 - int (Nanoseconds from the EPOCH)
- 3 - datetime.datetime (UTC). Python 2 is not supported.
-
- :param bool strict_map_key:
- If true (default), only str or bytes are accepted for map (dict) keys.
-
- :param callable object_hook:
- When specified, it should be callable.
- Unpacker calls it with a dict argument after unpacking msgpack map.
- (See also simplejson)
-
- :param callable object_pairs_hook:
- When specified, it should be callable.
- Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
- (See also simplejson)
-
- :param str unicode_errors:
- The error handler for decoding unicode. (default: 'strict')
- This option should be used only when you have msgpack data which
- contains invalid UTF-8 string.
-
- :param int max_buffer_size:
- Limits size of data waiting unpacked. 0 means 2**32-1.
- The default value is 100*1024*1024 (100MiB).
- Raises `BufferFull` exception when it is insufficient.
- You should set this parameter when unpacking data from untrusted source.
-
- :param int max_str_len:
- Deprecated, use *max_buffer_size* instead.
- Limits max length of str. (default: max_buffer_size)
-
- :param int max_bin_len:
- Deprecated, use *max_buffer_size* instead.
- Limits max length of bin. (default: max_buffer_size)
-
- :param int max_array_len:
- Limits max length of array.
- (default: max_buffer_size)
-
- :param int max_map_len:
- Limits max length of map.
- (default: max_buffer_size//2)
-
- :param int max_ext_len:
- Deprecated, use *max_buffer_size* instead.
- Limits max size of ext type. (default: max_buffer_size)
-
- Example of streaming deserialize from file-like object::
-
- unpacker = Unpacker(file_like)
- for o in unpacker:
- process(o)
-
- Example of streaming deserialize from socket::
-
- unpacker = Unpacker()
- while True:
- buf = sock.recv(1024**2)
- if not buf:
- break
- unpacker.feed(buf)
- for o in unpacker:
- process(o)
-
- Raises ``ExtraData`` when *packed* contains extra bytes.
- Raises ``OutOfData`` when *packed* is incomplete.
- Raises ``FormatError`` when *packed* is not valid msgpack.
-    Raises ``StackError`` when *packed* contains too deeply nested data.
- Other exceptions can be raised during unpacking.
- """
-
- def __init__(
- self,
- file_like=None,
- read_size=0,
- use_list=True,
- raw=False,
- timestamp=0,
- strict_map_key=True,
- object_hook=None,
- object_pairs_hook=None,
- list_hook=None,
- unicode_errors=None,
- max_buffer_size=100 * 1024 * 1024,
- ext_hook=ExtType,
- max_str_len=-1,
- max_bin_len=-1,
- max_array_len=-1,
- max_map_len=-1,
- max_ext_len=-1,
- ):
- if unicode_errors is None:
- unicode_errors = "strict"
-
- if file_like is None:
- self._feeding = True
- else:
- if not callable(file_like.read):
- raise TypeError("`file_like.read` must be callable")
- self.file_like = file_like
- self._feeding = False
-
- #: array of bytes fed.
- self._buffer = bytearray()
-        #: Which position we are currently reading
- self._buff_i = 0
-
- # When Unpacker is used as an iterable, between the calls to next(),
- # the buffer is not "consumed" completely, for efficiency sake.
- # Instead, it is done sloppily. To make sure we raise BufferFull at
- # the correct moments, we have to keep track of how sloppy we were.
- # Furthermore, when the buffer is incomplete (that is: in the case
- # we raise an OutOfData) we need to rollback the buffer to the correct
- # state, which _buf_checkpoint records.
- self._buf_checkpoint = 0
-
- if not max_buffer_size:
- max_buffer_size = 2 ** 31 - 1
- if max_str_len == -1:
- max_str_len = max_buffer_size
- if max_bin_len == -1:
- max_bin_len = max_buffer_size
- if max_array_len == -1:
- max_array_len = max_buffer_size
- if max_map_len == -1:
- max_map_len = max_buffer_size // 2
- if max_ext_len == -1:
- max_ext_len = max_buffer_size
-
- self._max_buffer_size = max_buffer_size
- if read_size > self._max_buffer_size:
- raise ValueError("read_size must be smaller than max_buffer_size")
- self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
- self._raw = bool(raw)
- self._strict_map_key = bool(strict_map_key)
- self._unicode_errors = unicode_errors
- self._use_list = use_list
- if not (0 <= timestamp <= 3):
- raise ValueError("timestamp must be 0..3")
- self._timestamp = timestamp
- self._list_hook = list_hook
- self._object_hook = object_hook
- self._object_pairs_hook = object_pairs_hook
- self._ext_hook = ext_hook
- self._max_str_len = max_str_len
- self._max_bin_len = max_bin_len
- self._max_array_len = max_array_len
- self._max_map_len = max_map_len
- self._max_ext_len = max_ext_len
- self._stream_offset = 0
-
- if list_hook is not None and not callable(list_hook):
- raise TypeError("`list_hook` is not callable")
- if object_hook is not None and not callable(object_hook):
- raise TypeError("`object_hook` is not callable")
- if object_pairs_hook is not None and not callable(object_pairs_hook):
- raise TypeError("`object_pairs_hook` is not callable")
- if object_hook is not None and object_pairs_hook is not None:
- raise TypeError(
- "object_pairs_hook and object_hook are mutually " "exclusive"
- )
- if not callable(ext_hook):
- raise TypeError("`ext_hook` is not callable")
-
- def feed(self, next_bytes):
- assert self._feeding
- view = _get_data_from_buffer(next_bytes)
- if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
- raise BufferFull
-
- # Strip buffer before checkpoint before reading file.
- if self._buf_checkpoint > 0:
- del self._buffer[: self._buf_checkpoint]
- self._buff_i -= self._buf_checkpoint
- self._buf_checkpoint = 0
-
- # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
- self._buffer.extend(view)
-
- def _consume(self):
- """Gets rid of the used parts of the buffer."""
- self._stream_offset += self._buff_i - self._buf_checkpoint
- self._buf_checkpoint = self._buff_i
-
- def _got_extradata(self):
- return self._buff_i < len(self._buffer)
-
- def _get_extradata(self):
- return self._buffer[self._buff_i :]
-
- def read_bytes(self, n):
- ret = self._read(n, raise_outofdata=False)
- self._consume()
- return ret
-
- def _read(self, n, raise_outofdata=True):
- # (int) -> bytearray
- self._reserve(n, raise_outofdata=raise_outofdata)
- i = self._buff_i
- ret = self._buffer[i : i + n]
- self._buff_i = i + len(ret)
- return ret
-
- def _reserve(self, n, raise_outofdata=True):
- remain_bytes = len(self._buffer) - self._buff_i - n
-
- # Fast path: buffer has n bytes already
- if remain_bytes >= 0:
- return
-
- if self._feeding:
- self._buff_i = self._buf_checkpoint
- raise OutOfData
-
- # Strip buffer before checkpoint before reading file.
- if self._buf_checkpoint > 0:
- del self._buffer[: self._buf_checkpoint]
- self._buff_i -= self._buf_checkpoint
- self._buf_checkpoint = 0
-
- # Read from file
- remain_bytes = -remain_bytes
- while remain_bytes > 0:
- to_read_bytes = max(self._read_size, remain_bytes)
- read_data = self.file_like.read(to_read_bytes)
- if not read_data:
- break
- assert isinstance(read_data, bytes)
- self._buffer += read_data
- remain_bytes -= len(read_data)
-
- if len(self._buffer) < n + self._buff_i and raise_outofdata:
- self._buff_i = 0 # rollback
- raise OutOfData
-
- def _read_header(self):
- typ = TYPE_IMMEDIATE
- n = 0
- obj = None
- self._reserve(1)
- b = self._buffer[self._buff_i]
- self._buff_i += 1
- if b & 0b10000000 == 0:
- obj = b
- elif b & 0b11100000 == 0b11100000:
- obj = -1 - (b ^ 0xFF)
- elif b & 0b11100000 == 0b10100000:
- n = b & 0b00011111
- typ = TYPE_RAW
- if n > self._max_str_len:
- raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len))
- obj = self._read(n)
- elif b & 0b11110000 == 0b10010000:
- n = b & 0b00001111
- typ = TYPE_ARRAY
- if n > self._max_array_len:
- raise ValueError(
- "%s exceeds max_array_len(%s)" % (n, self._max_array_len)
- )
- elif b & 0b11110000 == 0b10000000:
- n = b & 0b00001111
- typ = TYPE_MAP
- if n > self._max_map_len:
- raise ValueError("%s exceeds max_map_len(%s)" % (n, self._max_map_len))
- elif b == 0xC0:
- obj = None
- elif b == 0xC2:
- obj = False
- elif b == 0xC3:
- obj = True
- elif 0xC4 <= b <= 0xC6:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- self._reserve(size)
- if len(fmt) > 0:
- n = _unpack_from(fmt, self._buffer, self._buff_i)[0]
- else:
- n = self._buffer[self._buff_i]
- self._buff_i += size
- if n > self._max_bin_len:
- raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len))
- obj = self._read(n)
- elif 0xC7 <= b <= 0xC9:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- self._reserve(size)
- L, n = _unpack_from(fmt, self._buffer, self._buff_i)
- self._buff_i += size
- if L > self._max_ext_len:
- raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len))
- obj = self._read(L)
- elif 0xCA <= b <= 0xD3:
- size, fmt = _MSGPACK_HEADERS[b]
- self._reserve(size)
- if len(fmt) > 0:
- obj = _unpack_from(fmt, self._buffer, self._buff_i)[0]
- else:
- obj = self._buffer[self._buff_i]
- self._buff_i += size
- elif 0xD4 <= b <= 0xD8:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- if self._max_ext_len < size:
- raise ValueError(
- "%s exceeds max_ext_len(%s)" % (size, self._max_ext_len)
- )
- self._reserve(size + 1)
- n, obj = _unpack_from(fmt, self._buffer, self._buff_i)
- self._buff_i += size + 1
- elif 0xD9 <= b <= 0xDB:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- self._reserve(size)
- if len(fmt) > 0:
- (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
- else:
- n = self._buffer[self._buff_i]
- self._buff_i += size
- if n > self._max_str_len:
- raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len))
- obj = self._read(n)
- elif 0xDC <= b <= 0xDD:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- self._reserve(size)
- (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
- self._buff_i += size
- if n > self._max_array_len:
- raise ValueError(
- "%s exceeds max_array_len(%s)" % (n, self._max_array_len)
- )
- elif 0xDE <= b <= 0xDF:
- size, fmt, typ = _MSGPACK_HEADERS[b]
- self._reserve(size)
- (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
- self._buff_i += size
- if n > self._max_map_len:
- raise ValueError("%s exceeds max_map_len(%s)" % (n, self._max_map_len))
- else:
- raise FormatError("Unknown header: 0x%x" % b)
- return typ, n, obj
-
- def _unpack(self, execute=EX_CONSTRUCT):
- typ, n, obj = self._read_header()
-
- if execute == EX_READ_ARRAY_HEADER:
- if typ != TYPE_ARRAY:
- raise ValueError("Expected array")
- return n
- if execute == EX_READ_MAP_HEADER:
- if typ != TYPE_MAP:
- raise ValueError("Expected map")
- return n
- # TODO should we eliminate the recursion?
- if typ == TYPE_ARRAY:
- if execute == EX_SKIP:
- for i in xrange(n):
- # TODO check whether we need to call `list_hook`
- self._unpack(EX_SKIP)
- return
- ret = newlist_hint(n)
- for i in xrange(n):
- ret.append(self._unpack(EX_CONSTRUCT))
- if self._list_hook is not None:
- ret = self._list_hook(ret)
- # TODO is the interaction between `list_hook` and `use_list` ok?
- return ret if self._use_list else tuple(ret)
- if typ == TYPE_MAP:
- if execute == EX_SKIP:
- for i in xrange(n):
- # TODO check whether we need to call hooks
- self._unpack(EX_SKIP)
- self._unpack(EX_SKIP)
- return
- if self._object_pairs_hook is not None:
- ret = self._object_pairs_hook(
- (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT))
- for _ in xrange(n)
- )
- else:
- ret = {}
- for _ in xrange(n):
- key = self._unpack(EX_CONSTRUCT)
- if self._strict_map_key and type(key) not in (unicode, bytes):
- raise ValueError(
- "%s is not allowed for map key" % str(type(key))
- )
- if not PY2 and type(key) is str:
- key = sys.intern(key)
- ret[key] = self._unpack(EX_CONSTRUCT)
- if self._object_hook is not None:
- ret = self._object_hook(ret)
- return ret
- if execute == EX_SKIP:
- return
- if typ == TYPE_RAW:
- if self._raw:
- obj = bytes(obj)
- else:
- obj = obj.decode("utf_8", self._unicode_errors)
- return obj
- if typ == TYPE_BIN:
- return bytes(obj)
- if typ == TYPE_EXT:
- if n == -1: # timestamp
- ts = Timestamp.from_bytes(bytes(obj))
- if self._timestamp == 1:
- return ts.to_unix()
- elif self._timestamp == 2:
- return ts.to_unix_nano()
- elif self._timestamp == 3:
- return ts.to_datetime()
- else:
- return ts
- else:
- return self._ext_hook(n, bytes(obj))
- assert typ == TYPE_IMMEDIATE
- return obj
-
- def __iter__(self):
- return self
-
- def __next__(self):
- try:
- ret = self._unpack(EX_CONSTRUCT)
- self._consume()
- return ret
- except OutOfData:
- self._consume()
- raise StopIteration
- except RecursionError:
- raise StackError
-
- next = __next__
-
- def skip(self):
- self._unpack(EX_SKIP)
- self._consume()
-
- def unpack(self):
- try:
- ret = self._unpack(EX_CONSTRUCT)
- except RecursionError:
- raise StackError
- self._consume()
- return ret
-
- def read_array_header(self):
- ret = self._unpack(EX_READ_ARRAY_HEADER)
- self._consume()
- return ret
-
- def read_map_header(self):
- ret = self._unpack(EX_READ_MAP_HEADER)
- self._consume()
- return ret
-
- def tell(self):
- return self._stream_offset
-
-
-class Packer(object):
- """
- MessagePack Packer
-
- Usage::
-
- packer = Packer()
- astream.write(packer.pack(a))
- astream.write(packer.pack(b))
-
- Packer's constructor has some keyword arguments:
-
- :param callable default:
- Convert user type to builtin type that Packer supports.
- See also simplejson's document.
-
- :param bool use_single_float:
- Use single precision float type for float. (default: False)
-
- :param bool autoreset:
- Reset buffer after each pack and return its content as `bytes`. (default: True).
-        If set to false, use `bytes()` to get the content and `.reset()` to clear the buffer.
-
- :param bool use_bin_type:
- Use bin type introduced in msgpack spec 2.0 for bytes.
- It also enables str8 type for unicode. (default: True)
-
- :param bool strict_types:
- If set to true, types will be checked to be exact. Derived classes
- from serializable types will not be serialized and will be
- treated as unsupported type and forwarded to default.
- Additionally tuples will not be serialized as lists.
- This is useful when trying to implement accurate serialization
- for python types.
-
- :param bool datetime:
- If set to true, datetime with tzinfo is packed into Timestamp type.
- Note that the tzinfo is stripped in the timestamp.
- You can get UTC datetime with `timestamp=3` option of the Unpacker.
- (Python 2 is not supported).
-
- :param str unicode_errors:
- The error handler for encoding unicode. (default: 'strict')
- DO NOT USE THIS!! This option is kept for very specific usage.
-
- """
-
- def __init__(
- self,
- default=None,
- use_single_float=False,
- autoreset=True,
- use_bin_type=True,
- strict_types=False,
- datetime=False,
- unicode_errors=None,
- ):
- self._strict_types = strict_types
- self._use_float = use_single_float
- self._autoreset = autoreset
- self._use_bin_type = use_bin_type
- self._buffer = StringIO()
- if PY2 and datetime:
- raise ValueError("datetime is not supported in Python 2")
- self._datetime = bool(datetime)
- self._unicode_errors = unicode_errors or "strict"
- if default is not None:
- if not callable(default):
- raise TypeError("default must be callable")
- self._default = default
-
- def _pack(
- self,
- obj,
- nest_limit=DEFAULT_RECURSE_LIMIT,
- check=isinstance,
- check_type_strict=_check_type_strict,
- ):
- default_used = False
- if self._strict_types:
- check = check_type_strict
- list_types = list
- else:
- list_types = (list, tuple)
- while True:
- if nest_limit < 0:
- raise ValueError("recursion limit exceeded")
- if obj is None:
- return self._buffer.write(b"\xc0")
- if check(obj, bool):
- if obj:
- return self._buffer.write(b"\xc3")
- return self._buffer.write(b"\xc2")
- if check(obj, int_types):
- if 0 <= obj < 0x80:
- return self._buffer.write(struct.pack("B", obj))
- if -0x20 <= obj < 0:
- return self._buffer.write(struct.pack("b", obj))
- if 0x80 <= obj <= 0xFF:
- return self._buffer.write(struct.pack("BB", 0xCC, obj))
- if -0x80 <= obj < 0:
- return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
- if 0xFF < obj <= 0xFFFF:
- return self._buffer.write(struct.pack(">BH", 0xCD, obj))
- if -0x8000 <= obj < -0x80:
- return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
- if 0xFFFF < obj <= 0xFFFFFFFF:
- return self._buffer.write(struct.pack(">BI", 0xCE, obj))
- if -0x80000000 <= obj < -0x8000:
- return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
- if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
- return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
- if -0x8000000000000000 <= obj < -0x80000000:
- return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
- if not default_used and self._default is not None:
- obj = self._default(obj)
- default_used = True
- continue
- raise OverflowError("Integer value out of range")
- if check(obj, (bytes, bytearray)):
- n = len(obj)
- if n >= 2 ** 32:
- raise ValueError("%s is too large" % type(obj).__name__)
- self._pack_bin_header(n)
- return self._buffer.write(obj)
- if check(obj, unicode):
- obj = obj.encode("utf-8", self._unicode_errors)
- n = len(obj)
- if n >= 2 ** 32:
- raise ValueError("String is too large")
- self._pack_raw_header(n)
- return self._buffer.write(obj)
- if check(obj, memoryview):
- n = len(obj) * obj.itemsize
- if n >= 2 ** 32:
- raise ValueError("Memoryview is too large")
- self._pack_bin_header(n)
- return self._buffer.write(obj)
- if check(obj, float):
- if self._use_float:
- return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
- return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
- if check(obj, (ExtType, Timestamp)):
- if check(obj, Timestamp):
- code = -1
- data = obj.to_bytes()
- else:
- code = obj.code
- data = obj.data
- assert isinstance(code, int)
- assert isinstance(data, bytes)
- L = len(data)
- if L == 1:
- self._buffer.write(b"\xd4")
- elif L == 2:
- self._buffer.write(b"\xd5")
- elif L == 4:
- self._buffer.write(b"\xd6")
- elif L == 8:
- self._buffer.write(b"\xd7")
- elif L == 16:
- self._buffer.write(b"\xd8")
- elif L <= 0xFF:
- self._buffer.write(struct.pack(">BB", 0xC7, L))
- elif L <= 0xFFFF:
- self._buffer.write(struct.pack(">BH", 0xC8, L))
- else:
- self._buffer.write(struct.pack(">BI", 0xC9, L))
- self._buffer.write(struct.pack("b", code))
- self._buffer.write(data)
- return
- if check(obj, list_types):
- n = len(obj)
- self._pack_array_header(n)
- for i in xrange(n):
- self._pack(obj[i], nest_limit - 1)
- return
- if check(obj, dict):
- return self._pack_map_pairs(
- len(obj), dict_iteritems(obj), nest_limit - 1
- )
-
- if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None:
- obj = Timestamp.from_datetime(obj)
- default_used = 1
- continue
-
- if not default_used and self._default is not None:
- obj = self._default(obj)
- default_used = 1
- continue
-
- if self._datetime and check(obj, _DateTime):
- raise ValueError("Cannot serialize %r where tzinfo=None" % (obj,))
-
- raise TypeError("Cannot serialize %r" % (obj,))
-
- def pack(self, obj):
- try:
- self._pack(obj)
- except:
- self._buffer = StringIO() # force reset
- raise
- if self._autoreset:
- ret = self._buffer.getvalue()
- self._buffer = StringIO()
- return ret
-
- def pack_map_pairs(self, pairs):
- self._pack_map_pairs(len(pairs), pairs)
- if self._autoreset:
- ret = self._buffer.getvalue()
- self._buffer = StringIO()
- return ret
-
- def pack_array_header(self, n):
- if n >= 2 ** 32:
- raise ValueError
- self._pack_array_header(n)
- if self._autoreset:
- ret = self._buffer.getvalue()
- self._buffer = StringIO()
- return ret
-
- def pack_map_header(self, n):
- if n >= 2 ** 32:
- raise ValueError
- self._pack_map_header(n)
- if self._autoreset:
- ret = self._buffer.getvalue()
- self._buffer = StringIO()
- return ret
-
- def pack_ext_type(self, typecode, data):
- if not isinstance(typecode, int):
- raise TypeError("typecode must have int type.")
- if not 0 <= typecode <= 127:
- raise ValueError("typecode should be 0-127")
- if not isinstance(data, bytes):
- raise TypeError("data must have bytes type")
- L = len(data)
- if L > 0xFFFFFFFF:
- raise ValueError("Too large data")
- if L == 1:
- self._buffer.write(b"\xd4")
- elif L == 2:
- self._buffer.write(b"\xd5")
- elif L == 4:
- self._buffer.write(b"\xd6")
- elif L == 8:
- self._buffer.write(b"\xd7")
- elif L == 16:
- self._buffer.write(b"\xd8")
- elif L <= 0xFF:
- self._buffer.write(b"\xc7" + struct.pack("B", L))
- elif L <= 0xFFFF:
- self._buffer.write(b"\xc8" + struct.pack(">H", L))
- else:
- self._buffer.write(b"\xc9" + struct.pack(">I", L))
- self._buffer.write(struct.pack("B", typecode))
- self._buffer.write(data)
-
- def _pack_array_header(self, n):
- if n <= 0x0F:
- return self._buffer.write(struct.pack("B", 0x90 + n))
- if n <= 0xFFFF:
- return self._buffer.write(struct.pack(">BH", 0xDC, n))
- if n <= 0xFFFFFFFF:
- return self._buffer.write(struct.pack(">BI", 0xDD, n))
- raise ValueError("Array is too large")
-
- def _pack_map_header(self, n):
- if n <= 0x0F:
- return self._buffer.write(struct.pack("B", 0x80 + n))
- if n <= 0xFFFF:
- return self._buffer.write(struct.pack(">BH", 0xDE, n))
- if n <= 0xFFFFFFFF:
- return self._buffer.write(struct.pack(">BI", 0xDF, n))
- raise ValueError("Dict is too large")
-
- def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
- self._pack_map_header(n)
- for (k, v) in pairs:
- self._pack(k, nest_limit - 1)
- self._pack(v, nest_limit - 1)
-
- def _pack_raw_header(self, n):
- if n <= 0x1F:
- self._buffer.write(struct.pack("B", 0xA0 + n))
- elif self._use_bin_type and n <= 0xFF:
- self._buffer.write(struct.pack(">BB", 0xD9, n))
- elif n <= 0xFFFF:
- self._buffer.write(struct.pack(">BH", 0xDA, n))
- elif n <= 0xFFFFFFFF:
- self._buffer.write(struct.pack(">BI", 0xDB, n))
- else:
- raise ValueError("Raw is too large")
-
- def _pack_bin_header(self, n):
- if not self._use_bin_type:
- return self._pack_raw_header(n)
- elif n <= 0xFF:
- return self._buffer.write(struct.pack(">BB", 0xC4, n))
- elif n <= 0xFFFF:
- return self._buffer.write(struct.pack(">BH", 0xC5, n))
- elif n <= 0xFFFFFFFF:
- return self._buffer.write(struct.pack(">BI", 0xC6, n))
- else:
- raise ValueError("Bin is too large")
-
- def bytes(self):
- """Return internal buffer contents as bytes object"""
- return self._buffer.getvalue()
-
- def reset(self):
- """Reset internal buffer.
-
- This method is useful only when autoreset=False.
- """
- self._buffer = StringIO()
-
- def getbuffer(self):
- """Return view of internal buffer."""
- if USING_STRINGBUILDER or PY2:
- return memoryview(self.bytes())
- else:
- return self._buffer.getbuffer()
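A hedged usage sketch for the `Packer`/`Unpacker` API documented in the docstrings above. Importing the top-level `msgpack` package is an assumption here; pip's vendored copy is an internal detail and is not meant to be imported directly.

```python
import msgpack

packer = msgpack.Packer()
payload = packer.pack({"id": 1, "tags": ["a", "b"]})

# Streaming unpacking: feed bytes in, iterate objects out.
unpacker = msgpack.Unpacker(raw=False)
unpacker.feed(payload)
for obj in unpacker:
    print(obj)  # {'id': 1, 'tags': ['a', 'b']}

# One-shot helper for already-complete buffers.
assert msgpack.unpackb(payload, raw=False) == {"id": 1, "tags": ["a", "b"]}
```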
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/smithy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/smithy.py
deleted file mode 100644
index 3f48bfa455d1f76c95105fca68f94c4b29ed95f7..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/smithy.py
+++ /dev/null
@@ -1,78 +0,0 @@
-"""
- pygments.lexers.smithy
- ~~~~~~~~~~~~~~~~~~~~~~
-
- Lexers for the Smithy IDL.
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.lexer import RegexLexer, bygroups, words
-from pygments.token import Text, Comment, Keyword, Name, String, \
- Number, Whitespace, Punctuation
-
-__all__ = ['SmithyLexer']
-
-
-class SmithyLexer(RegexLexer):
- """
- For Smithy IDL
-
- .. versionadded:: 2.10
- """
- name = 'Smithy'
- url = 'https://awslabs.github.io/smithy/'
- filenames = ['*.smithy']
- aliases = ['smithy']
-
- unquoted = r'[A-Za-z0-9_\.#$-]+'
- identifier = r"[A-Za-z0-9_\.#$-]+"
-
- simple_shapes = (
- 'use', 'byte', 'short', 'integer', 'long', 'float', 'document',
- 'double', 'bigInteger', 'bigDecimal', 'boolean', 'blob', 'string',
- 'timestamp',
- )
-
- aggregate_shapes = (
- 'apply', 'list', 'map', 'set', 'structure', 'union', 'resource',
- 'operation', 'service', 'trait'
- )
-
- tokens = {
- 'root': [
- (r'///.*$', Comment.Multiline),
- (r'//.*$', Comment),
- (r'@[0-9a-zA-Z\.#-]*', Name.Decorator),
- (r'(=)', Name.Decorator),
- (r'^(\$version)(:)(.+)',
- bygroups(Keyword.Declaration, Name.Decorator, Name.Class)),
- (r'^(namespace)(\s+' + identifier + r')\b',
- bygroups(Keyword.Declaration, Name.Class)),
- (words(simple_shapes,
- prefix=r'^', suffix=r'(\s+' + identifier + r')\b'),
- bygroups(Keyword.Declaration, Name.Class)),
- (words(aggregate_shapes,
- prefix=r'^', suffix=r'(\s+' + identifier + r')'),
- bygroups(Keyword.Declaration, Name.Class)),
- (r'^(metadata)(\s+)((?:\S+)|(?:\"[^"]+\"))(\s*)(=)',
- bygroups(Keyword.Declaration, Whitespace, Name.Class,
- Whitespace, Name.Decorator)),
- (r"(true|false|null)", Keyword.Constant),
- (r"(-?(?:0|[1-9]\d*)(?:\.\d+)?(?:[eE][+-]?\d+)?)", Number),
- (identifier + ":", Name.Label),
- (identifier, Name.Variable.Class),
- (r'\[', Text, "#push"),
- (r'\]', Text, "#pop"),
- (r'\(', Text, "#push"),
- (r'\)', Text, "#pop"),
- (r'\{', Text, "#push"),
- (r'\}', Text, "#pop"),
- (r'"{3}(\\\\|\n|\\")*"{3}', String.Doc),
- (r'"(\\\\|\n|\\"|[^"])*"', String.Double),
- (r"'(\\\\|\n|\\'|[^'])*'", String.Single),
- (r'[:,]+', Punctuation),
- (r'\s+', Whitespace),
- ]
- }
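A hedged usage sketch for the `SmithyLexer` defined above, going through the standard Pygments entry points (assumes a normal Pygments installation where the lexer is registered under its `smithy` alias):

```python
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import get_lexer_by_name

source = '''$version: "2"
namespace example.weather

structure Forecast {
    chanceOfRain: Float
}
'''

lexer = get_lexer_by_name("smithy")
print(highlight(source, lexer, TerminalFormatter()))
```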
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/version.py
deleted file mode 100644
index c33bebaed26aeead3a97b48dcd4f34308ca3976e..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/version.py
+++ /dev/null
@@ -1,347 +0,0 @@
-#
-# distutils/version.py
-#
-# Implements multiple version numbering conventions for the
-# Python Module Distribution Utilities.
-#
-# $Id$
-#
-
-"""Provides classes to represent module version numbers (one class for
-each style of version numbering). There are currently two such classes
-implemented: StrictVersion and LooseVersion.
-
-Every version number class implements the following interface:
- * the 'parse' method takes a string and parses it to some internal
- representation; if the string is an invalid version number,
- 'parse' raises a ValueError exception
- * the class constructor takes an optional string argument which,
- if supplied, is passed to 'parse'
- * __str__ reconstructs the string that was passed to 'parse' (or
- an equivalent string -- ie. one that will generate an equivalent
- version number instance)
- * __repr__ generates Python code to recreate the version number instance
- * _cmp compares the current instance with either another instance
- of the same class or a string (which will be parsed to an instance
- of the same class, thus must follow the same rules)
-"""
-
-import re
-
-class Version:
- """Abstract base class for version numbering classes. Just provides
- constructor (__init__) and reproducer (__repr__), because those
- seem to be the same for all version numbering classes; and route
- rich comparisons to _cmp.
- """
-
- def __init__ (self, vstring=None):
- if vstring:
- self.parse(vstring)
-
- def __repr__ (self):
- return "%s ('%s')" % (self.__class__.__name__, str(self))
-
- def __eq__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c == 0
-
- def __lt__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c < 0
-
- def __le__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c <= 0
-
- def __gt__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c > 0
-
- def __ge__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c >= 0
-
-
-# Interface for version-number classes -- must be implemented
-# by the following classes (the concrete ones -- Version should
-# be treated as an abstract class).
-# __init__ (string) - create and take same action as 'parse'
-# (string parameter is optional)
-# parse (string) - convert a string representation to whatever
-# internal representation is appropriate for
-# this style of version numbering
-# __str__ (self) - convert back to a string; should be very similar
-# (if not identical to) the string supplied to parse
-# __repr__ (self) - generate Python code to recreate
-# the instance
-# _cmp (self, other) - compare two version numbers ('other' may
-# be an unparsed version string, or another
-# instance of your version class)
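-
-# For example, with the concrete classes defined below, the base class above is
-# what makes the rich comparison operators work (illustrative sketch):
-#
-#   StrictVersion('1.0') < StrictVersion('1.1')    # True, routed through _cmp()
-#   LooseVersion('1.5.1') == '1.5.1'               # True; the string is parsed first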
-
-
-class StrictVersion (Version):
-
- """Version numbering for anal retentives and software idealists.
- Implements the standard interface for version number classes as
- described above. A version number consists of two or three
- dot-separated numeric components, with an optional "pre-release" tag
- on the end. The pre-release tag consists of the letter 'a' or 'b'
- followed by a number. If the numeric components of two version
- numbers are equal, then one with a pre-release tag will always
- be deemed earlier (lesser) than one without.
-
- The following are valid version numbers (shown in the order that
- would be obtained by sorting according to the supplied cmp function):
-
- 0.4 0.4.0 (these two are equivalent)
- 0.4.1
- 0.5a1
- 0.5b3
- 0.5
- 0.9.6
- 1.0
- 1.0.4a3
- 1.0.4b1
- 1.0.4
-
- The following are examples of invalid version numbers:
-
- 1
- 2.7.2.2
- 1.3.a4
- 1.3pl1
- 1.3c4
-
- The rationale for this version numbering system will be explained
- in the distutils documentation.
- """
-
- version_re = re.compile(r'^(\d+) \. (\d+) (\. (\d+))? ([ab](\d+))?$',
- re.VERBOSE | re.ASCII)
-
-
- def parse (self, vstring):
- match = self.version_re.match(vstring)
- if not match:
- raise ValueError("invalid version number '%s'" % vstring)
-
- (major, minor, patch, prerelease, prerelease_num) = \
- match.group(1, 2, 4, 5, 6)
-
- if patch:
- self.version = tuple(map(int, [major, minor, patch]))
- else:
- self.version = tuple(map(int, [major, minor])) + (0,)
-
- if prerelease:
- self.prerelease = (prerelease[0], int(prerelease_num))
- else:
- self.prerelease = None
-
-
- def __str__ (self):
-
- if self.version[2] == 0:
- vstring = '.'.join(map(str, self.version[0:2]))
- else:
- vstring = '.'.join(map(str, self.version))
-
- if self.prerelease:
- vstring = vstring + self.prerelease[0] + str(self.prerelease[1])
-
- return vstring
-
-
- def _cmp (self, other):
- if isinstance(other, str):
- other = StrictVersion(other)
- elif not isinstance(other, StrictVersion):
- return NotImplemented
-
- if self.version != other.version:
- # numeric versions don't match
- # prerelease stuff doesn't matter
- if self.version < other.version:
- return -1
- else:
- return 1
-
- # have to compare prerelease
- # case 1: neither has prerelease; they're equal
- # case 2: self has prerelease, other doesn't; other is greater
- # case 3: self doesn't have prerelease, other does: self is greater
- # case 4: both have prerelease: must compare them!
-
- if (not self.prerelease and not other.prerelease):
- return 0
- elif (self.prerelease and not other.prerelease):
- return -1
- elif (not self.prerelease and other.prerelease):
- return 1
- elif (self.prerelease and other.prerelease):
- if self.prerelease == other.prerelease:
- return 0
- elif self.prerelease < other.prerelease:
- return -1
- else:
- return 1
- else:
- assert False, "never get here"
-
-# end class StrictVersion
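-
-# Illustrative doctest-style sketch of the behavior described above:
-#
-#   >>> StrictVersion('0.4') == StrictVersion('0.4.0')
-#   True
-#   >>> StrictVersion('1.0.4a3') < StrictVersion('1.0.4')
-#   True
-#   >>> StrictVersion('1.3pl1')
-#   Traceback (most recent call last):
-#       ...
-#   ValueError: invalid version number '1.3pl1'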
-
-
-# The rules according to Greg Stein:
-# 1) a version number has 1 or more numbers separated by a period or by
-# sequences of letters. If only periods, then these are compared
-# left-to-right to determine an ordering.
-# 2) sequences of letters are part of the tuple for comparison and are
-# compared lexicographically
-# 3) recognize the numeric components may have leading zeroes
-#
-# The LooseVersion class below implements these rules: a version number
-# string is split up into a tuple of integer and string components, and
-# comparison is a simple tuple comparison. This means that version
-# numbers behave in a predictable and obvious way, but a way that might
-# not necessarily be how people *want* version numbers to behave. There
-# wouldn't be a problem if people could stick to purely numeric version
-# numbers: just split on period and compare the numbers as tuples.
-# However, people insist on putting letters into their version numbers;
-# the most common purpose seems to be:
-# - indicating a "pre-release" version
-# ('alpha', 'beta', 'a', 'b', 'pre', 'p')
-# - indicating a post-release patch ('p', 'pl', 'patch')
-# but of course this can't cover all version number schemes, and there's
-# no way to know what a programmer means without asking him.
-#
-# The problem is what to do with letters (and other non-numeric
-# characters) in a version number. The current implementation does the
-# obvious and predictable thing: keep them as strings and compare
-# lexically within a tuple comparison. This has the desired effect if
-# an appended letter sequence implies something "post-release":
-# eg. "0.99" < "0.99pl14" < "1.0", and "5.001" < "5.001m" < "5.002".
-#
-# However, if letters in a version number imply a pre-release version,
-# the "obvious" thing isn't correct. Eg. you would expect that
-# "1.5.1" < "1.5.2a2" < "1.5.2", but under the tuple/lexical comparison
-# implemented here, this just isn't so.
-#
-# Two possible solutions come to mind. The first is to tie the
-# comparison algorithm to a particular set of semantic rules, as has
-# been done in the StrictVersion class above. This works great as long
-# as everyone can go along with bondage and discipline. Hopefully a
-# (large) subset of Python module programmers will agree that the
-# particular flavour of bondage and discipline provided by StrictVersion
-# provides enough benefit to be worth using, and will submit their
-# version numbering scheme to its domination. The free-thinking
-# anarchists in the lot will never give in, though, and something needs
-# to be done to accommodate them.
-#
-# Perhaps a "moderately strict" version class could be implemented that
-# lets almost anything slide (syntactically), and makes some heuristic
-# assumptions about non-digits in version number strings. This could
-# sink into special-case-hell, though; if I was as talented and
-# idiosyncratic as Larry Wall, I'd go ahead and implement a class that
-# somehow knows that "1.2.1" < "1.2.2a2" < "1.2.2" < "1.2.2pl3", and is
-# just as happy dealing with things like "2g6" and "1.13++". I don't
-# think I'm smart enough to do it right though.
-#
-# In any case, I've coded the test suite for this module (see
-# ../test/test_version.py) specifically to fail on things like comparing
-# "1.2a2" and "1.2". That's not because the *code* is doing anything
-# wrong, it's because the simple, obvious design doesn't match my
-# complicated, hairy expectations for real-world version numbers. It
-# would be a snap to fix the test suite to say, "Yep, LooseVersion does
-# the Right Thing" (ie. the code matches the conception). But I'd rather
-# have a conception that matches common notions about version numbers.
-
-class LooseVersion (Version):
-
- """Version numbering for anarchists and software realists.
- Implements the standard interface for version number classes as
- described above. A version number consists of a series of numbers,
- separated by either periods or strings of letters. When comparing
- version numbers, the numeric components will be compared
- numerically, and the alphabetic components lexically. The following
- are all valid version numbers, in no particular order:
-
- 1.5.1
- 1.5.2b2
- 161
- 3.10a
- 8.02
- 3.4j
- 1996.07.12
- 3.2.pl0
- 3.1.1.6
- 2g6
- 11g
- 0.960923
- 2.2beta29
- 1.13++
- 5.5.kw
- 2.0b1pl0
-
- In fact, there is no such thing as an invalid version number under
- this scheme; the rules for comparison are simple and predictable,
- but may not always give the results you want (for some definition
- of "want").
- """
-
- component_re = re.compile(r'(\d+ | [a-z]+ | \.)', re.VERBOSE)
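-    # For illustration: parse() below turns '1.5.2b2' into [1, 5, 2, 'b', 2]
-    # and '3.10a' into [3, 10, 'a'].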
-
- def __init__ (self, vstring=None):
- if vstring:
- self.parse(vstring)
-
-
- def parse (self, vstring):
- # I've given up on thinking I can reconstruct the version string
- # from the parsed tuple -- so I just store the string here for
- # use by __str__
- self.vstring = vstring
- components = [x for x in self.component_re.split(vstring)
- if x and x != '.']
- for i, obj in enumerate(components):
- try:
- components[i] = int(obj)
- except ValueError:
- pass
-
- self.version = components
-
-
- def __str__ (self):
- return self.vstring
-
-
- def __repr__ (self):
- return "LooseVersion ('%s')" % str(self)
-
-
- def _cmp (self, other):
- if isinstance(other, str):
- other = LooseVersion(other)
- elif not isinstance(other, LooseVersion):
- return NotImplemented
-
- if self.version == other.version:
- return 0
- if self.version < other.version:
- return -1
- if self.version > other.version:
- return 1
-
-
-# end class LooseVersion
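-
-# Illustrative doctest-style sketch of the LooseVersion ordering, including the
-# pre-release caveat discussed in the comments above:
-#
-#   >>> LooseVersion('0.99') < LooseVersion('0.99pl14') < LooseVersion('1.0')
-#   True
-#   >>> LooseVersion('1.5.1') < LooseVersion('1.5.2a2')
-#   True
-#   >>> LooseVersion('1.5.2a2') < LooseVersion('1.5.2')   # not what one might expect
-#   False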
diff --git a/spaces/qingxu98/gpt-academic/crazy_functions/live_audio/aliyunASR.py b/spaces/qingxu98/gpt-academic/crazy_functions/live_audio/aliyunASR.py
deleted file mode 100644
index ed67fcd3fb391409d7e6aced033d46585e62a858..0000000000000000000000000000000000000000
--- a/spaces/qingxu98/gpt-academic/crazy_functions/live_audio/aliyunASR.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import time, logging, json
-
-
-class AliyunASR():
-
- def test_on_sentence_begin(self, message, *args):
- # print("test_on_sentence_begin:{}".format(message))
- pass
-
- def test_on_sentence_end(self, message, *args):
- # print("test_on_sentence_end:{}".format(message))
- message = json.loads(message)
- self.parsed_sentence = message['payload']['result']
- self.event_on_entence_end.set()
- # print(self.parsed_sentence)
-
- def test_on_start(self, message, *args):
- # print("test_on_start:{}".format(message))
- pass
-
- def test_on_error(self, message, *args):
- logging.error("on_error args=>{}".format(args))
- pass
-
- def test_on_close(self, *args):
- self.aliyun_service_ok = False
- pass
-
- def test_on_result_chg(self, message, *args):
- # print("test_on_chg:{}".format(message))
- message = json.loads(message)
- self.parsed_text = message['payload']['result']
- self.event_on_result_chg.set()
-
- def test_on_completed(self, message, *args):
- # print("on_completed:args=>{} message=>{}".format(args, message))
- pass
-
- def audio_convertion_thread(self, uuid):
-        # Capture audio in an asynchronous thread
- import nls # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
- import tempfile
- from scipy import io
- from toolbox import get_conf
- from .audio_io import change_sample_rate
- from .audio_io import RealtimeAudioDistribution
- NEW_SAMPLERATE = 16000
- rad = RealtimeAudioDistribution()
- rad.clean_up()
- temp_folder = tempfile.gettempdir()
- TOKEN, APPKEY = get_conf('ALIYUN_TOKEN', 'ALIYUN_APPKEY')
- if len(TOKEN) == 0:
- TOKEN = self.get_token()
- self.aliyun_service_ok = True
- URL="wss://nls-gateway.aliyuncs.com/ws/v1"
- sr = nls.NlsSpeechTranscriber(
- url=URL,
- token=TOKEN,
- appkey=APPKEY,
- on_sentence_begin=self.test_on_sentence_begin,
- on_sentence_end=self.test_on_sentence_end,
- on_start=self.test_on_start,
- on_result_changed=self.test_on_result_chg,
- on_completed=self.test_on_completed,
- on_error=self.test_on_error,
- on_close=self.test_on_close,
- callback_args=[uuid.hex]
- )
-
- r = sr.start(aformat="pcm",
- enable_intermediate_result=True,
- enable_punctuation_prediction=True,
- enable_inverse_text_normalization=True)
-
- while not self.stop:
- # time.sleep(self.capture_interval)
- audio = rad.read(uuid.hex)
- if audio is not None:
- # convert to pcm file
- temp_file = f'{temp_folder}/{uuid.hex}.pcm' #
- dsdata = change_sample_rate(audio, rad.rate, NEW_SAMPLERATE) # 48000 --> 16000
- io.wavfile.write(temp_file, NEW_SAMPLERATE, dsdata)
- # read pcm binary
- with open(temp_file, "rb") as f: data = f.read()
- # print('audio len:', len(audio), '\t ds len:', len(dsdata), '\t need n send:', len(data)//640)
-                slices = zip(*(iter(data),) * 640) # 640 bytes per group
- for i in slices: sr.send_audio(bytes(i))
- else:
- time.sleep(0.1)
-
- if not self.aliyun_service_ok:
- self.stop = True
- self.stop_msg = 'Aliyun音频服务异常,请检查ALIYUN_TOKEN和ALIYUN_APPKEY是否过期。'
- r = sr.stop()
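-        # Note on the 640-byte framing above: assuming 16-bit mono PCM at 16 kHz,
-        # each 640-byte slice is 320 samples, i.e. one 20 ms frame. The zip idiom
-        # is roughly equivalent to the following (both drop a trailing partial frame):
-        #   frames = [data[i:i + 640] for i in range(0, len(data) - len(data) % 640, 640)]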
-
- def get_token(self):
- from toolbox import get_conf
- import json
- from aliyunsdkcore.request import CommonRequest
- from aliyunsdkcore.client import AcsClient
- AccessKey_ID, AccessKey_secret = get_conf('ALIYUN_ACCESSKEY', 'ALIYUN_SECRET')
-
-        # Create an AcsClient instance
- client = AcsClient(
- AccessKey_ID,
- AccessKey_secret,
- "cn-shanghai"
- )
-
-        # Create the request and set its parameters.
- request = CommonRequest()
- request.set_method('POST')
- request.set_domain('nls-meta.cn-shanghai.aliyuncs.com')
- request.set_version('2019-02-28')
- request.set_action_name('CreateToken')
-
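-        # The parsing below assumes a CreateToken response of roughly this shape
-        # (field names taken from the code below; other fields omitted):
-        #   {"Token": {"Id": "<token string>", "ExpireTime": 1699999999}}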
-        token = None
-        try:
- response = client.do_action_with_exception(request)
- print(response)
- jss = json.loads(response)
- if 'Token' in jss and 'Id' in jss['Token']:
- token = jss['Token']['Id']
- expireTime = jss['Token']['ExpireTime']
- print("token = " + token)
- print("expireTime = " + str(expireTime))
- except Exception as e:
- print(e)
-
- return token
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Awarapan 1 Download 720p Movie.md b/spaces/quidiaMuxgu/Expedit-SAM/Awarapan 1 Download 720p Movie.md
deleted file mode 100644
index 0d8e5d266ebc3de071dad86361d9a81c2e6cf15b..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Awarapan 1 Download 720p Movie.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-
Awarapan 1: A Neo-Noir Action Thriller That You Can Download in 720p
-
Awarapan 1 is a 2007 Indian Hindi-language movie directed by Mohit Suri and produced by Mukesh Bhatt. It stars Emraan Hashmi as Shivam, a loyal gangster who falls in love with a girl named Reema, played by Shriya Saran. The movie is a remake of the Korean film A Bittersweet Life and explores themes of love, betrayal, redemption and vagrancy.
-
The movie was praised for its cinematography, music, performances and action sequences. It also received several awards and nominations, including the Filmfare Award for Best Editing. The movie has a cult following among fans of Emraan Hashmi and neo-noir cinema.
If you are looking for a way to download Awarapan 1 in 720p quality, you can find several options online. However, you should be careful about the sources you choose, as some of them may contain viruses, malware or illegal content. You should also respect the copyrights of the creators and distributors of the movie and avoid piracy.
-
One of the websites that claims to offer Awarapan 1 download in 720p is YTS.MX. This website provides torrent files for various movies in different languages and genres. You can download Awarapan 1 from this website by using a torrent client such as BitTorrent or uTorrent. However, you should be aware that this website is not authorized by the makers or owners of the movie and may violate their rights. You should also use a VPN service to protect your privacy and security while downloading torrents.
-
Another website that claims to offer Awarapan 1 download in 720p is OlaMovies.Cloud. This website provides direct links for various movies in different formats and qualities. You can download Awarapan 1 from this website by using a browser or a downloader such as IDM or JDownloader. However, you should be aware that this website may also contain unauthorized or illegal content and may expose you to ads or pop-ups that may harm your device or data. You should also use an ad-blocker or a malware scanner to prevent any unwanted issues while downloading movies.
-
A third website that claims to offer Awarapan 1 download in 720p is Dailymotion.Com. This website provides video streaming for various movies and shows in different languages and genres. You can watch Awarapan 1 on this website by using a browser or an app on your device. However, you should be aware that this website may not have the full movie or the best quality available and may also have ads or interruptions that may affect your viewing experience. You should also use a video downloader such as Video DownloadHelper or SaveFrom.Net to download the movie from this website.
-
-
These are some of the possible ways to download Awarapan 1 in 720p quality. However, we recommend that you watch the movie legally on platforms such as Amazon Prime Video, Netflix or YouTube where you can enjoy the movie in high definition and support the creators and distributors of the movie.
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Camtasia Studio 2019.0.8 Crack With License Key !FULL! Free Download [New].md b/spaces/quidiaMuxgu/Expedit-SAM/Camtasia Studio 2019.0.8 Crack With License Key !FULL! Free Download [New].md
deleted file mode 100644
index 251add2c4de27b3038fdc58a0cdf7a8a78cb61d7..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Camtasia Studio 2019.0.8 Crack With License Key !FULL! Free Download [New].md
+++ /dev/null
@@ -1,10 +0,0 @@
-
Camtasia Studio 2019.0.8 Crack With License Key Free Download [New]
-
-6 days ago - Fixed a crash that could occur when creating a new project. Fixed an issue where Camtasia could prompt for a license key when it has already been registered.
-Also fixed a bug that caused the program to crash when updating Camtasia.
-Fixed a bug due to which the license was not checked during the installation of the application.
-Now Camtasia will check the license on every installation. Fixed a bug due to which files from some directories were not saved.
-Now Camtasia saves all files.
-
-
-
diff --git a/spaces/radames/Real-Time-Latent-Consistency-Model/README.md b/spaces/radames/Real-Time-Latent-Consistency-Model/README.md
deleted file mode 100644
index 068d31d6882455d3302700ea36cc8e9c5fe33396..0000000000000000000000000000000000000000
--- a/spaces/radames/Real-Time-Latent-Consistency-Model/README.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: Real-Time Latent Consistency Model Image-to-Image
-emoji: 🖼️🖼️
-colorFrom: gray
-colorTo: indigo
-sdk: docker
-pinned: false
-suggested_hardware: a10g-small
----
-
-# Real-Time Latent Consistency Model
-
-This demo showcases [Latent Consistency Model (LCM)](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) using [Diffusers](https://github.com/huggingface/diffusers/tree/main/examples/community#latent-consistency-pipeline) with a MJPEG stream server.
-
-You need a webcam to run this demo. 🤗
-
-## Running Locally
-
-You need CUDA and Python 3.10, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.
-
-`TIMEOUT`: limit the duration of a user session
-`SAFETY_CHECKER`: set to False to turn the NSFW filter off
-`MAX_QUEUE_SIZE`: limit the number of users on the current app instance
-
-### image to image
-
-```bash
-python -m venv venv
-source venv/bin/activate
-pip3 install -r requirements.txt
-uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
-```
-
-### text to image
-
-```bash
-python -m venv venv
-source venv/bin/activate
-pip3 install -r requirements.txt
-uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
-```
-
-or with environment variables
-
-```bash
-TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
-```
-
-If you're running locally and want to test it on Mobile Safari, the webserver needs to be served over HTTPS.
-
-```bash
-openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
-uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload --log-level info --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
-```
-
-## Docker
-
-You need the NVIDIA Container Toolkit for Docker
-
-```bash
-docker build -t lcm-live .
-docker run -ti -p 7860:7860 --gpus all lcm-live
-```
-
-or with environment variables
-
-```bash
-docker run -ti -e TIMEOUT=0 -e SAFETY_CHECKER=False -p 7860:7860 --gpus all lcm-live
-```
-
-## Demo on Hugging Face
-
-https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model
-
-https://github.com/radames/Real-Time-Latent-Consistency-Model/assets/102277/c4003ac5-e7ff-44c0-97d3-464bb659de70
diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/visualize/__init__.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/visualize/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AVA De-Esser v2.0.1 Incl Patched and Keygen-R2R The Complete Guide to De-Essing Your Vocals with Ease.md b/spaces/raedeXanto/academic-chatgpt-beta/AVA De-Esser v2.0.1 Incl Patched and Keygen-R2R The Complete Guide to De-Essing Your Vocals with Ease.md
deleted file mode 100644
index b44a89fb8a78088ffd153afaa84248ca35a0476e..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/AVA De-Esser v2.0.1 Incl Patched and Keygen-R2R The Complete Guide to De-Essing Your Vocals with Ease.md
+++ /dev/null
@@ -1,91 +0,0 @@
-
-
AVA De-Esser v2.0.1 Incl Patched and Keygen-R2R: A Review
-
If you are looking for a smooth and transparent de-esser that can handle any sibilant source without affecting the natural sound, you might want to check out AVA De-Esser by Harrison Consoles. This plugin is the latest implementation of Harrison's renowned de-esser algorithm, used in ultra high-end post & film facilities worldwide. In this article, we will review the main features, pros and cons, and comparison of AVA De-Esser with other de-essers on the market.
Introduction: What is AVA De-Esser and why you need it
-
De-essing is a process of reducing or removing excessive sibilance from audio signals. Sibilance is a type of high-frequency noise that occurs when pronouncing consonants such as "s", "z", "sh", "ch", etc. It can cause harshness, distortion, or masking in your vocals, instruments, or mixes.
-
De-essing is important for audio production because it can improve the clarity, intelligibility, and quality of your sound. It can also prevent ear fatigue or damage when listening to loud or bright sounds.
-
AVA De-Esser is a plugin that can help you achieve smooth and transparent de-essing for any sibilant source. It is based on Harrison's fourth-generation algorithm that has been continuously tweaked by their customers in high-end music, film, broadcast, and video-post production.
-
AVA De-Esser differs from other de-essers in several ways. First, it uses an intelligent algorithm that operates on harsh sibilance while ignoring other fricatives. This means that it can preserve the natural sound of your source without affecting other consonants or vowels.
-
-
Second, it has a fast and easy-to-use interface with 6 control dimensions accessible in the main graph. You can adjust the threshold, depth, frequency range, bandwidth, output gain, and bypass with simple sliders or knobs.
-
Third, it has zero-latency processing that makes it suitable for live use. You can apply it to your vocals or instruments without worrying about latency or sync issues.
-
Fourth, it has a gain-reduction meter that appears on Pro Tools (AAX) and Studio One (VST3) mixer strip. You can monitor how much de-essing is applied to your signal without opening the plugin window.
-
Fifth, it has a band solo and auto-solo feature that helps you dial-in the sibilant frequency range. You can solo the band that is being processed by clicking on the graph or use the auto-solo button to automatically solo the band when adjusting the frequency range.
-
Sixth, it has a low CPU usage that allows you to use multiple instances without affecting your system performance.
-
Seventh, it has an affordable price that makes it accessible to anyone who needs a quality de-esser.
-
Features: What are the main features of AVA De-Esser and how they work
-
As mentioned above, AVA De-Esser has several features that make it stand out from other de-essers. Here are some of them in more detail:
-
-
Intelligent algorithm: The core of AVA De-Esser is an intelligent algorithm that detects and reduces harsh sibilance while preserving other fricatives. It does this by analyzing the spectral content of your signal and applying a dynamic filter that attenuates only the problematic frequencies. This way, you can achieve smooth and transparent de-essing without affecting the natural sound of your source.
-
Band Solo and Auto-Solo: These features help you dial-in the sibilant frequency range by allowing you to solo the band that is being processed by clicking on the graph or using the auto-solo button to automatically solo the band when adjusting the frequency range. This way, you can hear exactly what frequencies are being affected by the de-esser and fine-tune them accordingly.
-
Adjustable threshold and depth: These parameters allow you to control how much de-essing is applied to your signal. The threshold determines when the de-esser kicks in based on the input level. The depth determines how much attenuation is applied to the sibilant frequencies once they cross the threshold. You can adjust these parameters with simple sliders or knobs on the interface.
-
Fast and easy-to-use interface: The interface of AVA De-Esser is designed to be fast and easy-to-use with 6 control dimensions accessible in the main graph. You can adjust all parameters with simple sliders or knobs without opening any menus or sub-windows. You can also resize the plugin window to fit your screen size.
-
Zero-latency processing: The processing of AVA De-Esser is done in real-time without introducing any latency or sync issues. This makes it suitable for live use as well as studio use.
-
Gain-reduction meter: The gain-reduction meter shows how much de-essing is applied to your signal in decibels. It appears on Pro Tools (AAX) and Studio One (VST3) mixer strip so you can monitor it without opening the plugin window.
-
-
Pros and cons: What are the advantages and disadvantages of AVA De-Esser
-
Like any plugin, AVA De-Esser has its pros and cons depending on your needs and preferences. Here are some of them:
-
-
-
Pros: Smooth and transparent de-essing; Versatile and flexible for any sibilant source
-
Cons: No presets
-
-
If you want to get more information about AVA De-Esser, you can visit Harrison's website where you can find the product page, the user manual, the video tutorials, the support forum, and the contact details. You can also check out some online reviews and testimonials from other users who have tried AVA De-Esser and shared their opinions and experiences.
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/AutoCAD Electrical 2017 (64bit) (Product key and Xforce keygen) .rar The Ultimate Guide for Electrical Engineers.md b/spaces/raedeXanto/academic-chatgpt-beta/AutoCAD Electrical 2017 (64bit) (Product key and Xforce keygen) .rar The Ultimate Guide for Electrical Engineers.md
deleted file mode 100644
index dceabf3fca4565d29e84a442efa2a2cff4718aeb..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/AutoCAD Electrical 2017 (64bit) (Product key and Xforce keygen) .rar The Ultimate Guide for Electrical Engineers.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
AutoCAD Electrical 2017: A Comprehensive Guide
-
If you are an electrical engineer, designer, or drafter, you probably know how important it is to have a reliable and powerful software tool for creating and managing electrical schematics, circuits, and projects. That's why you should consider using AutoCAD Electrical 2017, the latest version of the industry-leading software for electrical design and documentation.
-
In this article, we will give you a comprehensive guide on what AutoCAD Electrical 2017 is, what features and benefits it offers, how to install and use it, how to optimize your workflow with tips and tricks, and how to troubleshoot and get support for any issues you may encounter. By the end of this article, you will have a clear understanding of how AutoCAD Electrical 2017 can help you achieve your electrical design goals faster and easier.
-
AutoCAD Electrical 2017 is a specialized software application that runs on top of the standard AutoCAD platform. It is designed specifically for electrical engineers, designers, and drafters who need to create and manage electrical schematics, circuits, and projects. It is part of the Autodesk Product Design Collection, which also includes other software tools for mechanical, product, and factory design.
-
AutoCAD Electrical 2017 provides a comprehensive set of features and functions that enable you to create accurate and consistent electrical drawings and documentation. It also allows you to automate common tasks, such as wire numbering, component tagging, error checking, cross-referencing, and bill of materials generation. It also integrates with other Autodesk products, such as Inventor, Revit, Vault, and Fusion 360, to facilitate collaboration and data exchange across different disciplines.
-
Features and Benefits of AutoCAD Electrical 2017
-
AutoCAD Electrical 2017 offers many features and benefits that make it a superior choice for electrical design and documentation. Some of the main ones are:
-
-
Electrical-specific tools: AutoCAD Electrical 2017 provides a rich set of tools that are tailored for electrical design and documentation. These include schematic symbols, wires, cables, terminals, panels, PLCs, circuit builders, ladder diagrams, point-to-point wiring diagrams, one-line diagrams, and more. You can also create custom symbols and components using the Symbol Builder tool.
-
Electrical-specific libraries: AutoCAD Electrical 2017 comes with thousands of pre-drawn electrical symbols and components that are compliant with various standards, such as ANSI, IEC, NFPA, JIC, GB, AS/NZS, etc. You can also access online catalogs from leading manufacturers to find the exact parts you need for your design.
-
Electrical-specific workflows: AutoCAD Electrical 2017 streamlines your workflow by automating common tasks and processes. For example, you can use the Wire Numbering tool to automatically assign wire numbers based on your preferences. You can also use the Component Tagging tool to automatically assign component tags based on your naming conventions. You can also use the Error Checking tool to identify and correct any errors or inconsistencies in your design.
-
Data management: AutoCAD Electrical 2017 helps you manage your data efficiently and effectively. You can use the Project Manager tool to organize your drawings and files into logical folders and subfolders. You can also use the Data Extraction tool to extract relevant information from your drawings into various formats, such as Excel spreadsheets or XML files. You can also use the Report Generation tool to create various reports based on your data extraction settings.
-
Collaboration: AutoCAD Electrical 2017 enables you to collaborate with other users across different disciplines and platforms. You can use the Autodesk Desktop Connector tool to sync your local files with cloud storage services such as Autodesk Drive or BIM 360 Docs. You can also use the Autodesk A360 Viewer tool to view and share your drawings online without installing any software. You can also use the Autodesk Design Review tool to review and markup your drawings with other stakeholders.
-
-
How to Install AutoCAD Electrical 2017
-
To install AutoCAD Electrical 2017 on your computer, you need to follow these steps:
-
-
Download the installation file from the Autodesk website or use the media provided by Autodesk.
-
Run the installation file as an administrator.
-
Select your preferred language and click Install.
-
Accept the license agreement and click Next.
-
Select the products you want to install (you can choose between AutoCAD Electrical 2017 only or the entire Product Design Collection) and click Next.
-
Select the installation type (you can choose between Typical or Custom) and click Next.
-
Select the installation location (you can change it if you want) and click Next.
-
Select the configuration options (you can change them if you want) and click Next.
-
Enter the product key (you can find it on your Autodesk account or on the media provided by Autodesk) and click Next.
-
Enter the serial number (you can find it on your Autodesk account or on the media provided by Autodesk) and click Next.
-
Click Install to start the installation process.
-
Wait for the installation process to complete (it may take some time depending on your system specifications).
-
Click Finish to exit the installation wizard.
-
-
How to Use AutoCAD Electrical 2017
-
To use AutoCAD Electrical 2017 effectively, you need to follow these steps:
-
-
Launch AutoCAD Electrical 2017 from your desktop or start menu.
-
Create a new project or open an existing one using the Project Manager tool.
-
Create a new drawing or open an existing one using the New Drawing or Open Drawing tools.
-
Select a drawing template or format using the Template Selection dialog box.
-
Select a drawing standard using the Standard Selection dialog box.
-
Add schematic symbols or components using the Insert Component tool or the Catalog Browser tool.
-
Add wires or cables using the Insert Wire tool or the Insert Cable tool.
-
Add terminals using the Insert Terminal tool or the Terminal Strip Editor tool.
-
Add panels using the Insert Panel tool or the Panel Layout tab.
-
Add PLCs using the Insert PLC tool or the PLC Database File Editor tool.
-
Edit your schematic symbols or components using the Edit Component tool or the Symbol Builder tool.
-
Edit your wires or cables using the Edit Wire tool or the Wire Properties dialog box.
-
Edit your terminals using the Edit Terminal tool or the Terminal Strip Editor tool.
-
Edit your panels using the Edit Panel tool or the Panel Layout tab.
-
Edit your PLCs using the Edit PLC tool or the PLC Database File Editor tool.
-
-
You can also use other tools such as Circuit Builder, Ladder Diagrams, Point-to-Point Wiring Diagrams, One-Line Diagrams, and more to create and edit your electrical schematics and circuits.
-
-
-
Tips and Tricks for AutoCAD Electrical 2017
-
To optimize your workflow and productivity with AutoCAD Electrical 2017, you should consider using some of these tips and tricks:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- Use keyboard shortcuts to access common commands and functions quickly. You can find a list of keyboard shortcuts.
- Use the Copy to Clipboard tool to copy and paste multiple schematic symbols or components at once. You can also use the Paste Special tool to paste them with different options, such as retaining or changing wire numbers, component tags, etc.
- Use the Scoot tool to move schematic symbols or components along a wire or a cable without breaking the connection. You can also use the Align tool to align them horizontally or vertically.
- Use the Swap/Update Block tool to replace schematic symbols or components with different ones from the catalog or the symbol library. You can also use the Update Block tool to update them with the latest changes from the catalog or the symbol library.
- Use the Move Project tool to move your entire project folder to a different location without losing any references or links. You can also use the Copy Project tool to copy your entire project folder to a different location with a new name.
- Use the Project-Wide Utilities tool to perform various actions on your entire project, such as updating wire numbers, component tags, cross-references, etc. You can also use the Project-Wide Reports tool to generate various reports on your entire project, such as wire list, component list, terminal list, etc.
- Use the Find and Replace tool to find and replace text or values in your drawings or project files. You can also use the Find and Replace in Project tool to find and replace text or values in your entire project.
- Use the Audit tool to check your drawings for errors or inconsistencies and fix them automatically or manually. You can also use the Purge tool to remove unused objects or data from your drawings and reduce their file size.
- Use the Export to DWG tool to export your drawings to standard DWG format that can be opened by other AutoCAD products or applications. You can also use the Export to PDF tool to export your drawings to PDF format that can be viewed or printed by any PDF reader or printer.
Troubleshooting and Support for AutoCAD Electrical 2017
-
If you encounter any issues or problems with AutoCAD Electrical 2017, you should try these steps:
-
-
Check the system requirements and make sure your computer meets them. You can find them in this link: https://knowledge.autodesk.com/support/autocad-electrical/troubleshooting/caas/sfdcarticles/sfdcarticles/System-requirements-for-AutoCAD-Electrical-2017.html
-
Check the installation log and make sure there are no errors or warnings during the installation process. You can find it in this location: C:\Users\\AppData\Local\Temp\Autodesk\AutoCAD Electrical 2017 Setup.log
-
Check the activation status and make sure your product is activated and licensed properly. You can do this by clicking on Help > About AutoCAD Electrical > Product Information.
-
Check the updates and patches and make sure your product is up to date with the latest fixes and enhancements. You can do this by clicking on Help > Check for Updates.
-
Check the online help and documentation and make sure you understand how to use the product features and functions correctly. You can access them by clicking on Help > AutoCAD Electrical Help.
-
Check the online forums and communities and see if other users have encountered similar issues or problems and how they solved them. You can access them by clicking on Help > Autodesk Forums.
-
Check the online knowledge base and see if there are any solutions or articles that address your issue or problem. You can access it by clicking on Help > Autodesk Knowledge Network.
-
Contact customer service and technical support and get professional assistance from Autodesk experts. You can do this by clicking on Help > Contact Support.
-
-
Conclusion
-
In conclusion, AutoCAD Electrical 2017 is a powerful and reliable software tool for electrical design and documentation. It offers many features and benefits that can help you create accurate and consistent electrical schematics, circuits, and projects. It also allows you to automate common tasks, manage your data efficiently, and collaborate with other users easily. It is part of the Autodesk Product Design Collection, which also includes other software tools for mechanical, product, and factory design.
-
If you want to learn more about AutoCAD Electrical 2017, you can visit the official website: https://www.autodesk.com/products/autocad-electrical/overview
-
If you want to download a free trial version of AutoCAD Electrical 2017, you can visit this link: https://www.autodesk.com/products/autocad-electrical/free-trial
-
If you want to buy a subscription of AutoCAD Electrical 2017, you can visit this link: https://www.autodesk.com/products/autocad-electrical/buy
-
FAQs
-
Here are some frequently asked questions about AutoCAD Electrical 2017:
-
-
What are the main differences between AutoCAD Electrical 2017 and AutoCAD 2017?
-AutoCAD Electrical 2017 is a specialized software application that runs on top of AutoCAD 2017. It provides electrical-specific tools, libraries, workflows, data management, and collaboration features that are not available in AutoCAD 2017. AutoCAD 2017 is a general-purpose software application that provides basic tools for creating and editing 2D and 3D drawings.
-
Can I use AutoCAD Electrical 2017 without AutoCAD 2017?
-No, you cannot use AutoCAD Electrical 2017 without AutoCAD 2017. AutoCAD Electrical 2017 is dependent on AutoCAD 2017 as its base platform. However, you do not need to buy both products separately. When you buy a subscription of AutoCAD Electrical 2017, you automatically get access to both products.
-
Can I use AutoCAD Electrical 2017 with other Autodesk products?
-Yes, you can use AutoCAD Electrical 2017 with other Autodesk products that are compatible with it. For example, you can use it with Inventor, Revit, Vault, Fusion 360, and more to facilitate collaboration and data exchange across different disciplines.
-
-
Can I use AutoCAD Electrical 2017 on Mac or Linux?
-No, you cannot use AutoCAD Electrical 2017 on Mac or Linux. AutoCAD Electrical 2017 is only compatible with Windows operating systems.
-
-
Can I use AutoCAD Electrical 2017 offline?
-Yes, you can use AutoCAD Electrical 2017 offline, as long as you have activated and licensed your product properly. However, you will not be able to access some online features, such as online catalogs, online help, online forums, online updates, etc.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Chemdraw Full Version Free Download Windows !!TOP!!.md b/spaces/raedeXanto/academic-chatgpt-beta/Chemdraw Full Version Free Download Windows !!TOP!!.md
deleted file mode 100644
index 833b87f713fe6c0477e6e646d7760a49712c8dc7..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Chemdraw Full Version Free Download Windows !!TOP!!.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
How to Download and Install ChemDraw Pro for Windows
-
ChemDraw Pro is a powerful and popular chemical drawing software that allows you to create publication-quality drawings of molecules, reactions, biological entities, and more. ChemDraw Pro is part of the ChemOffice suite of products, which also includes Chem3D, ChemFinder, and ChemScript. ChemDraw Pro offers features such as chemical query properties, templates and nicknames, relative stereochemistry, ISIS/Draw compatibility, fragmentation tools, polymerdraw, chemprop, structure cleanup, and more.
If you are a student or researcher who needs to use ChemDraw Pro for your chemistry projects, you may be wondering how to download and install it on your Windows PC. In this article, we will show you the steps to do so.
-
Step 1: Get a ChemDraw account
-
The first thing you need to do is to get a ChemDraw account from the PerkinElmer Informatics website. You can either register for a free trial or purchase a subscription. To register for a free trial, go to https://perkinelmerinformatics.com/products/research/chemdraw and click on the "Free Trial" button. You will need to provide your name, email address, institution name, and country. You will also need to agree to the terms and conditions and privacy policy. After you submit the form, you will receive an email with a link to activate your account.
-
To purchase a subscription, go to https://perkinelmerinformatics.com/products/research/chemoffice and choose the ChemOffice+ Cloud Standard option. This option includes ChemDraw Professional as well as other cloud-based features such as Signals Notebook and ChemACX Explorer. You will need to provide your payment details and confirm your order. You will receive an email with your subscription details and a link to access your account.
-
-
Step 2: Download ChemDraw Pro
-
Once you have a ChemDraw account, you can download ChemDraw Pro from the PerkinElmer Informatics website. To do so, go to https://informatics.perkinelmer.com/sitesubscription/ and log in with your email address and password. You will see a list of available products under your subscription. Click on the "Download" button next to ChemDraw Professional. You will be redirected to another page where you can choose the version and language of ChemDraw Pro that you want to download. The latest version is 21.0 and it supports Windows 10/11 (32-bit or 64-bit). Click on the "Download" button again and save the file on your computer.
-
Step 3: Install ChemDraw Pro
-
After you have downloaded ChemDraw Pro, you can install it on your computer by following these steps:
-
-
Locate the downloaded file (usually named ChemOffice_Professional_21_Win.zip) and extract it using a program such as WinZip or WinRAR.
-
Open the extracted folder and double-click on the setup.exe file.
-
Follow the instructions on the screen to complete the installation process. You may need to accept the license agreement, choose the destination folder, select the components to install, and enter your activation code.
-
After the installation is finished, you can launch ChemDraw Pro from the Start menu or the desktop shortcut.
-
-
Congratulations! You have successfully downloaded and installed ChemDraw Pro for Windows. You can now start creating beautiful chemical drawings for your chemistry projects.
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Complete Reference Pc Hardware Craig Zacker Pdf for Free Learn Everything You Need to Know About PC Hardware.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Complete Reference Pc Hardware Craig Zacker Pdf for Free Learn Everything You Need to Know About PC Hardware.md
deleted file mode 100644
index 4b24ebab4f3072f32169b3ca6fd50b6abf68fcd6..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download Complete Reference Pc Hardware Craig Zacker Pdf for Free Learn Everything You Need to Know About PC Hardware.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Complete Reference PC Hardware Craig Zacker PDF Free Download
-
If you are interested in learning about PC Hardware, you might be looking for a reliable and comprehensive source of information. There are many books and online resources available on this topic, but not all of them are up to date, accurate, or easy to follow. That's why you need Complete Reference PC Hardware Craig Zacker PDF, a book that covers everything you need to know about PC Hardware in a simple and practical way. In this article, we will tell you what Complete Reference PC Hardware Craig Zacker PDF is, what features it has, what benefits it offers, and how you can download it for free.
-
PC Hardware is the term used to describe the physical components of a computer system, such as the motherboard, processor, memory, hard disk, video card, sound card, keyboard, mouse, monitor, printer, scanner, etc. These components work together to perform various tasks and functions that enable us to use computers for different purposes.
-
Why do you need to learn about PC Hardware?
-
Learning about PC Hardware can help you in many ways. For example, you can:
-
-
Understand how computers work and how they communicate with each other.
-
Identify and solve common problems and issues related to PC Hardware.
-
Upgrade and customize your computer system according to your needs and preferences.
-
Choose the best components and accessories for your computer system.
-
Maintain and optimize your computer system for better performance and security.
-
Create and modify your own computer system from scratch.
-
-
What is Complete Reference PC Hardware Craig Zacker PDF?
-
Complete Reference PC Hardware Craig Zacker PDF is a book written by Craig Zacker, a renowned author and expert in the field of PC Hardware. The book provides a comprehensive and detailed guide to all aspects of PC Hardware, from the basics to the advanced topics. The book covers both theoretical and practical aspects of PC Hardware, with clear explanations, examples, illustrations, diagrams, tables, charts, etc. The book also includes useful tips and tricks for troubleshooting and optimizing your PC Hardware. The book is updated with the latest information on the newest technologies and trends in PC Hardware.
-
Features of Complete Reference PC Hardware Craig Zacker PDF
-
Comprehensive coverage of PC Hardware topics
-
The book covers all the topics related to PC Hardware that you need to know. Some of the topics include:
-
-
The history and evolution of PC Hardware.
-
The basic principles and concepts of PC Hardware.
-
The different types and categories of PC Hardware.
-
The functions and features of each component of PC Hardware.
-
The standards and specifications of PC Hardware.
-
The compatibility and interoperability of PC Hardware.
-
The installation and configuration of PC Hardware.
-
The testing and diagnosis of PC Hardware.
-
The maintenance and repair of PC Hardware.
-
The upgrading and customization of PC Hardware.
-
The security and protection of PC Hardware.
-
The future trends and developments in PC Hardware.
-
-
Easy to understand language and illustrations
-
The book uses simple and straightforward language that anyone can understand. The book also uses various illustrations such as pictures, diagrams, tables, charts, etc. to make the concepts more clear and easy to follow. The book avoids using technical jargon or complex terms that might confuse or intimidate the readers. The book also provides definitions and explanations for any unfamiliar terms or concepts that might appear in the text.
-
How to get Complete Reference Pc Hardware Craig Zacker Pdf for free
-Complete Reference Pc Hardware Craig Zacker book review and summary
-Best sites to download Complete Reference Pc Hardware Craig Zacker Pdf
-Complete Reference Pc Hardware Craig Zacker Pdf online reading and download link
-Complete Reference Pc Hardware Craig Zacker Pdf torrent and magnet link
-Complete Reference Pc Hardware Craig Zacker Pdf Google Drive and Dropbox link
-Complete Reference Pc Hardware Craig Zacker Pdf epub and mobi format
-Complete Reference Pc Hardware Craig Zacker Pdf audiobook and podcast
-Complete Reference Pc Hardware Craig Zacker course and certification
-Complete Reference Pc Hardware Craig Zacker cheat sheet and notes
-Complete Reference Pc Hardware Craig Zacker quiz and test questions
-Complete Reference Pc Hardware Craig Zacker solutions and answers
-Complete Reference Pc Hardware Craig Zacker slides and presentation
-Complete Reference Pc Hardware Craig Zacker video and audio tutorials
-Complete Reference Pc Hardware Craig Zacker case studies and examples
-Complete Reference Pc Hardware Craig Zacker projects and assignments
-Complete Reference Pc Hardware Craig Zacker tips and tricks
-Complete Reference Pc Hardware Craig Zacker best practices and guidelines
-Complete Reference Pc Hardware Craig Zacker latest edition and updates
-Complete Reference Pc Hardware Craig Zacker comparison and alternatives
-Benefits of reading Complete Reference Pc Hardware Craig Zacker Pdf
-Challenges of reading Complete Reference Pc Hardware Craig Zacker Pdf
-How to learn from Complete Reference Pc Hardware Craig Zacker Pdf
-How to teach from Complete Reference Pc Hardware Craig Zacker Pdf
-How to cite from Complete Reference Pc Hardware Craig Zacker Pdf
-How to paraphrase from Complete Reference Pc Hardware Craig Zacker Pdf
-How to summarize from Complete Reference Pc Hardware Craig Zacker Pdf
-How to quote from Complete Reference Pc Hardware Craig Zacker Pdf
-How to analyze from Complete Reference Pc Hardware Craig Zacker Pdf
-How to apply from Complete Reference Pc Hardware Craig Zacker Pdf
-How to evaluate from Complete Reference Pc Hardware Craig Zacker Pdf
-How to synthesize from Complete Reference Pc Hardware Craig Zacker Pdf
-How to create from Complete Reference Pc Hardware Craig Zacker Pdf
-How to design from Complete Reference Pc Hardware Craig Zacker Pdf
-How to build from Complete Reference Pc Hardware Craig Zacker Pdf
-How to troubleshoot from Complete Reference Pc Hardware Craig Zacker Pdf
-How to upgrade from Complete Reference Pc Hardware Craig Zacker Pdf
-How to repair from Complete Reference Pc Hardware Craig Zacker Pdf
-How to maintain from Complete Reference Pc Hardware Craig Zacker Pdf
-How to optimize from Complete Reference Pc Hardware Craig Zacker Pdf
-How to secure from Complete Reference Pc Hardware Craig Zacker Pdf
-How to network from Complete Reference Pc Hardware Craig Zacker Pdf
-How to customize from Complete Reference Pc Hardware Craig Zacker Pdf
-How to integrate from Complete Reference Pc Hardware Craig Zacker Pdf
-How to migrate from Complete Reference Pc Hardware Craig Zacker Pdf
-How to backup from Complete Reference Pc Hardware Craig Zacker Pdf
-How to restore from Complete Reference Pc Hardware Craig Zacker Pdf
-How to recover from Complete Reference Pc Hardware Craig Zacker Pdf
-How to delete from Complete Reference Pc Hardware Craig Zacker Pdf
-
Practical tips and troubleshooting advice
-
The book offers practical tips and advice for dealing with common problems and issues related to PC Hardware, along with step-by-step instructions for performing various hardware-related tasks. It also includes troubleshooting guides that help you identify and fix any errors or malfunctions, as well as recommendations for optimizing your PC Hardware for better performance and security.
-
Updated information on the latest technologies and trends
-
The book is updated with the latest information on the newest technologies and trends in PC Hardware. The book covers topics such as:
-
-
The latest processors from Intel and AMD.
-
The latest memory technologies such as DDR4 RAM.
-
The latest storage technologies such as SSDs and NVMe drives.
-
The latest video technologies such as 4K resolution and VR/AR devices.
-
The latest sound technologies such as Dolby Atmos and DTS:X.
-
The latest networking technologies such as Wi-Fi 6E and 5G.
-
The latest peripheral devices such as wireless keyboards and mice.
-
The latest operating systems such as Windows 11.
-
-
Benefits of Complete Reference PC Hardware Craig Zacker PDF
-
Enhance your knowledge and skills in PC Hardware
-
By reading Complete Reference PC Hardware Craig Zacker PDF, you can enhance your knowledge and skills in PC Hardware. You can learn things you did not know before, refresh knowledge you may have forgotten or overlooked, and deepen your understanding of how things work and why they work that way. You can also develop your critical thinking and problem-solving skills by applying what you learn from the book to real-life situations.
-
Prepare for certification exams and career opportunities
-
If you are planning to take any certification exams related to PC Hardware, such as CompTIA A+, Network+, or Security+, Complete Reference PC Hardware Craig Zacker PDF can help you prepare for them. The book covers all the topics and objectives required for these exams, provides practice questions and exercises to test your knowledge and skills, and offers tips and strategies for passing them. If you are looking for career opportunities related to PC Hardware, such as technician, engineer, administrator, or consultant, the book can also help you pursue them: it provides guidance on these career paths, insights and advice from experts and professionals in the field, and resources and references to support your further learning and development.
-
Save money and time by downloading the PDF for free
-
One of the best benefits of Complete Reference PC Hardware Craig Zacker PDF is that you can download it for free from the official website, so you don't have to spend money or time buying or borrowing the book from a bookstore or library. Simply visit the website, enter your name and email address, and get access to the download link. You can then download the PDF to your device and read it anytime and anywhere, print or share it if you wish, or access the online version from any browser or device with an internet connection. The online version also offers features such as bookmarks, notes, and highlights that enhance your reading experience.
-
How to download Complete Reference PC Hardware Craig Zacker PDF for free?
-
If you are interested in downloading Complete Reference PC Hardware Craig Zacker PDF for free, here are the steps you need to follow:
-
Step 1: Visit the official website of Complete Reference PC Hardware Craig Zacker PDF
-
The first step is to visit the official website of Complete Reference PC Hardware Craig Zacker PDF. The website is https://www.completereferencepchardware.com/. You can also click on this link to go directly to the website.
-
Step 2: Enter your name and email address to get access to the download link
-
The second step is to enter your name and email address in the form that appears on the website. You need to enter your valid name and email address so that you can receive the download link in your inbox. You also need to agree to the terms and conditions and privacy policy of the website. After entering your details, click on the "Submit" button.
-
Step 3: Click on the download link and enjoy reading the PDF on your device
-
The third and final step is to click on the download link that you will receive in your email. The download link will take you to a page where you can download the PDF file of Complete Reference PC Hardware Craig Zacker. The file size is about 50 MB and it will take a few minutes to download depending on your internet speed. Once the download is complete, you can open the PDF file on your device and start reading it. You can also save the PDF file on your device for future reference.
-
Conclusion
-
In conclusion, Complete Reference PC Hardware Craig Zacker PDF is a book that provides a comprehensive and detailed guide to all aspects of PC Hardware. It covers both theoretical and practical topics, with clear explanations, examples, illustrations, diagrams, tables, and charts, and it includes useful tips and tricks for troubleshooting and optimizing your PC Hardware. The book is updated with the latest information on the newest technologies and trends in PC Hardware, and it offers many benefits, such as enhancing your knowledge and skills, preparing you for certification exams and career opportunities, and saving you money and time through the free PDF download. To download Complete Reference PC Hardware Craig Zacker PDF for free, just visit the official website of the book, enter your name and email address, and click on the download link that you will receive in your inbox. You can then enjoy reading the PDF on your device anytime and anywhere you want.
-
FAQs
-
Here are some frequently asked questions about Complete Reference PC Hardware Craig Zacker PDF:
-
-
Q: Is Complete Reference PC Hardware Craig Zacker PDF suitable for beginners?
-
A: Yes, Complete Reference PC Hardware Craig Zacker PDF is suitable for beginners as well as intermediate and advanced users. The book covers all the topics related to PC Hardware from the basics to the advanced topics. The book uses simple and straightforward language that anyone can understand. The book also provides definitions and explanations for any unfamiliar terms or concepts that might appear in the text.
-
Q: How long does it take to read Complete Reference PC Hardware Craig Zacker PDF?
-
A: It depends on your reading speed and interest level, but generally it takes about 20 hours to read Complete Reference PC Hardware Craig Zacker PDF. The book has about 1000 pages and 15 chapters. Each chapter has about 10 sections and each section has about 10 pages. You can read one chapter or one section at a time depending on your preference.
-
Q: Can I use Complete Reference PC Hardware Craig Zacker PDF as a textbook or a reference book?
-
A: Yes, you can use Complete Reference PC Hardware Craig Zacker PDF as a textbook or a reference book for learning or teaching PC Hardware. The book covers all the topics related to PC Hardware that are required for various courses and certifications in PC Hardware. The book also provides practice questions and exercises that help you test your knowledge and skills. The book also provides resources and references that help you further your learning and development.
-
Q: Can I share Complete Reference PC Hardware Craig Zacker PDF with others?
-
A: Yes, you can share Complete Reference PC Hardware Craig Zacker PDF with others as long as you do not violate any copyright or intellectual property rights of the author or publisher. You can share the PDF file or the download link with others if you wish. You can also print or share the online version of the book from any browser or device with an internet connection. However, you cannot modify, edit, or sell the PDF file or the online version of the book without permission from the author or publisher.
-
Q: Where can I find more information about Complete Reference PC Hardware Craig Zacker PDF?
-
A: You can find more information about Complete Reference PC Hardware Craig Zacker PDF on the official website of the book https://www.completereferencepchardware.com/. You can also contact the author or publisher through their email addresses craigzacker@completereferencepchardware.com or publisher@completereferencepchardware.com.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/autogpt/config/config.py b/spaces/ramiin2/AutoGPT/autogpt/config/config.py
deleted file mode 100644
index 4b53df10e8d2832be7ffb321d9036aec5a47a79d..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/config/config.py
+++ /dev/null
@@ -1,251 +0,0 @@
-"""Configuration class to store the state of bools for different scripts access."""
-import os
-
-import openai
-import yaml
-from colorama import Fore
-from dotenv import load_dotenv
-
-from autogpt.config.singleton import Singleton
-
-load_dotenv(verbose=True)
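-# The os.getenv lookups below read from the process environment or the .env file loaded above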
-
-
-class Config(metaclass=Singleton):
- """
- Configuration class to store the state of bools for different scripts access.
- """
-
- def __init__(self) -> None:
- """Initialize the Config class"""
- self.debug_mode = False
- self.continuous_mode = False
- self.continuous_limit = 0
- self.speak_mode = False
- self.skip_reprompt = False
- self.allow_downloads = False
- self.skip_news = False
-
- self.ai_settings_file = os.getenv("AI_SETTINGS_FILE", "ai_settings.yaml")
- self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
- self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
- self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
- self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))
- self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192))
-
- self.openai_api_key = os.getenv("OPENAI_API_KEY")
- self.temperature = float(os.getenv("TEMPERATURE", "1"))
- self.use_azure = os.getenv("USE_AZURE") == "True"
- self.execute_local_commands = (
- os.getenv("EXECUTE_LOCAL_COMMANDS", "False") == "True"
- )
- self.restrict_to_workspace = (
- os.getenv("RESTRICT_TO_WORKSPACE", "True") == "True"
- )
-
- if self.use_azure:
- self.load_azure_config()
- openai.api_type = self.openai_api_type
- openai.api_base = self.openai_api_base
- openai.api_version = self.openai_api_version
-
- self.elevenlabs_api_key = os.getenv("ELEVENLABS_API_KEY")
- self.elevenlabs_voice_1_id = os.getenv("ELEVENLABS_VOICE_1_ID")
- self.elevenlabs_voice_2_id = os.getenv("ELEVENLABS_VOICE_2_ID")
-
- self.use_mac_os_tts = False
- self.use_mac_os_tts = os.getenv("USE_MAC_OS_TTS")
-
- self.use_brian_tts = False
- self.use_brian_tts = os.getenv("USE_BRIAN_TTS")
-
- self.github_api_key = os.getenv("GITHUB_API_KEY")
- self.github_username = os.getenv("GITHUB_USERNAME")
-
- self.google_api_key = os.getenv("GOOGLE_API_KEY")
- self.custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID")
-
- self.pinecone_api_key = os.getenv("PINECONE_API_KEY")
- self.pinecone_region = os.getenv("PINECONE_ENV")
-
- self.weaviate_host = os.getenv("WEAVIATE_HOST")
- self.weaviate_port = os.getenv("WEAVIATE_PORT")
- self.weaviate_protocol = os.getenv("WEAVIATE_PROTOCOL", "http")
- self.weaviate_username = os.getenv("WEAVIATE_USERNAME", None)
- self.weaviate_password = os.getenv("WEAVIATE_PASSWORD", None)
- self.weaviate_scopes = os.getenv("WEAVIATE_SCOPES", None)
- self.weaviate_embedded_path = os.getenv("WEAVIATE_EMBEDDED_PATH")
- self.weaviate_api_key = os.getenv("WEAVIATE_API_KEY", None)
- self.use_weaviate_embedded = (
- os.getenv("USE_WEAVIATE_EMBEDDED", "False") == "True"
- )
-
- # milvus configuration, e.g., localhost:19530.
- self.milvus_addr = os.getenv("MILVUS_ADDR", "localhost:19530")
- self.milvus_collection = os.getenv("MILVUS_COLLECTION", "autogpt")
-
- self.image_provider = os.getenv("IMAGE_PROVIDER")
- self.image_size = int(os.getenv("IMAGE_SIZE", 256))
- self.huggingface_api_token = os.getenv("HUGGINGFACE_API_TOKEN")
- self.huggingface_image_model = os.getenv(
- "HUGGINGFACE_IMAGE_MODEL", "CompVis/stable-diffusion-v1-4"
- )
- self.huggingface_audio_to_text_model = os.getenv(
- "HUGGINGFACE_AUDIO_TO_TEXT_MODEL"
- )
- self.sd_webui_url = os.getenv("SD_WEBUI_URL", "http://localhost:7860")
- self.sd_webui_auth = os.getenv("SD_WEBUI_AUTH")
-
- # Selenium browser settings
- self.selenium_web_browser = os.getenv("USE_WEB_BROWSER", "chrome")
- self.selenium_headless = os.getenv("HEADLESS_BROWSER", "True") == "True"
-
- # User agent header to use when making HTTP requests
- # Some websites might just completely deny request with an error code if
- # no user agent was found.
- self.user_agent = os.getenv(
- "USER_AGENT",
- "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36"
- " (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36",
- )
-
- self.redis_host = os.getenv("REDIS_HOST", "localhost")
- self.redis_port = os.getenv("REDIS_PORT", "6379")
- self.redis_password = os.getenv("REDIS_PASSWORD", "")
- self.wipe_redis_on_start = os.getenv("WIPE_REDIS_ON_START", "True") == "True"
- self.memory_index = os.getenv("MEMORY_INDEX", "auto-gpt")
- # Note that indexes must be created on db 0 in redis, this is not configurable.
-
- self.memory_backend = os.getenv("MEMORY_BACKEND", "local")
- # Initialize the OpenAI API client
- openai.api_key = self.openai_api_key
-
- def get_azure_deployment_id_for_model(self, model: str) -> str:
- """
- Returns the relevant deployment id for the model specified.
-
- Parameters:
- model(str): The model to map to the deployment id.
-
- Returns:
- The matching deployment id if found, otherwise an empty string.
- """
- if model == self.fast_llm_model:
- return self.azure_model_to_deployment_id_map[
- "fast_llm_model_deployment_id"
- ] # type: ignore
- elif model == self.smart_llm_model:
- return self.azure_model_to_deployment_id_map[
- "smart_llm_model_deployment_id"
- ] # type: ignore
- elif model == "text-embedding-ada-002":
- return self.azure_model_to_deployment_id_map[
- "embedding_model_deployment_id"
- ] # type: ignore
- else:
- return ""
-
- AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "..", "azure.yaml")
-
- def load_azure_config(self, config_file: str = AZURE_CONFIG_FILE) -> None:
- """
- Loads the configuration parameters for Azure hosting from the specified file
- path as a yaml file.
-
- Parameters:
- config_file(str): The path to the config yaml file. DEFAULT: "../azure.yaml"
-
- Returns:
- None
- """
- try:
- with open(config_file) as file:
- config_params = yaml.load(file, Loader=yaml.FullLoader)
- except FileNotFoundError:
- config_params = {}
- self.openai_api_type = config_params.get("azure_api_type") or "azure"
- self.openai_api_base = config_params.get("azure_api_base") or ""
- self.openai_api_version = (
- config_params.get("azure_api_version") or "2023-03-15-preview"
- )
- self.azure_model_to_deployment_id_map = config_params.get("azure_model_map", [])
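-        # Falls back to an empty container when azure_model_map is missing; get_azure_deployment_id_for_model expects a dict here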
-
- def set_continuous_mode(self, value: bool) -> None:
- """Set the continuous mode value."""
- self.continuous_mode = value
-
- def set_continuous_limit(self, value: int) -> None:
- """Set the continuous limit value."""
- self.continuous_limit = value
-
- def set_speak_mode(self, value: bool) -> None:
- """Set the speak mode value."""
- self.speak_mode = value
-
- def set_fast_llm_model(self, value: str) -> None:
- """Set the fast LLM model value."""
- self.fast_llm_model = value
-
- def set_smart_llm_model(self, value: str) -> None:
- """Set the smart LLM model value."""
- self.smart_llm_model = value
-
- def set_fast_token_limit(self, value: int) -> None:
- """Set the fast token limit value."""
- self.fast_token_limit = value
-
- def set_smart_token_limit(self, value: int) -> None:
- """Set the smart token limit value."""
- self.smart_token_limit = value
-
- def set_browse_chunk_max_length(self, value: int) -> None:
- """Set the browse_website command chunk max length value."""
- self.browse_chunk_max_length = value
-
- def set_openai_api_key(self, value: str) -> None:
- """Set the OpenAI API key value."""
- self.openai_api_key = value
-
- def set_elevenlabs_api_key(self, value: str) -> None:
- """Set the ElevenLabs API key value."""
- self.elevenlabs_api_key = value
-
- def set_elevenlabs_voice_1_id(self, value: str) -> None:
- """Set the ElevenLabs Voice 1 ID value."""
- self.elevenlabs_voice_1_id = value
-
- def set_elevenlabs_voice_2_id(self, value: str) -> None:
- """Set the ElevenLabs Voice 2 ID value."""
- self.elevenlabs_voice_2_id = value
-
- def set_google_api_key(self, value: str) -> None:
- """Set the Google API key value."""
- self.google_api_key = value
-
- def set_custom_search_engine_id(self, value: str) -> None:
- """Set the custom search engine id value."""
- self.custom_search_engine_id = value
-
- def set_pinecone_api_key(self, value: str) -> None:
- """Set the Pinecone API key value."""
- self.pinecone_api_key = value
-
- def set_pinecone_region(self, value: str) -> None:
- """Set the Pinecone region value."""
- self.pinecone_region = value
-
- def set_debug_mode(self, value: bool) -> None:
- """Set the debug mode value."""
- self.debug_mode = value
-
-
-def check_openai_api_key() -> None:
- """Check if the OpenAI API key is set in config.py or as an environment variable."""
- cfg = Config()
- if not cfg.openai_api_key:
- print(
- Fore.RED
- + "Please set your OpenAI API key in .env or as an environment variable."
- )
- print("You can get your key from https://platform.openai.com/account/api-keys")
- exit(1)
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HHD Online Player (Aadukalam (2010) - HD Rip - 720P - X).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HHD Online Player (Aadukalam (2010) - HD Rip - 720P - X).md
deleted file mode 100644
index 2633377c1f86f616dcbe5e1713b9cb014410b42f..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HHD Online Player (Aadukalam (2010) - HD Rip - 720P - X).md
+++ /dev/null
@@ -1,6 +0,0 @@
-
HHD Online Player (Aadukalam (2010) - HD Rip - 720P - x)
-
-264 file (=MPEG4), the web download will be far better than the original blu ray footage. ... True HD Hindi Video Songs Vol 17 Bluray 1080p x264 DTS-HDMAHon3y ... To make a DVD compatible with as many players as possible, use a value of ... 720p 1080p Resolution 426 x 240 640 x 360 854x480 1280x720 Disc ... 4d29de3e1b
-
-
-
diff --git a/spaces/rfrossard/ChatGPT-PPT-Generate/app.py b/spaces/rfrossard/ChatGPT-PPT-Generate/app.py
deleted file mode 100644
index 972fcbf731c09f0628d1e68f7cfca6bef64dcf73..0000000000000000000000000000000000000000
--- a/spaces/rfrossard/ChatGPT-PPT-Generate/app.py
+++ /dev/null
@@ -1,266 +0,0 @@
-import glob
-import os
-import random
-import re
-import string
-
-import gradio as gr
-
-import openai
-from icrawler import ImageDownloader
-from icrawler.builtin import GoogleImageCrawler, BingImageCrawler
-from uuid import uuid4
-from pptx import Presentation
-
-bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in
- range(16))
-
-
-def refresh_bad_coding_practice():
- global bad_coding_practice
- bad_coding_practice = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits)
- for _ in range(16))
- return
-
-
-class PrefixNameDownloader(ImageDownloader):
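-    # Prefixes every crawled image filename with the current random tag so generate_ppt can locate and delete the file afterwards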
-
- def get_filename(self, task, default_ext):
- filename = super(PrefixNameDownloader, self).get_filename(
- task, default_ext)
- print(bad_coding_practice)
- return 'prefix_' + bad_coding_practice + filename
-
-
-def generate_ppt(file, topic, slide_length, api_key):
- print(file.name)
-
- root = Presentation(file.name)
-
- openai.api_key = api_key
-
- message = f"""
- Create content for a slideshow presentation.
- The content's topic is {topic}.
- The slideshow is {slide_length} slides long.
- The content is written in the language of the content I give you above.
-
-
- You are allowed to use the following slide types:
-
- Slide types:
- Title Slide - (Title, Subtitle)
- Content Slide - (Title, Content)
- Image Slide - (Title, Content, Image)
- Thanks Slide - (Title)
-
- Put this tag before the Title Slide: [L_TS]
- Put this tag before the Content Slide: [L_CS]
- Put this tag before the Image Slide: [L_IS]
- Put this tag before the Thanks Slide: [L_THS]
-
- Put "[SLIDEBREAK]" after each slide
-
- For example:
- [L_TS]
- [TITLE]Mental Health[/TITLE]
-
- [SLIDEBREAK]
-
- [L_CS]
- [TITLE]Mental Health Definition[/TITLE]
- [CONTENT]
- 1. Definition: A person’s condition with regard to their psychological and emotional well-being
- 2. Can impact one's physical health
- 3. Stigmatized too often.
- [/CONTENT]
-
- [SLIDEBREAK]
-
- Put this tag before the Title: [TITLE]
- Put this tag after the Title: [/TITLE]
-    Put this tag before the Subtitle: [SUBTITLE]
- Put this tag after the Subtitle: [/SUBTITLE]
- Put this tag before the Content: [CONTENT]
- Put this tag after the Content: [/CONTENT]
- Put this tag before the Image: [IMAGE]
- Put this tag after the Image: [/IMAGE]
-
- Elaborate on the Content, provide as much information as possible.
- You put a [/CONTENT] at the end of the Content.
- Do not reply as if you are talking about the slideshow itself. (ex. "Include pictures here about...")
- Do not include any special characters (?, !, ., :, ) in the Title.
- Do not include any additional information in your response and stick to the format."""
-
- # response = openai.ChatCompletion.create(
- # model="gpt-3.5-turbo",
- # messages=[
- # {"role": "user", "content": message}
- # ]
- # )
-
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {
- "role": "system",
- "content": (
- "You are a helpful assistant capable of creating clear and concise PowerPoint slide outlines used by teachers during their lessons based on a given lesson plan."
- ),
- },
- {"role": "user", "content": message},
- ],
- max_tokens=2000,
- n=1,
- stop=None,
- temperature=0.7,
- # top_p=0.9,
- )
-
- # """ Ref for slide types:
- # 0 -> title and subtitle
- # 1 -> title and content
- # 2 -> section header
- # 3 -> two content
- # 4 -> Comparison
- # 5 -> Title only
- # 6 -> Blank
- # 7 -> Content with caption
- # 8 -> Pic with caption
- # """
-
- def delete_all_slides():
- for i in range(len(root.slides) - 1, -1, -1):
- r_id = root.slides._sldIdLst[i].rId
- root.part.drop_rel(r_id)
- del root.slides._sldIdLst[i]
-
- def create_title_slide(title, subtitle):
- layout = root.slide_layouts[0]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[1].text = subtitle
-
- def create_section_header_slide(title):
- layout = root.slide_layouts[2]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
-
- def create_title_and_content_slide(title, content):
- layout = root.slide_layouts[1]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[1].text = content
-
- def create_title_and_content_and_image_slide(title, content, image_query):
- layout = root.slide_layouts[8]
- slide = root.slides.add_slide(layout)
- slide.shapes.title.text = title
- slide.placeholders[2].text = content
- refresh_bad_coding_practice()
- bing_crawler = GoogleImageCrawler(downloader_cls=PrefixNameDownloader, storage={'root_dir': os.getcwd()})
- bing_crawler.crawl(keyword=image_query, max_num=1)
- dir_path = os.path.dirname(os.path.realpath(__file__))
- file_name = glob.glob(f"prefix_{bad_coding_practice}*")
- print(file_name)
- img_path = os.path.join(dir_path, file_name[0])
- slide.shapes.add_picture(img_path, slide.placeholders[1].left, slide.placeholders[1].top,
- slide.placeholders[1].width, slide.placeholders[1].height)
-
- def find_text_in_between_tags(text, start_tag, end_tag):
- start_pos = text.find(start_tag)
- end_pos = text.find(end_tag)
- result = []
- while start_pos > -1 and end_pos > -1:
- text_between_tags = text[start_pos + len(start_tag):end_pos]
- result.append(text_between_tags)
- start_pos = text.find(start_tag, end_pos + len(end_tag))
- end_pos = text.find(end_tag, start_pos)
- res1 = "".join(result)
- res2 = re.sub(r"\[IMAGE\].*?\[/IMAGE\]", '', res1)
- if len(result) > 0:
- return res2
- else:
- return ""
-
- def search_for_slide_type(text):
- tags = ["[L_TS]", "[L_CS]", "[L_IS]", "[L_THS]"]
- found_text = next((s for s in tags if s in text), None)
- return found_text
-
- def parse_response(reply):
- list_of_slides = reply.split("[SLIDEBREAK]")
- for slide in list_of_slides:
- slide_type = search_for_slide_type(slide)
- if slide_type == "[L_TS]":
- create_title_slide(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]"),
- find_text_in_between_tags(str(slide), "[SUBTITLE]", "[/SUBTITLE]"))
- elif slide_type == "[L_CS]":
- create_title_and_content_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")),
- "".join(find_text_in_between_tags(str(slide), "[CONTENT]",
- "[/CONTENT]")))
- elif slide_type == "[L_IS]":
- create_title_and_content_and_image_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]",
- "[/TITLE]")),
- "".join(find_text_in_between_tags(str(slide), "[CONTENT]",
- "[/CONTENT]")),
- "".join(find_text_in_between_tags(str(slide), "[IMAGE]",
- "[/IMAGE]")))
- elif slide_type == "[L_THS]":
- create_section_header_slide("".join(find_text_in_between_tags(str(slide), "[TITLE]", "[/TITLE]")))
-
- def find_title():
- return root.slides[0].shapes.title.text
-
- delete_all_slides()
-
- print(response)
-
- parse_response(response['choices'][0]['message']['content'])
-
- name_ = str(uuid4()).replace('-', '')
-
- root.save(f"./{name_}.pptx")
-
- print("done")
-
- dir_path = "./"
- prefix = "prefix_"
-
- for file_name in os.listdir(dir_path):
- if file_name.startswith(prefix):
- file_path = os.path.join(dir_path, file_name)
- if os.path.isfile(file_path):
- os.remove(file_path)
-
- return f"./{name_}.pptx"
-
-
-with gr.Blocks(title="AI Generated Presentation") as demo:
- gr.Markdown("""
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful reproduction of the main features of the New Bing web version, usable from mainland China, compatible with most Microsoft Bing AI features, and deployable on your own server.
-
-
-
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[](https://github.com/weaigc/bingo/blob/main/license)
-
-For issue reports, please visit https://github.com/weaigc/bingo/issues
-
-
-
diff --git a/spaces/silvesterjk/Talking_Yak_STT/README.md b/spaces/silvesterjk/Talking_Yak_STT/README.md
deleted file mode 100644
index 05522aeed7adb34d8ecb4973e742472bddf45c49..0000000000000000000000000000000000000000
--- a/spaces/silvesterjk/Talking_Yak_STT/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: TYSTT
-emoji: 📉
-colorFrom: indigo
-colorTo: purple
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/protein_mpnn_run.py b/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/protein_mpnn_run.py
deleted file mode 100644
index 627b51f4fde61f53cac7b8dd2a3e6f122d536ff8..0000000000000000000000000000000000000000
--- a/spaces/simonduerr/ProteinMPNN/ProteinMPNN/vanilla_proteinmpnn/protein_mpnn_run.py
+++ /dev/null
@@ -1,361 +0,0 @@
-import argparse
-import os.path
-
-def main(args):
-
- import json, time, os, sys, glob
- import shutil
- import warnings
- import numpy as np
- import torch
- from torch import optim
- from torch.utils.data import DataLoader
- from torch.utils.data.dataset import random_split, Subset
- import copy
- import torch.nn as nn
- import torch.nn.functional as F
- import random
- import os.path
- from protein_mpnn_utils import loss_nll, loss_smoothed, gather_edges, gather_nodes, gather_nodes_t, cat_neighbors_nodes, _scores, _S_to_seq, tied_featurize, parse_PDB
- from protein_mpnn_utils import StructureDataset, StructureDatasetPDB, ProteinMPNN
-
-
- hidden_dim = 128
- num_layers = 3
-
-
- if args.path_to_model_weights:
- model_folder_path = args.path_to_model_weights
- if model_folder_path[-1] != '/':
- model_folder_path = model_folder_path + '/'
- else:
- file_path = os.path.realpath(__file__)
- k = file_path.rfind("/")
- model_folder_path = file_path[:k] + '/vanilla_model_weights/'
-
- checkpoint_path = model_folder_path + f'{args.model_name}.pt'
- folder_for_outputs = args.out_folder
-
- NUM_BATCHES = args.num_seq_per_target//args.batch_size
- BATCH_COPIES = args.batch_size
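-    # NUM_BATCHES * BATCH_COPIES sequences are sampled for each target at every temperature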
- temperatures = [float(item) for item in args.sampling_temp.split()]
- omit_AAs_list = args.omit_AAs
- alphabet = 'ACDEFGHIKLMNPQRSTVWYX'
-
- omit_AAs_np = np.array([AA in omit_AAs_list for AA in alphabet]).astype(np.float32)
- device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu")
- if os.path.isfile(args.chain_id_jsonl):
- with open(args.chain_id_jsonl, 'r') as json_file:
- json_list = list(json_file)
- for json_str in json_list:
- chain_id_dict = json.loads(json_str)
- else:
- chain_id_dict = None
- print(40*'-')
- print('chain_id_jsonl is NOT loaded')
-
- if os.path.isfile(args.fixed_positions_jsonl):
- with open(args.fixed_positions_jsonl, 'r') as json_file:
- json_list = list(json_file)
- for json_str in json_list:
- fixed_positions_dict = json.loads(json_str)
- else:
- print(40*'-')
- print('fixed_positions_jsonl is NOT loaded')
- fixed_positions_dict = None
-
-
- if os.path.isfile(args.pssm_jsonl):
- with open(args.pssm_jsonl, 'r') as json_file:
- json_list = list(json_file)
- pssm_dict = {}
- for json_str in json_list:
- pssm_dict.update(json.loads(json_str))
- else:
- print(40*'-')
- print('pssm_jsonl is NOT loaded')
- pssm_dict = None
-
-
- if os.path.isfile(args.omit_AA_jsonl):
- with open(args.omit_AA_jsonl, 'r') as json_file:
- json_list = list(json_file)
- for json_str in json_list:
- omit_AA_dict = json.loads(json_str)
- else:
- print(40*'-')
- print('omit_AA_jsonl is NOT loaded')
- omit_AA_dict = None
-
-
- if os.path.isfile(args.bias_AA_jsonl):
- with open(args.bias_AA_jsonl, 'r') as json_file:
- json_list = list(json_file)
- for json_str in json_list:
- bias_AA_dict = json.loads(json_str)
- else:
- print(40*'-')
- print('bias_AA_jsonl is NOT loaded')
- bias_AA_dict = None
-
-
- if os.path.isfile(args.tied_positions_jsonl):
- with open(args.tied_positions_jsonl, 'r') as json_file:
- json_list = list(json_file)
- for json_str in json_list:
- tied_positions_dict = json.loads(json_str)
- else:
- print(40*'-')
- print('tied_positions_jsonl is NOT loaded')
- tied_positions_dict = None
-
-
- if os.path.isfile(args.bias_by_res_jsonl):
- with open(args.bias_by_res_jsonl, 'r') as json_file:
- json_list = list(json_file)
-
- for json_str in json_list:
- bias_by_res_dict = json.loads(json_str)
- print('bias by residue dictionary is loaded')
- else:
- print(40*'-')
- print('bias by residue dictionary is not loaded, or not provided')
- bias_by_res_dict = None
-
-
-
- print(40*'-')
- bias_AAs_np = np.zeros(len(alphabet))
- if bias_AA_dict:
- for n, AA in enumerate(alphabet):
- if AA in list(bias_AA_dict.keys()):
- bias_AAs_np[n] = bias_AA_dict[AA]
-
- if args.pdb_path:
- pdb_dict_list = parse_PDB(args.pdb_path)
- dataset_valid = StructureDatasetPDB(pdb_dict_list, truncate=None, max_length=args.max_length)
- all_chain_list = [item[-1:] for item in list(pdb_dict_list[0]) if item[:9]=='seq_chain'] #['A','B', 'C',...]
- if args.pdb_path_chains:
- designed_chain_list = [str(item) for item in args.pdb_path_chains.split()]
- else:
- designed_chain_list = all_chain_list
- fixed_chain_list = [letter for letter in all_chain_list if letter not in designed_chain_list]
- chain_id_dict = {}
- chain_id_dict[pdb_dict_list[0]['name']]= (designed_chain_list, fixed_chain_list)
- else:
- dataset_valid = StructureDataset(args.jsonl_path, truncate=None, max_length=args.max_length)
-
- print(40*'-')
- checkpoint = torch.load(checkpoint_path, map_location=device)
- print('Number of edges:', checkpoint['num_edges'])
- noise_level_print = checkpoint['noise_level']
- print(f'Training noise level: {noise_level_print}A')
- model = ProteinMPNN(num_letters=21, node_features=hidden_dim, edge_features=hidden_dim, hidden_dim=hidden_dim, num_encoder_layers=num_layers, num_decoder_layers=num_layers, augment_eps=args.backbone_noise, k_neighbors=checkpoint['num_edges'])
- model.to(device)
- model.load_state_dict(checkpoint['model_state_dict'])
- model.eval()
-
- # Build paths for experiment
- base_folder = folder_for_outputs
- if base_folder[-1] != '/':
- base_folder = base_folder + '/'
- if not os.path.exists(base_folder):
- os.makedirs(base_folder)
-
- if not os.path.exists(base_folder + 'seqs'):
- os.makedirs(base_folder + 'seqs')
-
- if args.save_score:
- if not os.path.exists(base_folder + 'scores'):
- os.makedirs(base_folder + 'scores')
-
- if args.score_only:
- if not os.path.exists(base_folder + 'score_only'):
- os.makedirs(base_folder + 'score_only')
-
-
- if args.conditional_probs_only:
- if not os.path.exists(base_folder + 'conditional_probs_only'):
- os.makedirs(base_folder + 'conditional_probs_only')
-
-
- if args.save_probs:
- if not os.path.exists(base_folder + 'probs'):
- os.makedirs(base_folder + 'probs')
-
- # Timing
- start_time = time.time()
- total_residues = 0
- protein_list = []
- total_step = 0
- # Validation epoch
- with torch.no_grad():
- test_sum, test_weights = 0., 0.
- #print('Generating sequences...')
- for ix, protein in enumerate(dataset_valid):
- score_list = []
- all_probs_list = []
- all_log_probs_list = []
- S_sample_list = []
- batch_clones = [copy.deepcopy(protein) for i in range(BATCH_COPIES)]
- X, S, mask, lengths, chain_M, chain_encoding_all, chain_list_list, visible_list_list, masked_list_list, masked_chain_length_list_list, chain_M_pos, omit_AA_mask, residue_idx, dihedral_mask, tied_pos_list_of_lists_list, pssm_coef, pssm_bias, pssm_log_odds_all, bias_by_res_all, tied_beta = tied_featurize(batch_clones, device, chain_id_dict, fixed_positions_dict, omit_AA_dict, tied_positions_dict, pssm_dict, bias_by_res_dict)
- pssm_log_odds_mask = (pssm_log_odds_all > args.pssm_threshold).float() #1.0 for true, 0.0 for false
- name_ = batch_clones[0]['name']
- if args.score_only:
- structure_sequence_score_file = base_folder + '/score_only/' + batch_clones[0]['name'] + '.npy'
- native_score_list = []
- for j in range(NUM_BATCHES):
- randn_1 = torch.randn(chain_M.shape, device=X.device)
- log_probs = model(X, S, mask, chain_M*chain_M_pos, residue_idx, chain_encoding_all, randn_1)
- mask_for_loss = mask*chain_M*chain_M_pos
- scores = _scores(S, log_probs, mask_for_loss)
- native_score = scores.cpu().data.numpy()
- native_score_list.append(native_score)
- native_score = np.concatenate(native_score_list, 0)
- ns_mean = native_score.mean()
- ns_mean_print = np.format_float_positional(np.float32(ns_mean), unique=False, precision=4)
- ns_std = native_score.std()
- ns_std_print = np.format_float_positional(np.float32(ns_std), unique=False, precision=4)
- ns_sample_size = native_score.shape[0]
- np.save(structure_sequence_score_file, native_score)
- print(f'Score for {name_}, mean: {ns_mean_print}, std: {ns_std_print}, sample size: {ns_sample_size}')
- elif args.conditional_probs_only:
- print(f'Calculating conditional probabilities for {name_}')
- conditional_probs_only_file = base_folder + '/conditional_probs_only/' + batch_clones[0]['name']
- log_conditional_probs_list = []
- for j in range(NUM_BATCHES):
- randn_1 = torch.randn(chain_M.shape, device=X.device)
- log_conditional_probs = model.conditional_probs(X, S, mask, chain_M*chain_M_pos, residue_idx, chain_encoding_all, randn_1, args.conditional_probs_only_backbone)
- log_conditional_probs_list.append(log_conditional_probs.cpu().numpy())
- concat_log_p = np.concatenate(log_conditional_probs_list, 0) #[B, L, 21]
- mask_out = (chain_M*chain_M_pos*mask)[0,].cpu().numpy()
- np.savez(conditional_probs_only_file, log_p=concat_log_p, S=S[0,].cpu().numpy(), mask=mask[0,].cpu().numpy(), design_mask=mask_out)
- else:
- randn_1 = torch.randn(chain_M.shape, device=X.device)
- log_probs = model(X, S, mask, chain_M*chain_M_pos, residue_idx, chain_encoding_all, randn_1)
- mask_for_loss = mask*chain_M*chain_M_pos
- scores = _scores(S, log_probs, mask_for_loss)
- native_score = scores.cpu().data.numpy()
- # Generate some sequences
- ali_file = base_folder + '/seqs/' + batch_clones[0]['name'] + '.fa'
- score_file = base_folder + '/scores/' + batch_clones[0]['name'] + '.npy'
- probs_file = base_folder + '/probs/' + batch_clones[0]['name'] + '.npz'
- print(f'Generating sequences for: {name_}')
- t0 = time.time()
- with open(ali_file, 'w') as f:
- for temp in temperatures:
- for j in range(NUM_BATCHES):
- randn_2 = torch.randn(chain_M.shape, device=X.device)
- if tied_positions_dict == None:
- sample_dict = model.sample(X, randn_2, S, chain_M, chain_encoding_all, residue_idx, mask=mask, temperature=temp, omit_AAs_np=omit_AAs_np, bias_AAs_np=bias_AAs_np, chain_M_pos=chain_M_pos, omit_AA_mask=omit_AA_mask, pssm_coef=pssm_coef, pssm_bias=pssm_bias, pssm_multi=args.pssm_multi, pssm_log_odds_flag=bool(args.pssm_log_odds_flag), pssm_log_odds_mask=pssm_log_odds_mask, pssm_bias_flag=bool(args.pssm_bias_flag), bias_by_res=bias_by_res_all)
- S_sample = sample_dict["S"]
- else:
- sample_dict = model.tied_sample(X, randn_2, S, chain_M, chain_encoding_all, residue_idx, mask=mask, temperature=temp, omit_AAs_np=omit_AAs_np, bias_AAs_np=bias_AAs_np, chain_M_pos=chain_M_pos, omit_AA_mask=omit_AA_mask, pssm_coef=pssm_coef, pssm_bias=pssm_bias, pssm_multi=args.pssm_multi, pssm_log_odds_flag=bool(args.pssm_log_odds_flag), pssm_log_odds_mask=pssm_log_odds_mask, pssm_bias_flag=bool(args.pssm_bias_flag), tied_pos=tied_pos_list_of_lists_list[0], tied_beta=tied_beta, bias_by_res=bias_by_res_all)
- # Compute scores
- S_sample = sample_dict["S"]
- log_probs = model(X, S_sample, mask, chain_M*chain_M_pos, residue_idx, chain_encoding_all, randn_2, use_input_decoding_order=True, decoding_order=sample_dict["decoding_order"])
- mask_for_loss = mask*chain_M*chain_M_pos
- scores = _scores(S_sample, log_probs, mask_for_loss)
- scores = scores.cpu().data.numpy()
- all_probs_list.append(sample_dict["probs"].cpu().data.numpy())
- all_log_probs_list.append(log_probs.cpu().data.numpy())
- S_sample_list.append(S_sample.cpu().data.numpy())
- for b_ix in range(BATCH_COPIES):
- masked_chain_length_list = masked_chain_length_list_list[b_ix]
- masked_list = masked_list_list[b_ix]
- seq_recovery_rate = torch.sum(torch.sum(torch.nn.functional.one_hot(S[b_ix], 21)*torch.nn.functional.one_hot(S_sample[b_ix], 21),axis=-1)*mask_for_loss[b_ix])/torch.sum(mask_for_loss[b_ix])
- seq = _S_to_seq(S_sample[b_ix], chain_M[b_ix])
- score = scores[b_ix]
- score_list.append(score)
- native_seq = _S_to_seq(S[b_ix], chain_M[b_ix])
- if b_ix == 0 and j==0 and temp==temperatures[0]:
- start = 0
- end = 0
- list_of_AAs = []
- for mask_l in masked_chain_length_list:
- end += mask_l
- list_of_AAs.append(native_seq[start:end])
- start = end
- native_seq = "".join(list(np.array(list_of_AAs)[np.argsort(masked_list)]))
- l0 = 0
- for mc_length in list(np.array(masked_chain_length_list)[np.argsort(masked_list)])[:-1]:
- l0 += mc_length
- native_seq = native_seq[:l0] + '/' + native_seq[l0:]
- l0 += 1
- sorted_masked_chain_letters = np.argsort(masked_list_list[0])
- print_masked_chains = [masked_list_list[0][i] for i in sorted_masked_chain_letters]
- sorted_visible_chain_letters = np.argsort(visible_list_list[0])
- print_visible_chains = [visible_list_list[0][i] for i in sorted_visible_chain_letters]
- native_score_print = np.format_float_positional(np.float32(native_score.mean()), unique=False, precision=4)
- f.write('>{}, score={}, fixed_chains={}, designed_chains={}, model_name={}\n{}\n'.format(name_, native_score_print, print_visible_chains, print_masked_chains, args.model_name, native_seq)) #write the native sequence
- start = 0
- end = 0
- list_of_AAs = []
- for mask_l in masked_chain_length_list:
- end += mask_l
- list_of_AAs.append(seq[start:end])
- start = end
-
- seq = "".join(list(np.array(list_of_AAs)[np.argsort(masked_list)]))
- l0 = 0
- for mc_length in list(np.array(masked_chain_length_list)[np.argsort(masked_list)])[:-1]:
- l0 += mc_length
- seq = seq[:l0] + '/' + seq[l0:]
- l0 += 1
- score_print = np.format_float_positional(np.float32(score), unique=False, precision=4)
- seq_rec_print = np.format_float_positional(np.float32(seq_recovery_rate.detach().cpu().numpy()), unique=False, precision=4)
- f.write('>T={}, sample={}, score={}, seq_recovery={}\n{}\n'.format(temp,b_ix,score_print,seq_rec_print,seq)) #write generated sequence
- if args.save_score:
- np.save(score_file, np.array(score_list, np.float32))
- if args.save_probs:
- all_probs_concat = np.concatenate(all_probs_list)
- all_log_probs_concat = np.concatenate(all_log_probs_list)
- S_sample_concat = np.concatenate(S_sample_list)
- np.savez(probs_file, probs=np.array(all_probs_concat, np.float32), log_probs=np.array(all_log_probs_concat, np.float32), S=np.array(S_sample_concat, np.int32), mask=mask_for_loss.cpu().data.numpy(), chain_order=chain_list_list)
- t1 = time.time()
- dt = round(float(t1-t0), 4)
- num_seqs = len(temperatures)*NUM_BATCHES*BATCH_COPIES
- total_length = X.shape[1]
- print(f'{num_seqs} sequences of length {total_length} generated in {dt} seconds')
-
-if __name__ == "__main__":
- argparser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-
- argparser.add_argument("--path_to_model_weights", type=str, default="", help="Path to model weights folder;")
- argparser.add_argument("--model_name", type=str, default="v_48_020", help="ProteinMPNN model name: v_48_002, v_48_010, v_48_020, v_48_030, v_32_002, v_32_010; v_32_020, v_32_030; v_48_010=version with 48 edges 0.10A noise")
-
- argparser.add_argument("--save_score", type=int, default=0, help="0 for False, 1 for True; save score=-log_prob to npy files")
-    argparser.add_argument("--save_probs", type=int, default=0, help="0 for False, 1 for True; save MPNN predicted probabilities per position")
-
- argparser.add_argument("--score_only", type=int, default=0, help="0 for False, 1 for True; score input backbone-sequence pairs")
-
- argparser.add_argument("--conditional_probs_only", type=int, default=0, help="0 for False, 1 for True; output conditional probabilities p(s_i given the rest of the sequence and backbone)")
- argparser.add_argument("--conditional_probs_only_backbone", type=int, default=0, help="0 for False, 1 for True; if true output conditional probabilities p(s_i given backbone)")
-
- argparser.add_argument("--backbone_noise", type=float, default=0.00, help="Standard deviation of Gaussian noise to add to backbone atoms")
- argparser.add_argument("--num_seq_per_target", type=int, default=1, help="Number of sequences to generate per target")
- argparser.add_argument("--batch_size", type=int, default=1, help="Batch size; can set higher for titan, quadro GPUs, reduce this if running out of GPU memory")
- argparser.add_argument("--max_length", type=int, default=20000, help="Max sequence length")
- argparser.add_argument("--sampling_temp", type=str, default="0.1", help="A string of temperatures, 0.2 0.25 0.5. Sampling temperature for amino acids, T=0.0 means taking argmax, T>>1.0 means sample randomly. Suggested values 0.1, 0.15, 0.2, 0.25, 0.3. Higher values will lead to more diversity.")
-
- argparser.add_argument("--out_folder", type=str, help="Path to a folder to output sequences, e.g. /home/out/")
- argparser.add_argument("--pdb_path", type=str, default='', help="Path to a single PDB to be designed")
- argparser.add_argument("--pdb_path_chains", type=str, default='', help="Define which chains need to be designed for a single PDB ")
- argparser.add_argument("--jsonl_path", type=str, help="Path to a folder with parsed pdb into jsonl")
-    argparser.add_argument("--chain_id_jsonl",type=str, default='', help="Path to a dictionary specifying which chains need to be designed and which ones are fixed, if not specified all chains will be designed.")
- argparser.add_argument("--fixed_positions_jsonl", type=str, default='', help="Path to a dictionary with fixed positions")
-    argparser.add_argument("--omit_AAs", type=list, default='X', help="Specify which amino acids should be omitted in the generated sequence, e.g. 'AC' would omit alanine and cysteine.")
-    argparser.add_argument("--bias_AA_jsonl", type=str, default='', help="Path to a dictionary which specifies AA composition bias if needed, e.g. {A: -1.1, F: 0.7} would make A less likely and F more likely.")
-
- argparser.add_argument("--bias_by_res_jsonl", default='', help="Path to dictionary with per position bias.")
-    argparser.add_argument("--omit_AA_jsonl", type=str, default='', help="Path to a dictionary which specifies which amino acids need to be omitted from design at specific chain indices")
- argparser.add_argument("--pssm_jsonl", type=str, default='', help="Path to a dictionary with pssm")
- argparser.add_argument("--pssm_multi", type=float, default=0.0, help="A value between [0.0, 1.0], 0.0 means do not use pssm, 1.0 ignore MPNN predictions")
-    argparser.add_argument("--pssm_threshold", type=float, default=0.0, help="A value between -inf and +inf to restrict per-position AAs")
- argparser.add_argument("--pssm_log_odds_flag", type=int, default=0, help="0 for False, 1 for True")
- argparser.add_argument("--pssm_bias_flag", type=int, default=0, help="0 for False, 1 for True")
-
- argparser.add_argument("--tied_positions_jsonl", type=str, default='', help="Path to a dictionary with tied positions")
-
- args = argparser.parse_args()
- main(args)
diff --git a/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/RiverValleyData.py b/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/RiverValleyData.py
deleted file mode 100644
index 4939cb790d53375f47ca19ccd4864ec8fd1965d7..0000000000000000000000000000000000000000
--- a/spaces/simonraj/ELOralCoachRiverValleyPrimarySchool/RiverValleyData.py
+++ /dev/null
@@ -1,47 +0,0 @@
-#RiverValleyData.py
-strategy_text = {
- "SEP": (
- "SEP strategy - State, Elaborate, Personal experiences",
- (
- "Structure your feedback using the SEP strategy. "
- "Begin with a State (S), where you state your point and answer to the question posed. "
- "Next, Elaborate (E) on your statement, justifying and explaining the reasons for your choice of answer in S. "
- "Where relevant, use the Five-Fingers thinking frame to help consolidate examples from different areas of your life: Self, Home, School, Community, and Nation. "
- "Lastly, share Personal experiences (P) that you have gone through, or experiences you have heard of, to support your answer. "
- "Community and Nation are considered higher progress responses, and therefore are used for stretching the higher progress students."
- )
- )
-}
-
-description = (
- "The image showcases a promotional advertisement banner titled \"Bestsellers for the month of October!\" Underneath the title, there's a sub-caption that reads, \"Do not miss out! Now at a special discount for 3 days only!\""
- "\n\nThe items displayed in the advertisement are:"
- "\n\nBestseller #1 - A board game. It is illustrated as a rectangular game box with question marks on its cover, suggesting the content or theme of the game might be a surprise or mystery."
- "\n\nBestseller #2 - A gaming console. The illustration shows a sleek, flat gaming console device next to a controller with buttons and a joystick."
- "\n\nBestseller #3 - Titled \"Smash It\", this item is depicted as a badminton racket paired with a shuttlecock."
- "\n\nThe products are clearly labeled and emphasized to showcase their popularity and relevance for the month of October. The entire design aims to attract customers and prompt them to avail of the special discount."
-)
-
-questions = [
- f"1. Look at the picture. Would you be interested in playing with these toys and games? Why / Why not? ",
- f"2. Do you spend a lot of time playing with toys and games? Why / Why not? ",
- f"3. What do you think are some benefits of leisure activities? "
-]
-
-def generate_system_message():
- strategy, explanation = strategy_text["SEP"]
-
- system_message = f"""
- As your English Oral Coach, my role is to guide you as you prepare to answer the oral questions. I'll be asking thought-provoking questions to help you develop your own answers.
-
- Now, let's focus on the {strategy}. {explanation}
-
- Along the way, I'll prompt you to clarify your thoughts, explore key terms, challenge your reasoning, and reflect on the discussion.
-
- Once we've thoroughly explored each part of the strategy, I'll assist you in assembling your thoughts into a comprehensive and eloquent response using the insights we've gathered.
-
- Remember, our ultimate goal is to enhance your critical thinking skills and independence. Try to use sophisticated vocabulary and expressions, and refer to the picture where relevant to support your response.
-
- Please ensure your response is in English.
- """
- return system_message
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Latest Version of Little Big City 2 MOD APK 9.4.1 with No Ads.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Latest Version of Little Big City 2 MOD APK 9.4.1 with No Ads.md
deleted file mode 100644
index d30b24ed61bfb77acaab6b38caf7a26c56a33245..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Latest Version of Little Big City 2 MOD APK 9.4.1 with No Ads.md
+++ /dev/null
@@ -1,96 +0,0 @@
-
-
Little Big City 2 Mod APK 9.4.1: Build Your Dream City
-
Do you love city-building games? Do you want to create your own metropolis with unlimited resources and customization options? If yes, then you should try Little Big City 2 Mod APK, a modified version of the popular casual game by Gameloft. In this article, we will tell you everything you need to know about this amazing mod, including its features, benefits, and installation guide. Let's get started!
City-building games are very popular among casual gamers who enjoy designing and managing their own virtual cities. They are fun, relaxing, and creative, allowing you to express your imagination and vision. However, most of these games have some limitations and drawbacks, such as limited resources, slow progress, expensive in-app purchases, and annoying ads. That's why many players look for modded versions of these games that can give them more freedom and enjoyment.
-
What is Little Big City 2?
-
Little Big City 2 is a city-building game developed by Gameloft, one of the leading mobile game developers in the world. It was released in 2016 and has over 10 million downloads on Google Play Store. The game lets you transform a small island into a bustling megacity with your own style and preferences. You can choose from three different development paths: cultural, industrial, or technological. You can also interact with other players and visit their cities, as well as complete various quests and challenges to earn rewards and unlock new features.
-
What is Little Big City 2 Mod APK?
-
Little Big City 2 Mod APK is a modified version of the original game that gives you access to unlimited resources and features that are not available in the official version. With this mod, you can build your dream city without any restrictions or limitations. You can enjoy the game to the fullest without spending any real money or watching any ads. You can also explore all the content and options that the game has to offer without waiting for long loading times or energy refills.
-
Why should you download Little Big City 2 Mod APK?
-
If you are a fan of city-building games, then you should definitely download Little Big City 2 Mod APK for many reasons. Here are some of them:
-
-
You can build your city faster and easier with unlimited money and diamonds.
-
You can customize your city with hundreds of buildings and decorations that are unlocked for free.
-
You can play the game without any interruptions or distractions from ads or pop-ups.
-
You can enjoy the game without worrying about running out of energy or resources.
-
You can experience the game in high-quality graphics and sound effects.
-
-
Features of Little Big City 2 Mod APK
-
Little Big City 2 Mod APK has many amazing features that make it one of the best city-building games on Android. Here are some of them:
-
little big city 2 unlimited money mod apk
-download little big city 2 mod apk latest version
-little big city 2 hack mod apk free download
-how to install little big city 2 mod apk
-little big city 2 mod apk offline
-little big city 2 mod apk android 1
-little big city 2 mod apk revdl
-little big city 2 mod apk unlimited diamonds
-little big city 2 mod apk no root
-little big city 2 mod apk for pc
-little big city 2 mod apk unlimited everything
-little big city 2 mod apk online
-little big city 2 mod apk happymod
-little big city 2 mod apk rexdl
-little big city 2 mod apk obb
-little big city 2 mod apk unlimited cash and coins
-little big city 2 mod apk pure
-little big city 2 mod apk old version
-little big city 2 mod apk update
-little big city 2 mod apk cheat
-little big city 2 mod apk full unlocked
-little big city 2 mod apk vip
-little big city 2 mod apk mega
-little big city 2 mod apk data
-little big city 2 mod apk new version
-little big city 2 premium mod apk
-little big city 2 pro mod apk
-little big city 2 cracked mod apk
-little big city 2 unlimited gems mod apk
-little big city 2 hack tool mod apk
-little big city 2 original mod apk
-little big city 2 best mod apk
-little big city 2 super mod apk
-little big city 2 extreme mod apk
-little big city 2 ultimate mod apk
-little big city 2 deluxe mod apk
-little big city 2 gold mod apk
-little big city 2 plus mod apk
-little big city 2 max mod apk
-little big city 2 final mod apk
-
Unlimited money
-
Money is the main currency in the game that you need to buy buildings, decorations, upgrades, and other items. With Little Big City 2 Mod APK, you will have unlimited money in your account that you can use as much as you want. You don't have to worry about saving up or earning money through quests or activities. You can buy anything you like and create your own unique city.
-
Unlimited diamonds
-
Diamonds are the premium currency in the game that you can use to speed up the construction process, buy special buildings, or exchange for more money. With Little Big City 2 Mod APK, you will have unlimited diamonds in your account that you can use as much as you want. You don't have to wait for hours or days for your buildings to be completed or spend real money to buy more diamonds. You can enjoy the game at your own pace and convenience.
-
Unlimited energy
-
Energy is the resource that you need to perform various actions and tasks in the game, such as expanding your land, collecting resources, or completing quests. With Little Big City 2 Mod APK, you will have unlimited energy in your account that you can use as much as you want. You don't have to wait for your energy to refill or watch ads to get more energy. You can play the game without any limitations or restrictions.
-
Unlocked all buildings and decorations
-
The game offers a wide range of buildings and decorations that you can use to customize your city and make it more attractive and functional. However, some of these items are locked and require you to reach a certain level, complete a certain quest, or pay a certain amount of money or diamonds to unlock them. With Little Big City 2 Mod APK, you will have all the buildings and decorations unlocked for free. You can access them from the store and place them anywhere you want. You can also upgrade them to improve their appearance and performance.
-
No ads
-
The official version of the game contains ads that can interrupt your gameplay and annoy you. Sometimes, you have to watch ads to get more energy or rewards. With Little Big City 2 Mod APK, you will not see any ads in the game. You can play the game without any distractions or interruptions. You can also save your data and battery life by not loading any ads.
-
How to download and install Little Big City 2 Mod APK?
-
If you are interested in downloading and installing Little Big City 2 Mod APK, you can follow these simple steps:
-
Step 1: Download the APK file from a trusted source
-
The first step is to download the APK file of Little Big City 2 Mod APK from a reliable and secure source. You can use the link below to download the latest version of the mod (9.4.1) that is compatible with most Android devices.
-
Step 2: Enable unknown sources on your device
-
The next step is to enable unknown sources on your device so that you can install the APK file. To do this, go to your device settings, then Security, then Unknown sources, and toggle it on. This allows you to install apps from sources other than the Google Play Store.
-
Step 3: Install the APK file and launch the game
-
The final step is to install the APK file and launch the game. To do this, locate the downloaded file in your file manager and tap on it. Follow the instructions on the screen to complete the installation process. Once done, open the game and enjoy building your dream city with Little Big City 2 Mod APK.
-
Conclusion
-
Little Big City 2 Mod APK is a great mod for city-building lovers who want to have more fun and freedom in their gameplay. It gives you unlimited money, diamonds, energy, and access to all the buildings and decorations in the game. It also removes all the ads and improves the graphics and sound quality of the game. It is easy to download and install, and it works on most Android devices. If you are looking for a way to enhance your gaming experience with Little Big City 2, then you should definitely try this mod.
-
FAQs
-
Here are some frequently asked questions about Little Big City 2 Mod APK:
-
Is Little Big City 2 Mod APK safe?
-
Yes, Little Big City 2 Mod APK is safe to use as long as you download it from a trusted source like ours. We have tested it on various devices and found no viruses or malware in it. However, we recommend that you always scan any file before installing it on your device.
-
Does Little Big City 2 Mod APK require root access?
-
No, Little Big City 2 Mod APK does not require root access to work on your device. You can install it and play it without rooting your device. However, if you have a rooted device, you can still use the mod without any problems.
-
Can I play Little Big City 2 Mod APK online with other players?
-
Yes, you can play Little Big City 2 Mod APK online with other players who are using the same mod or the official version of the game. You can visit their cities, chat with them, and exchange gifts with them. However, you should be careful not to abuse the mod features or cheat in the game, as this may result in a ban from the game servers.
-
Can I update Little Big City 2 Mod APK to the latest version?
-
Yes, you can update Little Big City 2 Mod APK to the latest version whenever there is a new update available. However, you should always download the updated mod from the same source that you downloaded the original mod from. You should also backup your game data before updating the mod, as some updates may cause data loss or compatibility issues.
-
Can I use Little Big City 2 Mod APK on iOS devices?
-
No, Little Big City 2 Mod APK is only compatible with Android devices. It cannot be used on iOS devices, such as iPhones or iPads. If you want to play Little Big City 2 on your iOS device, you will have to download the official version of the game from the App Store.
-
-
\ No newline at end of file
diff --git a/spaces/skhanuja/zeno-winoground/Dockerfile b/spaces/skhanuja/zeno-winoground/Dockerfile
deleted file mode 100644
index b5a55ae3b5928ddc6dca732a5adb4b758c5e1512..0000000000000000000000000000000000000000
--- a/spaces/skhanuja/zeno-winoground/Dockerfile
+++ /dev/null
@@ -1,22 +0,0 @@
-# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
-# you will also find guides on how best to write your Dockerfile
-
-FROM python:3.8
-
-RUN useradd -m -u 1000 user
-USER user
-# Set home to the user's home directory
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-WORKDIR $HOME/app
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-ADD --chown=user ./.zeno_cache $HOME/app/.zeno_cache
-RUN chown user:user -R $HOME/app
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-
-CMD ["zeno", "config.toml"]
\ No newline at end of file
diff --git a/spaces/sklearn-docs/Precision-Recall/app.py b/spaces/sklearn-docs/Precision-Recall/app.py
deleted file mode 100644
index 57f09faf41b5e6ac2745795a0e781e3770175c5c..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/Precision-Recall/app.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import numpy as np
-import gradio as gr
-from sklearn.svm import LinearSVC
-from sklearn.datasets import load_iris
-from sklearn.pipeline import make_pipeline
-from sklearn.multiclass import OneVsRestClassifier
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import label_binarize, StandardScaler
-
-import utils
-
-
-def app_fn(n_random_features: int, test_size: float, random_state_val: int):
- X, y = load_iris(return_X_y=True)
-
- # Add noisy features
- random_state = np.random.RandomState(random_state_val)
- n_samples, n_features = X.shape
- X = np.concatenate([X, random_state.randn(n_samples, n_random_features)], axis=1)
-
- # Solving Binary Problem
- X_train, X_test, y_train, y_test = train_test_split(
- X[y < 2], y[y < 2], test_size=test_size, random_state=random_state
- )
-
- clf_bin = make_pipeline(StandardScaler(), LinearSVC(random_state=random_state))
- clf_bin.fit(X_train, y_train)
-
- fig_bin = utils.plot_binary_pr_curve(clf_bin, X_test, y_test)
-
- # Solving Multi-Label Problem
- Y = label_binarize(y, classes=[0, 1, 2])
- X_train_multi, X_test_multi, Y_train, Y_test = train_test_split(
- X, Y, test_size=test_size, random_state=random_state
- )
-
- clf = OneVsRestClassifier(
- make_pipeline(StandardScaler(), LinearSVC(random_state=random_state))
- )
- clf.fit(X_train_multi, Y_train)
-
- fig_multi = utils.plot_multi_label_pr_curve(clf, X_test_multi, Y_test)
-
- return fig_bin, fig_multi
-
-
-title = "Precision-Recall Curves"
-with gr.Blocks(title=title) as demo:
- gr.Markdown(f"# {title}")
- gr.Markdown(
- """
- This demo shows the precision-recall curves on the Iris dataset \
- using a Linear SVM classifier + StandardScaler. \
- Noise is added to the dataset to make the problem more challenging. \
- The dataset is split into train and test sets. \
- The model is trained on the train set and evaluated on the test set. \
- Two separate problems are solved:
-
- - Binary classification: class 0 vs class 1
- - Multi-label classification: class 0 vs class 1 vs class 2
-
- See the scikit-learn example [here](https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html#sphx-glr-auto-examples-model-selection-plot-precision-recall-py).
- """
- )
-
- with gr.Row():
-        n_random_features = gr.inputs.Slider(0, 1000, 50, 800, label="Number of Random Features")
-        test_size = gr.inputs.Slider(0.1, 0.9, 0.01, 0.5, label="Test Size")
-        random_state_val = gr.inputs.Slider(0, 100, 5, 0, label="Random State")
-
-
- with gr.Row():
- fig_bin = gr.Plot(label="Binary PR Curve")
- fig_multi = gr.Plot(label="Multi-Label PR Curve")
-
- n_random_features.change(fn=app_fn, inputs=[n_random_features, test_size, random_state_val], outputs=[fig_bin, fig_multi])
- test_size.change(fn=app_fn, inputs=[n_random_features, test_size, random_state_val], outputs=[fig_bin, fig_multi])
- random_state_val.change(fn=app_fn, inputs=[n_random_features, test_size, random_state_val], outputs=[fig_bin, fig_multi])
-
- demo.load(fn=app_fn, inputs=[n_random_features, test_size, random_state_val], outputs=[fig_bin, fig_multi])
-
-demo.launch()
-
-
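The `utils` module imported by this Space is not included in the diff above. As a rough sketch only (the helper names `plot_binary_pr_curve` and `plot_multi_label_pr_curve` are taken from the calls above; the bodies below are an assumption modeled on the scikit-learn precision-recall example the demo links to), the two plotting helpers might look like this:

```python
# Hypothetical utils.py -- a sketch, not the Space's actual helpers.
import matplotlib.pyplot as plt
from sklearn.metrics import (
    PrecisionRecallDisplay,
    average_precision_score,
    precision_recall_curve,
)


def plot_binary_pr_curve(clf, X_test, y_test):
    """Precision-recall curve for the binary (class 0 vs class 1) problem."""
    fig, ax = plt.subplots()
    PrecisionRecallDisplay.from_estimator(clf, X_test, y_test, name="LinearSVC", ax=ax)
    ax.set_title("2-class Precision-Recall curve")
    return fig


def plot_multi_label_pr_curve(clf, X_test, Y_test):
    """One precision-recall curve per class for the multi-label problem."""
    y_score = clf.decision_function(X_test)
    fig, ax = plt.subplots()
    for i in range(Y_test.shape[1]):
        precision, recall, _ = precision_recall_curve(Y_test[:, i], y_score[:, i])
        ap = average_precision_score(Y_test[:, i], y_score[:, i])
        ax.plot(recall, precision, label=f"class {i} (AP = {ap:.2f})")
    ax.set_xlabel("Recall")
    ax.set_ylabel("Precision")
    ax.set_title("Per-class Precision-Recall curves")
    ax.legend()
    return fig
```

Both helpers return a Matplotlib figure, which is what the `gr.Plot` outputs in the Blocks layout above expect.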
diff --git a/spaces/sneedium/dvatch_captcha_sneedium/app.py b/spaces/sneedium/dvatch_captcha_sneedium/app.py
deleted file mode 100644
index 82051caa07fab17ea6b7569a793b4203d092d311..0000000000000000000000000000000000000000
--- a/spaces/sneedium/dvatch_captcha_sneedium/app.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# import requests
-# res = requests.get("https://seyarabata.com/6370e77fec965")
-# with open("last.ckpt", 'wb') as f:
-# f.write(res.content)
-import os
-os.system('curl -L -o last.ckpt "https://seyarabata.com/6370e77fec965"')  # -o writes to last.ckpt (-O would use the remote file name instead)
-
-import gradio as gr
-import torch
-from PIL import Image
-from strhub.data.module import SceneTextDataModule
-# from strhub.models.utils import load_from_checkpoint, parse_model_args
-
-# parseq = torch.load('tensor.pt', map_location=torch.device('cpu')).eval()
-from strhub.models.parseq.system import PARSeq as ModelClass
-parseq = ModelClass.load_from_checkpoint("last.ckpt").eval()
-
-img_transform = SceneTextDataModule.get_transform(parseq.hparams.img_size)
-
-def captcha_solver(img):
- img = img.convert('RGB')
- img = img_transform(img).unsqueeze(0)
-
- logits = parseq(img)
-
-    # Greedy decoding: take the most likely token at each position
- pred = logits.softmax(-1)
- label, confidence = parseq.tokenizer.decode(pred)
- return label[0]
-
-demo = gr.Interface(fn=captcha_solver, inputs=gr.inputs.Image(type="pil"), outputs=gr.outputs.Textbox())
-demo.launch()
\ No newline at end of file
diff --git a/spaces/society-ethics/about/README.md b/spaces/society-ethics/about/README.md
deleted file mode 100644
index f77fed313cddff08feadffe19d0c9cc0fb69d830..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/about/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Ethics & Society at Hugging Face
-emoji: 🧐
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: true
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/srijitpanja/aip/mic--git-base/git-base/README.md b/spaces/srijitpanja/aip/mic--git-base/git-base/README.md
deleted file mode 100644
index d8f4c06e1aadbc9ec0bd10b66cca6c4f4b4c5472..0000000000000000000000000000000000000000
--- a/spaces/srijitpanja/aip/mic--git-base/git-base/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-language: en
-license: mit
-tags:
-- vision
-- image-to-text
-- image-captioning
-model_name: microsoft/git-base
-pipeline_tag: image-to-text
----
-
-# GIT (GenerativeImage2Text), base-sized
-
-GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).
-
-Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
-
-## Model description
-
-GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs.
-
-The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens.
-
-The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.
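As a toy illustration of that masking scheme (this is not the actual GIT implementation, just a sketch assuming image tokens come first in the sequence), the combined mask could be built like this:

```python
import torch


def git_style_attention_mask(num_image_tokens: int, num_text_tokens: int) -> torch.Tensor:
    """True means "may attend to". Image tokens attend to each other
    bidirectionally; text tokens attend to all image tokens and only to
    earlier (and current) text tokens."""
    n = num_image_tokens + num_text_tokens
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:, :num_image_tokens] = True  # every position sees the image tokens
    mask[num_image_tokens:, num_image_tokens:] = torch.tril(
        torch.ones(num_text_tokens, num_text_tokens, dtype=torch.bool)
    )  # causal mask over the text positions
    return mask


print(git_style_attention_mask(3, 4).int())
```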
-
-
-
-This allows the model to be used for tasks like:
-
-- image and video captioning
-- visual question answering (VQA) on images and videos
-- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
-
-## Intended uses & limitations
-
-You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for
-fine-tuned versions on a task that interests you.
-
-### How to use
-
-For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example).
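A short captioning sketch with 🤗 Transformers, following the same pattern as the linked documentation (the COCO image URL is just an example input):

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Note that this checkpoint is the generatively pre-trained base model, not a caption-fine-tuned variant, so the generated captions may be rough.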
-
-## Training data
-
-From the paper:
-
-> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions
-(CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016),
-Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B
-data following a similar collection procedure in Hu et al. (2021a).
-
-=> however this is for the model referred to as "GIT" in the paper, which is not open-sourced.
-
-This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs.
-
-See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
-
-### Preprocessing
-
-We refer to the original repo regarding details for preprocessing during training.
-
-During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
-
-## Evaluation results
-
-For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tok.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tok.sh
deleted file mode 100644
index ba2ec5a2f3f4794d2e528d3a6574bf05abe1d043..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tok.sh
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) 2019-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-
-set -e
-
-TOKENIZERS_SCRIPTS=tokenizers
-INSTALL_PATH=$TOKENIZERS_SCRIPTS/thirdparty
-
-N_THREADS=8
-
-lg=$1
-
-MOSES=$INSTALL_PATH/mosesdecoder
-REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl
-NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl
-REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl
-TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl
-
-# special tokenization for Romanian
-WMT16_SCRIPTS=$INSTALL_PATH/wmt16-scripts
-
-NORMALIZE_ROMANIAN=$WMT16_SCRIPTS/preprocess/normalise-romanian.py
-REMOVE_DIACRITICS=$WMT16_SCRIPTS/preprocess/remove-diacritics.py
-
-# Burmese
-MY_SEGMENT=$INSTALL_PATH/seg_my.py
-
-# Arabic
-AR_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenizer_ar.sh
-
-# Korean
-KO_SEGMENT=$TOKENIZERS_SCRIPTS/seg_ko.sh
-
-# Japanese
-JA_SEGMENT=$TOKENIZERS_SCRIPTS/seg_ja.sh
-
-# Indic
-IN_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_indic.py
-INDIC_RESOURCES_PATH=$INSTALL_PATH/indic_nlp_resources
-
-# Thai
-THAI_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_thai.py
-
-# Chinese
-CHINESE_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_zh.py
-
-# Chinese
-if [ "$lg" = "zh" ]; then
- cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | python $CHINESE_TOKENIZER
-# Thai
-elif [ "$lg" = "th" ]; then
- cat - | python $THAI_TOKENIZER
-# Japanese
-elif [ "$lg" = "ja" ]; then
- cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | ${JA_SEGMENT}
-# Korean
-elif [ "$lg" = "ko" ]; then
- cat - | $REM_NON_PRINT_CHAR | ${KO_SEGMENT}
-# Romanian
-elif [ "$lg" = "ro" ]; then
- cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | $NORMALIZE_ROMANIAN | $REMOVE_DIACRITICS | $TOKENIZER -no-escape -threads $N_THREADS -l $lg
-# Burmese
-elif [ "$lg" = "my" ]; then
- cat - | python ${MY_SEGMENT}
-# Arabic
-elif [ "$lg" = "ar" ]; then
- cat - | ${AR_TOKENIZER}
-# Indic
-elif [ "$lg" = "ne" ]; then
- cat - | python ${IN_TOKENIZER} $lg
-elif [ "$lg" = "si" ]; then
- cat - | python ${IN_TOKENIZER} $lg
-elif [ "$lg" = "hi" ]; then
- cat - | python ${IN_TOKENIZER} $lg
-# other languages
-else
- cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape -threads $N_THREADS -l $lg
-fi
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/character_token_embedder.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/character_token_embedder.py
deleted file mode 100644
index 181221b61b9f76453b67e3b848b198620dce912c..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/character_token_embedder.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import List, Tuple
-
-import torch
-import torch.nn.functional as F
-from fairseq.data import Dictionary
-from torch import nn
-
-
-CHAR_PAD_IDX = 0
-CHAR_EOS_IDX = 257
-
-
-logger = logging.getLogger(__name__)
-
-
-class CharacterTokenEmbedder(torch.nn.Module):
- def __init__(
- self,
- vocab: Dictionary,
- filters: List[Tuple[int, int]],
- char_embed_dim: int,
- word_embed_dim: int,
- highway_layers: int,
- max_char_len: int = 50,
- char_inputs: bool = False,
- ):
- super(CharacterTokenEmbedder, self).__init__()
-
- self.onnx_trace = False
- self.embedding_dim = word_embed_dim
- self.max_char_len = max_char_len
- self.char_embeddings = nn.Embedding(257, char_embed_dim, padding_idx=0)
- self.symbol_embeddings = nn.Parameter(torch.FloatTensor(2, word_embed_dim))
- self.eos_idx, self.unk_idx = 0, 1
- self.char_inputs = char_inputs
-
- self.convolutions = nn.ModuleList()
- for width, out_c in filters:
- self.convolutions.append(
- nn.Conv1d(char_embed_dim, out_c, kernel_size=width)
- )
-
- last_dim = sum(f[1] for f in filters)
-
- self.highway = Highway(last_dim, highway_layers) if highway_layers > 0 else None
-
- self.projection = nn.Linear(last_dim, word_embed_dim)
-
- assert (
- vocab is not None or char_inputs
- ), "vocab must be set if not using char inputs"
- self.vocab = None
- if vocab is not None:
- self.set_vocab(vocab, max_char_len)
-
- self.reset_parameters()
-
- def prepare_for_onnx_export_(self):
- self.onnx_trace = True
-
- def set_vocab(self, vocab, max_char_len):
- word_to_char = torch.LongTensor(len(vocab), max_char_len)
-
- truncated = 0
- for i in range(len(vocab)):
- if i < vocab.nspecial:
- char_idxs = [0] * max_char_len
- else:
- chars = vocab[i].encode()
- # +1 for padding
- char_idxs = [c + 1 for c in chars] + [0] * (max_char_len - len(chars))
- if len(char_idxs) > max_char_len:
- truncated += 1
- char_idxs = char_idxs[:max_char_len]
- word_to_char[i] = torch.LongTensor(char_idxs)
-
- if truncated > 0:
- logger.info(
- "truncated {} words longer than {} characters".format(
- truncated, max_char_len
- )
- )
-
- self.vocab = vocab
- self.word_to_char = word_to_char
-
- @property
- def padding_idx(self):
- return Dictionary().pad() if self.vocab is None else self.vocab.pad()
-
- def reset_parameters(self):
- nn.init.xavier_normal_(self.char_embeddings.weight)
- nn.init.xavier_normal_(self.symbol_embeddings)
- nn.init.xavier_uniform_(self.projection.weight)
-
- nn.init.constant_(
- self.char_embeddings.weight[self.char_embeddings.padding_idx], 0.0
- )
- nn.init.constant_(self.projection.bias, 0.0)
-
- def forward(
- self,
- input: torch.Tensor,
- ):
- if self.char_inputs:
- chars = input.view(-1, self.max_char_len)
- pads = chars[:, 0].eq(CHAR_PAD_IDX)
- eos = chars[:, 0].eq(CHAR_EOS_IDX)
- if eos.any():
- if self.onnx_trace:
- chars = torch.where(eos.unsqueeze(1), chars.new_zeros(1), chars)
- else:
- chars[eos] = 0
-
- unk = None
- else:
- flat_words = input.view(-1)
- chars = self.word_to_char[flat_words.type_as(self.word_to_char)].type_as(
- input
- )
- pads = flat_words.eq(self.vocab.pad())
- eos = flat_words.eq(self.vocab.eos())
- unk = flat_words.eq(self.vocab.unk())
-
- word_embs = self._convolve(chars)
- if self.onnx_trace:
- if pads.any():
- word_embs = torch.where(
- pads.unsqueeze(1), word_embs.new_zeros(1), word_embs
- )
- if eos.any():
- word_embs = torch.where(
- eos.unsqueeze(1), self.symbol_embeddings[self.eos_idx], word_embs
- )
- if unk is not None and unk.any():
- word_embs = torch.where(
- unk.unsqueeze(1), self.symbol_embeddings[self.unk_idx], word_embs
- )
- else:
- if pads.any():
- word_embs[pads] = 0
- if eos.any():
- word_embs[eos] = self.symbol_embeddings[self.eos_idx]
- if unk is not None and unk.any():
- word_embs[unk] = self.symbol_embeddings[self.unk_idx]
-
- return word_embs.view(input.size()[:2] + (-1,))
-
- def _convolve(
- self,
- char_idxs: torch.Tensor,
- ):
- char_embs = self.char_embeddings(char_idxs)
- char_embs = char_embs.transpose(1, 2) # BTC -> BCT
-
- conv_result = []
-
- for conv in self.convolutions:
- x = conv(char_embs)
- x, _ = torch.max(x, -1)
- x = F.relu(x)
- conv_result.append(x)
-
- x = torch.cat(conv_result, dim=-1)
-
- if self.highway is not None:
- x = self.highway(x)
- x = self.projection(x)
-
- return x
-
-
-class Highway(torch.nn.Module):
- """
-    A `Highway layer <https://arxiv.org/abs/1505.00387>`_.
- Adopted from the AllenNLP implementation.
- """
-
- def __init__(self, input_dim: int, num_layers: int = 1):
- super(Highway, self).__init__()
- self.input_dim = input_dim
- self.layers = nn.ModuleList(
- [nn.Linear(input_dim, input_dim * 2) for _ in range(num_layers)]
- )
- self.activation = nn.ReLU()
-
- self.reset_parameters()
-
- def reset_parameters(self):
- for layer in self.layers:
- # As per comment in AllenNLP:
- # We should bias the highway layer to just carry its input forward. We do that by
- # setting the bias on `B(x)` to be positive, because that means `g` will be biased to
- # be high, so we will carry the input forward. The bias on `B(x)` is the second half
- # of the bias vector in each Linear layer.
- nn.init.constant_(layer.bias[self.input_dim :], 1)
-
- nn.init.constant_(layer.bias[: self.input_dim], 0)
- nn.init.xavier_normal_(layer.weight)
-
- def forward(self, x: torch.Tensor):
- for layer in self.layers:
- projection = layer(x)
- proj_x, gate = projection.chunk(2, dim=-1)
- proj_x = self.activation(proj_x)
- gate = torch.sigmoid(gate)
- x = gate * x + (gate.new_tensor([1]) - gate) * proj_x
- return x
diff --git a/spaces/stomexserde/gpt4-ui/Examples/