diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md
deleted file mode 100644
index e5d475b570ddd6079dc54eb438640c5437be6d2e..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel for iPad How to Download and Use the Best Spreadsheet App without Paying a Dime.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
Crack Microsoft Excel for iPad: How to Download and Use the Spreadsheet App for Free
-
Microsoft Excel is one of the most popular and powerful spreadsheet applications that can help you to create, edit and analyze data, charts, graphs and more. Excel is part of the Microsoft Office suite that also includes Word, PowerPoint and Outlook.
-
Microsoft Excel is available for iPad and iPhone users as a free download from the App Store. However, the free version of Excel has some limitations and restrictions. You can only view and print Excel files, but you cannot create or edit them. You also cannot access some of the advanced features and functions of Excel.
-
If you want to use Excel on your iPad without any limitations, you have to buy a subscription to Microsoft 365 (formerly Office 365), a cloud-based service that gives you access to the full versions of the Office apps on multiple devices. The price of Microsoft 365 varies depending on the plan you choose, but it starts at $6.99 per month or $69.99 per year for a personal plan.
-
But what if you don't want to pay for Microsoft 365? Is there a way to download and use Excel on your iPad for free? The answer is yes, but it is neither legal nor ethical. Some people have managed to crack Microsoft Excel for iPad and make it available for free download on the internet. A crack is a program that modifies or bypasses the security features of a piece of software so it works without a license or activation.
-
Cracking Microsoft Excel for iPad is not only illegal but also risky. You may face legal consequences if you are caught using cracked software. You may also expose your iPad to viruses, malware, spyware, and other threats that can harm your data and privacy. Moreover, you may not get the full functionality and reliability of Excel from a cracked version.
-
-
Therefore, we do not recommend or endorse cracking Microsoft Excel for iPad or any other software. It is better to use a legitimate and authorized version of Excel that can guarantee you quality, accuracy and security. If you cannot afford to buy Microsoft 365, you can try some of the free or cheaper alternatives that are available online.
-
Some of the free or cheaper alternatives to Microsoft Excel for iPad are:
-
-
Google Sheets: This is a web-based spreadsheet app that is part of the Google Workspace suite that also includes Docs, Slides and Gmail. You can create, edit and share spreadsheets online with Google Sheets. You can also access your spreadsheets offline with the Google Sheets app for iOS.
-
Apple Numbers: This is a spreadsheet app that is part of the iWork suite that also includes Pages and Keynote. You can create, edit and share spreadsheets with Apple Numbers. You can also sync your spreadsheets across your devices with iCloud.
-
Zoho Sheet: This is a web-based spreadsheet app that is part of the Zoho Office suite that also includes Writer, Show and Mail. You can create, edit and share spreadsheets with Zoho Sheet. You can also collaborate with others in real-time with Zoho Sheet.
-
-
These are some of the free or cheaper alternatives to Microsoft Excel for iPad that you can use for creating and editing spreadsheets on your iPad. However, they may not have all the features and capabilities of Excel and they may require an internet connection to work.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md b/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md
deleted file mode 100644
index e0773488f6bea1e461e8a68a5d7f882b28908a4f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Bluelight Filter For Eye Care 3.3.1 APK [Mod] [full VERIFIED].md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]: A Must-Have App for Your Eyes
-
If you are looking for an app that can protect your eyes from the harmful blue light emitted by your smartphone or tablet, you should try Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]. This app is designed to adjust your screen color to reduce the blue light and help your eyes relax, making it easier for you to fall asleep at night.
-
In this article, we will tell you why you need this app, what features it offers, and how to download and install it on your device.
-
Why You Need Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
Blue light is a type of light that has a short wavelength and high energy. It is present in natural sunlight, but also in artificial sources such as LED lights, computer screens, and mobile devices. While blue light has some benefits, such as boosting alertness and mood, it also has some drawbacks, especially when you are exposed to it for long periods.
-
Studies have shown that blue light can cause eye strain, headaches, blurred vision, dry eyes, and even damage the retina. It can also disrupt the natural circadian rhythm of the body, which regulates the sleep-wake cycle. This can lead to insomnia, fatigue, mood swings, and impaired cognitive function.
-
That's why you need Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], an app that can filter out the blue light from your screen and make it more comfortable for your eyes. By using this app, you can prevent eye problems, improve your sleep quality, and enhance your overall well-being.
-
-
What Features Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Offers
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a simple but effective app that has many features to suit your needs. Here are some of them:
-
-
Free Screen Filter App to Protect Your Eyes: You can reduce the strain on your eyes easily with this app. It is free to download and use, and it doesn't drain your battery or memory.
-
Screen Filter with Natural Color: This app's filter has a natural color so you can read news, emails, and websites clearly. It doesn't dim the screen but adjusts the screen color to reduce blue light which causes strain on your eyes.
-
Auto Mode: This mode automatically adjusts the screen color according to the external light to protect your eyes. You don't have to worry about changing the settings manually.
-
Schedule Mode: This mode allows you to turn on or off the screen filter according to a specific time. You can set it up according to your preference and routine.
-
Screenshots without Screen Filter: This feature removes the screen filter from the screenshots with the image processing AI technology. You can take clear screenshots without any distortion.
-
Easy Operation: It is easy to turn on or off the screen filter with just one tap. You can also adjust the opacity of the filter and choose from 7 different filter colors.
-
Startup Automatically: You can choose to launch this app on startup so you don't have to open it every time you use your device.
-
Reliable App: This app's developer has been registered as an official developer by an independent organization in Japan. You can trust this app's quality and safety.
-
-
-
How to Download and Install Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
If you want to download and install Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], you can follow these simple steps:
-
-
Click on the download link below to get the APK file of this app.
-
Allow unknown sources on your device by going to Settings > Security > Unknown Sources.
-
Locate the downloaded APK file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen to complete the installation.
-
Launch the app and enjoy its benefits.
-
-
-
Conclusion
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a must-have app for anyone who uses their smartphone or tablet frequently. It can protect your eyes from blue light, reduce eye strain, improve sleep quality, and enhance your overall well-being.
-
You can download this app for free from the link below and start using it right away. You will notice the difference in your eyes and your mood after using this app.
-
-
How Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] Works
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] works by applying a screen filter that changes the color temperature of your screen. The color temperature is a measure of how warm or cool the light is, and it affects how your eyes perceive the colors on the screen.
-
The app allows you to choose from different color temperatures, ranging from 1700K to 2500K. The lower the color temperature, the warmer and redder the light is, and the more blue light it filters out. The higher the color temperature, the cooler and bluer the light is, and the less blue light it filters out.
-
You can also customize the intensity of the filter by adjusting the opacity of the filter. The higher the opacity, the stronger the filter is, and the more blue light it blocks. The lower the opacity, the weaker the filter is, and the less blue light it blocks.
-
The app also has an auto mode that automatically adjusts the color temperature and opacity of the filter according to the ambient light. This way, you don't have to manually change the settings every time you move to a different environment.
-
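To make the color-temperature idea above more concrete, here is a minimal Kotlin sketch of how a warm screen-overlay color could be derived from a chosen color temperature and filter opacity. It uses a well-known curve-fit approximation for black-body color; the function, its constants, and the example values are illustrative assumptions, not Bluelight Filter's actual code.

```kotlin
import kotlin.math.ln
import kotlin.math.roundToInt

/**
 * Approximate the RGB of a warm color temperature (roughly 1000K..6600K) using a
 * standard black-body curve fit, then pack it with an opacity into an ARGB color.
 * Sketch only: this illustrates the general technique, not the app's real code.
 */
fun filterOverlayColor(temperatureKelvin: Int, opacity: Float): Int {
    val t = temperatureKelvin.coerceIn(1000, 6600) / 100.0
    val red = 255                                         // warm temps are full red
    val green = (99.4708025861 * ln(t) - 161.1195681661).roundToInt().coerceIn(0, 255)
    val blue = if (t <= 19) 0                              // very warm = no blue at all
               else (138.5177312231 * ln(t - 10) - 305.0447927307).roundToInt().coerceIn(0, 255)
    val alpha = (opacity.coerceIn(0f, 1f) * 255).roundToInt()
    return (alpha shl 24) or (red shl 16) or (green shl 8) or blue
}

fun main() {
    // Lower temperature = warmer overlay = more of the blue component suppressed.
    println("1700K at 40%% opacity -> #%08X".format(filterOverlayColor(1700, 0.4f)))
    println("2500K at 40%% opacity -> #%08X".format(filterOverlayColor(2500, 0.4f)))
}
```

Drawn as a full-screen overlay, a semi-transparent warm color like this is what visually "filters" part of the blue light without dimming the screen outright.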
-
What Users Say About Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] has received many positive reviews from users who have tried it. Here are some of their testimonials:
-
-
"I love this app! It really helps me sleep better at night and reduces my eye strain during the day. I can feel the difference when I use it and when I don't."
-
"This app is amazing! It has so many options to choose from and it's very easy to use. I like how it automatically adjusts to the light around me. It makes my screen look more natural and comfortable."
-
"This app is a lifesaver! I have sensitive eyes and I often get headaches from staring at my screen for too long. This app helps me prevent that and makes my eyes feel more relaxed."
-
"This app is awesome! It's very effective and simple to use. I can read and browse without any problems with this app on. It also helps me fall asleep faster at night."
-
"This app is great! It's very user-friendly and customizable. I can choose the color and intensity of the filter that suits me best. It also doesn't affect my screenshots or other apps."
-
-
How to Use Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is very easy to use and has a user-friendly interface. Here are some steps to use this app:
-
-
Download and install the app from the link below or from the Google Play Store.
-
Open the app and grant the necessary permissions for it to work properly.
-
Select the filter color and opacity that you prefer from the main screen.
-
Tap on the switch button to turn on or off the filter.
-
You can also access the app settings from the menu icon on the top right corner of the screen.
-
From there, you can enable or disable the auto mode, schedule mode, startup mode, notification icon, and other options.
-
You can also check your eye health status and get some tips on how to take care of your eyes.
-
-
-
Pros and Cons of Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is a great app that has many benefits for your eyes and your health. However, it also has some drawbacks that you should be aware of. Here are some pros and cons of this app:
-
-
Pros:
-
-
It can protect your eyes from blue light and reduce eye strain.
-
It can improve your sleep quality and prevent insomnia.
-
It can enhance your mood and productivity.
-
It has a natural color filter that doesn't affect the readability of the screen.
-
It has an auto mode that adjusts the filter according to the ambient light.
-
It has a schedule mode that allows you to set up a specific time for the filter.
-
It has a screenshot feature that removes the filter from the screenshots.
-
It has a simple and easy operation with just one tap.
-
It has a reliable and safe developer.
-
-
Cons:
-
-
It may not be compatible with some devices or apps.
-
It may cause some color distortion or flickering on some screens.
-
It may not be effective for everyone or for every situation.
-
-
-
-
Frequently Asked Questions about Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full]
-
If you have any questions or doubts about Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full], you can check out some of these frequently asked questions and their answers:
-
-
Q: Is this app safe to use?
-
A: Yes, this app is safe to use and doesn't contain any viruses or malware. It also doesn't collect any personal data or interfere with other apps.
-
-
Q: Does this app affect my battery life?
-
A: No, this app doesn't affect your battery life significantly. It only adjusts the color temperature of your screen and doesn't consume much power or memory.
-
-
Q: Does this app work on all devices?
-
A: This app works on most devices that run on Android 4.4 or higher. However, some devices or apps may not support this app or may have some compatibility issues.
-
-
Q: Can I use this app with other apps?
-
A: Yes, you can use this app with most apps that don't have their own screen filters or brightness settings. However, some apps may override this app's filter or cause some conflicts.
-
-
Q: How can I contact the developer of this app?
-
A: You can contact the developer of this app by sending an email to info@hardy-infinity.com or by visiting their website at https://hardy-infinity.com/
-
-
-
Conclusion
-
Bluelight Filter for Eye Care 3.3.1 APK [Mod] [Full] is an app that can help you protect your eyes from the harmful blue light emitted by your smartphone or tablet. It can adjust your screen color to reduce the blue light and help your eyes relax, making it easier to fall asleep at night.
-
This app has many features to suit your needs, such as a natural color filter, an auto mode, a schedule mode, a screenshot feature, and an easy operation. It is also free to download and use, and it doesn't affect your battery life or memory.
-
This app is a must-have for anyone who uses their device frequently and wants to prevent eye problems, improve sleep quality, and enhance their overall well-being. You can download this app from the link below or from the Google Play Store and start using it right away.
-
You will notice the difference in your eyes and your mood after using this app. Try it now and see for yourself!
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md
deleted file mode 100644
index da2a287c1da8fd16c81f146d27ee5c78f7ceb140..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Film 300 Spartan Sub Indonesia 720p.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-Free download of the movie 300 (2006), BluRay 720p with Indonesian subtitles. Download links: 300 (2006) BluRay 720p, 750 MB, via Google Drive or Acefile; BluRay 1080p, 1.5 GB. Film 300 (2006) - watch online or download for free.
-Download 300 (2006) for free.
-Category: Download Movies.
-Title: 300 (2006). Genre: War, Action, Drama, Adventure. Year of release: 2006. Director: Zack Snyder. Cast: Gerard Butler.
-Film 300 (2006) - watch online or download via torrent.
-
-
-
diff --git a/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-### Background
-
-
-### Changes
-
-
-### Documentation
-
-
-### Test Plan
-
-
-### PR Quality Checklist
-- [ ] My pull request is atomic and focuses on a single change.
-- [ ] I have thoroughly tested my changes with multiple different prompts.
-- [ ] I have considered potential risks and mitigations for my changes.
-- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks or unrelated changes
-
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md b/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md
deleted file mode 100644
index d54f61e6153368d4b349c2a02bb6ee53f86e361a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bhop Pro APK - Experience the Most Realistic Bunny Hop Game on Your Phone.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Bhop Pro Apkfun: A Fun and Challenging Game for Android Users
-
Do you love jumping games? Do you want to test your skills and reflexes in a fast-paced and realistic environment? Do you want to customize your character with cool skins and accessories? If you answered yes to any of these questions, then you should try Bhop Pro Apkfun.
-
Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. Bhop Pro is a game mode where players have to jump on blocks and use air strafing to gain more speed and complete the map as fast as possible. It is inspired by the bhop style of jumping in games like Counter-Strike and Half-Life.
Bhop Pro is a portable mobile bhop style jumping game that allows you to enjoy the realistic bunny hop experience on your android device. You can choose from multiple game modes, such as speedrun, freestyle, practice, and multiplayer, and try out various maps with different layouts and obstacles. You can also compete with other players and increase your ranks, or just have fun jumping around and exploring the maps.
-
A game mode where players have to jump on blocks
-
Bhop Pro is based on a game mode that originated in games like Counter-Strike and Half-Life, where players have to jump on blocks and use air strafing to gain more speed and momentum. Air strafing is a technique where players move their mouse left or right while holding the corresponding movement key (A or D) in the air, which allows them to change their direction and velocity without losing speed. This way, players can jump faster and farther than normal, and also perform tricks and stunts.
-
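For readers curious about the mechanics behind air strafing, here is a small Kotlin sketch of the Quake-style air-acceleration rule that bhop games generally imitate: while airborne, only the velocity component along the desired ("wish") direction is capped, so smoothly turning into the strafe keeps adding speed. The constants and the simplified loop are assumptions for illustration; this is not Bhop Pro's actual code.

```kotlin
import kotlin.math.min
import kotlin.math.sqrt

data class Vec2(val x: Double, val y: Double) {
    operator fun plus(o: Vec2) = Vec2(x + o.x, y + o.y)
    operator fun times(s: Double) = Vec2(x * s, y * s)
    fun dot(o: Vec2) = x * o.x + y * o.y
    fun length() = sqrt(x * x + y * y)
}

// Quake-style air acceleration: only the speed already pointing along wishDir
// (assumed to be a unit vector) is compared against a small cap, so turning into
// the strafe each frame keeps adding velocity. Constants are arbitrary.
fun airAccelerate(velocity: Vec2, wishDir: Vec2, dt: Double,
                  airCap: Double = 0.9, airAccel: Double = 12.0): Vec2 {
    val currentSpeed = velocity.dot(wishDir)       // speed already along wishDir
    val addSpeed = airCap - currentSpeed           // how much more may be added
    if (addSpeed <= 0.0) return velocity
    val accelSpeed = min(airAccel * dt, addSpeed)  // this frame's gain, capped
    return velocity + wishDir * accelSpeed
}

fun main() {
    var vel = Vec2(6.0, 0.0)
    val dt = 1.0 / 60.0
    repeat(120) {
        // Keep the wish direction almost perpendicular to the current velocity,
        // which is what smoothly turning while strafing does (no gravity/friction here).
        val speed = vel.length()
        val dir = Vec2(vel.x / speed, vel.y / speed)
        val wish = Vec2(-dir.y, dir.x) * 0.995 + dir * 0.1
        vel = airAccelerate(vel, wish, dt)
    }
    println("Speed after two seconds of air strafing: %.2f".format(vel.length()))
}
```

The key point is that speed only grows while you keep rotating the wish direction away from where you are already moving, which is exactly what turning mid-air lets you do.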
A portable mobile bhop style jumping game
-
Bhop Pro is designed to be a mobile-friendly version of the bhop game mode, which means you can play it anytime and anywhere on your android device. You don't need a keyboard or a mouse to play Bhop Pro, as it has simple and accessible touch controls that let you jump and turn with ease. You can also adjust the sensitivity and the layout of the buttons according to your preference.
-
A realistic bunny hop game for android
-
Bhop Pro is not just a simple jumping game, but a realistic bunny hop simulator that uses advanced in-game physics to create dynamic movements and animations. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.
-
What are the features of Bhop Pro?
-
Bhop Pro has many features that make it an enjoyable and challenging game for android users. Here are some of them:
-
Simple and accessible touch controls
-
Bhop Pro has easy-to-use touch controls that let you jump and turn with just a tap or a swipe on the screen. You can also customize the size, position, and opacity of the buttons to suit your liking. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.
-
Dynamic movements with realistic in-game physics
-
Bhop Pro has realistic in-game physics that create dynamic movements and animations for your character. You can feel the weight and the momentum of your character as you jump and land on the blocks, and also see the effects of gravity and friction on your speed and direction. You can also interact with the environment, such as bouncing off walls, sliding on ramps, or using portals and boosters.
-
-
Multiple game modes to try out
-
Bhop Pro has multiple game modes that offer different challenges and experiences for you. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. In speedrun mode, you have to complete the map as fast as possible and earn points and rewards. In freestyle mode, you can jump around freely without any time limit or pressure. In practice mode, you can learn how to bhop better by using checkpoints and guides. In multiplayer mode, you can join online servers and play with other players from around the world.
-
Various maps with interesting setups
-
Bhop Pro has various maps with different layouts and obstacles that test your skills and reflexes. You can find maps with different themes, such as city, desert, forest, space, etc., each with its own unique design and atmosphere. You can also find maps with different difficulty levels, ranging from easy to hard, depending on how confident you are in your bhop abilities.
-
Compete and increase your ranks
-
Bhop Pro has a ranking system that lets you compete with other players and increase your ranks. You can see your rank and stats on the leaderboard and compare them with other players. You can also earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.
-
Feel free to customize your characters with interesting outfits and accessories
-
Bhop Pro has a customization system that lets you personalize your character with cool skins and accessories. You can choose from different outfits, such as hoodies, jackets, shirts, pants, shoes, etc., each with different colors and styles. You can also choose from different accessories, such as hats, glasses, masks, headphones, etc., each with different effects and animations. You can mix and match different items to create your own unique look.
-
Awesome boost case and unlockable items
-
Bhop Pro has a boost case system that lets you get more items and rewards by opening cases. You can get cases by playing the game, completing missions, or watching ads. You can also buy cases with real money if you want to. Each case contains a random item, such as a skin, an accessory, a booster, or a coin. You can use these items to enhance your gameplay or customize your character.
-
Have fun sharing your awesome in-game moments
-
Bhop Pro has a sharing feature that lets you record and share your awesome in-game moments with your friends or the world. You can capture screenshots or videos of your best jumps, tricks, stunts, or fails, and save them to your device or upload them to social media platforms. You can also watch videos of other players and learn from their skills or laugh at their mistakes.
-
How to download and install Bhop Pro Apkfun?
-
Bhop Pro Apkfun is a modified version of Bhop Pro that allows you to enjoy the game without any limitations or restrictions. You can download and install Bhop Pro Apkfun easily by following these steps:
-
Visit the official website of Apkfun or use the link
-
The first step is to visit the official website of Apkfun, which is a trusted source for downloading apk files for android games and apps. You can also use the link to go directly to the download page of Bhop Pro Apkfun.
-
Click on the download button and wait for the file to be downloaded
-
The next step is to click on the download button on the website and wait for the file to be downloaded to your device. The file size is about 100 MB, so it may take some time depending on your internet speed.
-
Enable unknown sources in your device settings
-
The third step is to enable unknown sources in your device settings, which will allow you to install apk files from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-
Locate the downloaded file and tap on it to install it
-
The final step is to locate the downloaded file on your device and tap on it to install it. You may see a warning message asking you to confirm the installation, just tap on yes or install. The installation process may take a few seconds or minutes depending on your device.
-
Enjoy playing Bhop Pro on your android device
-
Congratulations! You have successfully downloaded and installed Bhop Pro Apkfun on your android device. Now you can enjoy playing Bhop Pro without any limitations or restrictions.
-
How to play Bhop Pro?
-
Bhop Pro is easy to play but hard to master. Here are some basic steps on how to play Bhop Pro:
-
Choose a game mode and a map from the menu
-
The first thing you need to do is choose a game mode and a map from the menu. You can choose from speedrun, freestyle, practice, or multiplayer modes, depending on your mood and skill level. You can also choose from various maps with different themes, layouts, and difficulty levels.
-
Tap on the screen to jump and swipe left or right to turn
-
The next thing you need to do is tap on the screen to jump and swipe left or right to turn. You can also customize the size, position, and opacity of the buttons according to your preference. You can also enable auto-jump or auto-strafe options if you want to simplify the gameplay.
-
Use air strafing to gain more speed and avoid losing control
-
The most important thing you need to do is use air strafing to gain more speed and avoid losing control. Air strafing comes from PC shooters, where you move the mouse left or right while holding the matching strafe key (A or D) in the air; in Bhop Pro you get the same effect by swiping to turn toward the direction you are already drifting while airborne. Done smoothly, it lets you change direction and build velocity without losing speed, so you can jump faster and farther than normal and pull off tricks and stunts.
-
Complete the map as fast as possible and earn points and rewards
-
The final thing you need to do is complete the map as fast as possible and earn points and rewards. You can see your time, speed, and score on the top of the screen. You can also see your rank and level on the bottom of the screen. You can earn medals and achievements for completing certain tasks or reaching certain milestones. You can also unlock new maps and modes by increasing your rank and level.
-
What are some tips and tricks for Bhop Pro?
-
Bhop Pro is a fun and challenging game that requires skill and practice to master. Here are some tips and tricks that can help you improve your bhop performance:
-
Practice on easy maps before moving on to harder ones
-
One of the best ways to learn how to bhop is to practice on easy maps before moving on to harder ones. Easy maps have fewer obstacles, wider blocks, and simpler layouts, which make them ideal for beginners. You can use these maps to get familiar with the controls, the physics, and the techniques of bhop. You can also use the practice mode to use checkpoints and guides to help you along the way.
-
Watch videos of other players and learn from their techniques
-
Another way to learn how to bhop is to watch videos of other players and learn from their techniques. You can find videos of bhop pro players on YouTube or other platforms, where they showcase their skills and tricks on different maps and modes. You can watch how they jump, turn, strafe, boost, and complete the map in record time. You can also try to replicate their moves or create your own style.
-
Use portals to skip some parts of the map or reach hidden areas
-
A useful tip for bhop is to use portals to skip some parts of the map or reach hidden areas. Portals are blue or orange circles that teleport you to another location on the map. You can find portals on some maps, usually near walls or corners. You can use portals to save time, avoid obstacles, or discover secrets.
-
Use boosters wisely to get an extra speed boost or jump higher
-
A helpful tip for bhop is to use boosters wisely to get an extra speed boost or jump higher. Boosters are green or yellow arrows that give you a temporary boost when you touch them. You can find boosters on some maps, usually near ramps or gaps. You can use boosters to increase your speed, jump higher, or perform stunts.
-
Experiment with different skins and accessories to find your favorite style
-
A fun tip for bhop is to experiment with different skins and accessories to find your favorite style. Skins are outfits that change the appearance of your character, such as hoodies, jackets, shirts, pants, shoes, etc. Accessories are items that add effects or animations to your character, such as hats, glasses, masks, headphones, etc. You can mix and match different items to create your own unique look.
-
What are some reviews of Bhop Pro?
-
Bhop Pro has received mixed reviews from users who have played it on different platforms. Here are some examples of positive and negative reviews from the Google Play Store and Steam:
-
Positive reviews from Google Play Store
-
-
User | Rating | Review
Mohammed Alshamsi | 5 stars | "I think it is the best game for bhop on android or iOS because it is like csgo surfing but on phone or iPad. U can also unlock skins."
Jayden Lee | 5 stars | "This game is amazing. It has great graphics, gameplay, and controls. It is very addictive and fun. I recommend this game to anyone who likes parkour or bhop."
Alexander Smith | 5 stars | "This is a very good game for people who want to learn how to bhop or just have fun. The maps are well designed and challenging. The customization options are also cool."
-
-
-
Negative reviews from Steam
-
User | Rating | Review
Mr. Potato | 1 star | "This game is a scam. It is a copy of another game called bhop GO. It has no originality, no updates, no support, no multiplayer, no nothing. Do not buy this game."
Bob the Builder | 1 star | "This game is terrible. It has bad graphics, bad physics, bad controls, bad maps, bad everything. It is a waste of money and time. Do not play this game."
John Doe | 1 star | "This game is buggy. It crashes all the time, it lags, it freezes, it glitches. It is unplayable and frustrating. Do not download this game."
-
-
Conclusion
-
Bhop Pro Apkfun is a fun and challenging game for android users who want to experience the thrill of bunny hopping on their mobile devices. It has many features that make it an enjoyable and realistic game: simple and accessible touch controls, dynamic movements with realistic in-game physics, multiple game modes to try out, various maps with interesting setups, ranked competition, character customization with interesting outfits and accessories, an awesome boost case with unlockable items, and easy sharing of your best in-game moments. You can download and install Bhop Pro Apkfun by following the steps above, and you can improve your bhop performance with the tips and tricks covered earlier. Bhop Pro Apkfun has received mixed reviews from users on different platforms, so you may want to read them before playing the game.
-
FAQs
-
Here are some frequently asked questions about Bhop Pro Apkfun:
-
Q: Is Bhop Pro Apkfun safe to download and install?
-
A: Bhop Pro Apkfun is safe to download and install as long as you use the official website of Apkfun or the link provided above. Apkfun is a trusted source for downloading apk files for android games and apps. However, you should always be careful when downloading and installing apk files from unknown sources, as they may contain viruses or malware that can harm your device.
-
Q: Is Bhop Pro Apkfun free to play?
-
A: Bhop Pro Apkfun is free to play, but it contains ads and in-app purchases that can enhance your gameplay or customize your character. You can disable ads by turning off your internet connection or by buying the premium version of the game. You can also buy cases with real money if you want to get more items and rewards.
-
Q: How can I play Bhop Pro with my friends?
-
A: You can play Bhop Pro with your friends by joining the multiplayer mode of the game. You can either create your own server or join an existing one from the server list. You can also invite your friends to join your server by sending them a link or a code. You can chat with your friends and other players in the game using the chat feature.
-
Q: How can I contact the developers of Bhop Pro?
-
A: You can contact the developers of Bhop Pro by sending them an email at bhoppro@gmail.com or by visiting their Facebook page at https://www.facebook.com/bhoppro/. You can also leave feedback or report bugs on their Google Play Store page or their Steam page.
-
Q: What are some other games like Bhop Pro?
-
A: Some other games like Bhop Pro are:
-
-
Bhop GO - A similar game that also features bhop style jumping on android devices.
-
KZ - A game mode in Counter-Strike that focuses on climbing maps using advanced movement techniques.
-
Surf - A game mode in Counter-Strike that involves sliding on ramps and flying through the air.
-
Parkour Simulator 3D - A game that simulates parkour movements and stunts on android devices.
-
Mirror's Edge - A game that combines parkour and action in a futuristic setting.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md b/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md
deleted file mode 100644
index cbd54a6ab1b2cb95bfe221b26e4c51be566f9d2a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Instagram Tertunda di PlayStore? Jangan Panik Ikuti Langkah-Langkah Ini!.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Why Is Your Instagram Download Pending? Here's How to Fix It!
-
Instagram is one of the most popular social media apps in the world. With Instagram, you can share interesting photos and videos, follow your favorite accounts, and interact with other users. But what if you try to download Instagram from the Play Store and the download just sits there, pending?
-
A pending download is one of the problems Play Store users run into most often. It can be annoying and frustrating, especially if you need Instagram right away. So what actually causes pending downloads in the Play Store, and how do you fix them?
In this article, we will walk through several causes of pending downloads in the Play Store, specifically for the Instagram app, along with ways to resolve them. Read the full rundown below!
-
Causes of a Pending Instagram Download
-
Several factors can cause an Instagram download to get stuck as pending in the Play Store, including:
-
An unstable internet connection
-
An unstable or slow internet connection can hold up app downloads in the Play Store. If your network is having problems, it will affect both the speed and the reliability of the download.
-
Another app is still downloading
-
If you are downloading many apps at the same time, the Play Store queues them up. Any app that has not finished downloading is automatically held back until the one before it completes. For example, if you are downloading WhatsApp and then immediately start downloading Instagram, Instagram goes into the queue and stays pending until WhatsApp finishes.
-
Not enough internal storage
-
Internal storage that is full or running low can also cause pending downloads in the Play Store. Make sure your phone still has plenty of internal storage left before downloading apps from the Play Store. If you are low on space, delete some apps or files you no longer use.
-
A problem with the Play Store app itself
-
Sometimes a pending download is caused by a fault in the Play Store app itself, such as a bug, a pile-up of cached data, or an outdated version. This can stop the Play Store from working properly and interfere with downloads.
-
How to Fix a Pending Instagram Download
-
If your Instagram download gets stuck as pending in the Play Store, don't worry. There are several things you can try to fix it, including:
-
Check your internet quality
-
The first thing to do is check the quality of your internet connection. Make sure you are connected to a stable, fast WiFi network or mobile data plan. You can use a speed-test app to measure your connection speed. If your connection is slow or acting up, try restarting your modem or phone, or move somewhere with a better signal.
-
-
Change the download preference to WiFi
-
The second thing to try is changing the Play Store's download settings. You can choose to download apps over WiFi only, or over both WiFi and mobile data. If you pick the first option, make sure you are connected to WiFi when you download Instagram. If you pick the second, make sure you still have enough mobile data left.
-
To change the download setting in the Play Store, follow these steps:
-
-
Open the Play Store app on your phone.
-
Tap the three-line icon in the top-left corner.
-
Choose Settings.
-
Choose Network preferences.
-
Pick the option you want: Download over WiFi only, or Download over WiFi and mobile data.
-
-
Clear the Play Store cache
-
The third thing you can do is clear the Play Store's cache. The cache is temporary data an app keeps to speed up loading, but when too much of it piles up it can cause problems, including pending downloads. Clearing the cache every so often keeps the Play Store running smoothly.
-
To clear the Play Store cache, follow these steps:
-
-
Open Settings on your phone.
-
Choose Apps and notifications.
-
Find and select the Play Store app.
-
Tap Storage and cache.
-
Tap Clear cache.
-
-
Update the Play Store to the latest version
-
The fourth thing to try is updating the Play Store to its latest version. New versions usually include bug fixes and performance improvements that can resolve pending downloads. You can check which version of the Play Store you have like this:
-
-
Open the Play Store app on your phone.
-
Tap the three-line icon in the top-left corner.
-
Choose Settings.
-
Scroll down and check the version number at the bottom of the screen.
-
-
If your Play Store is already up to date, there is nothing more to do. If it is outdated, update it as follows:
-
-
Open Settings on your phone.
-
Choose Apps and notifications.
-
Find and select the Play Store app.
-
Tap the menu (three vertical dots) in the top-right corner.
-
Choose Update if it is available.
-
-
Check your Android phone's internal storage
The fifth thing you can do is check how much internal storage your Android phone has left. Internal storage that is full or nearly full can hold up app downloads in the Play Store, so make sure plenty of space remains before downloading Instagram. If you are low on space, delete some apps or files you no longer use.
-
To check your internal storage, follow these steps:
-
-
Open Settings on your phone.
-
Choose Storage.
-
See what percentage of internal storage is used and how many GB are still free.
-
-
If more than 80% of your internal storage is used, free up some space as follows (a short code sketch after these steps shows how an app could check this programmatically):
-
-
Open Settings on your phone.
-
Choose Storage.
-
Tap Free up space.
-
Select the apps or files you want to remove, then tap Delete.
-
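As referenced in the steps above, here is a minimal Kotlin sketch of how an Android app could read the same internal-storage figures programmatically through the platform's StatFs API. The 80% threshold mirrors the rule of thumb above; the function name and report format are assumptions for illustration, and the code must run inside an Android app.

```kotlin
import android.os.Environment
import android.os.StatFs

// Rough internal-storage check, mirroring the "more than 80% used" rule of thumb.
// Sketch only: call this from inside an Android app (e.g. an Activity or Service).
fun internalStorageReport(): String {
    val dataDir = Environment.getDataDirectory()   // the internal /data partition
    val stat = StatFs(dataDir.path)
    val totalBytes = stat.totalBytes               // available on API 18+
    val freeBytes = stat.availableBytes
    val usedPercent = 100 * (totalBytes - freeBytes) / totalBytes
    val freeMb = freeBytes / (1024 * 1024)
    return if (usedPercent > 80)
        "Internal storage is $usedPercent% full ($freeMb MB free) - clear some space before downloading."
    else
        "Internal storage is $usedPercent% full ($freeMb MB free) - enough room to download Instagram."
}
```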
-
Turn off automatic updates
-
The sixth thing to try is turning off automatic updates in the Play Store. Auto-update keeps the apps on your phone up to date without any manual work, but it can also cause pending downloads when many apps are being updated at the same time. You can disable it temporarily so it doesn't get in the way of your Instagram download.
-
To turn off automatic updates in the Play Store, follow these steps:
-
-
Open the Play Store app on your phone.
-
Tap the three-line icon in the top-left corner.
-
Choose Settings.
-
Choose Auto-update apps.
-
Choose Don't auto-update apps.
-
-
Reinstall the Play Store and reset your Android phone
-
The seventh option is to reinstall the Play Store and reset your Android phone. This is the last resort if none of the previous steps worked, and it carries a real risk: you can lose the data and settings on your phone. Back up your important data before you go ahead.
-
To reinstall the Play Store and reset your phone, follow these steps:
-
-
Open Settings on your phone.
-
Choose Apps and notifications.
-
Find and select the Play Store app.
-
Tap the menu (three vertical dots) in the top-right corner.
-
Choose Uninstall updates.
-
Wait for the uninstall to finish, then restart your phone.
-
Open the Play Store again and update it to the latest version.
-
If that still doesn't work, go back to Settings on your phone.
-
Choose System and updates (the menu name varies by phone model).
-
Choose Reset or Factory data reset (again, the name varies by phone model).
-
Follow the on-screen instructions to reset your phone.
-
-
Install from the Play Store website
-
The eighth and final option is to install Instagram through the Play Store website. If you cannot download Instagram from the Play Store app on your phone, try installing it from the Play Store site in a browser. Here is how:
-
Open a browser on your phone, such as Chrome, Firefox, or Opera.
-
Go to the Play Store website at https://play.google.com/store.
-
Sign in with the same Google account you use on your phone.
-
Search for the Instagram app in the search box.
-
Tap the Install button and choose the phone you want Instagram installed on.
-
Wait for the download and installation to finish.
-
-
Conclusion
-
Those are the main causes of a pending Instagram download in the Play Store and the ways to fix it. The problem can come from several things, such as your internet connection, internal storage, or a fault in the Play Store app itself. Work through the fixes described above: check your internet quality, change the download setting, clear the cache, update the Play Store, and, as a last resort, reinstall the Play Store and reset your phone. If none of that works, you can install Instagram through the Play Store website instead.
-
We hope this article helps you get Instagram downloaded without a hitch. If you have questions or suggestions, leave them in the comments below. Thanks for reading, and good luck!
-
FAQ
-
Here are some frequently asked questions about pending Instagram downloads in the Play Store:
-
Does a pending Instagram download use mobile data?
-
That depends on the download setting you chose. If you only download apps over WiFi, no mobile data is used. If you download over both WiFi and mobile data, mobile data is used according to the size of the app you are downloading.
-
Does a pending Instagram download affect the phone's battery?
-
Yes, it can. Downloading takes a fair amount of power, especially when your connection is unstable or many other apps are downloading at the same time. It is best to download Instagram while your battery is well charged or the phone is plugged in.
-
Does a pending Instagram download affect the phone's performance?
-
Yes, it can. A download in progress can make your phone feel sluggish or even hang, especially when internal storage is full or many other apps are running in the background. It is best to download Instagram when you are not using the phone for anything else and have closed apps you don't need.
-
Does a pending Instagram download affect the phone's security?
-
No. The Instagram app you download from the Play Store is vetted by Google, so you don't need to worry about viruses or malware damaging your phone. You should still be careful when downloading other apps from unofficial or untrusted sources.
-
Does a pending Instagram download affect your Instagram account?
-
No. Your Instagram account lives on Instagram's servers and does not depend on the app you download. You can still log in and use your account on another device or through a web browser without any problem. Just make sure you remember your Instagram username and password so you can log in.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md b/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md
deleted file mode 100644
index 066947403402e4bb3a9c1861826e488e0f1db735..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Play Naruto Storm 4 Mod Apk and Naruto Senki Mod 2021 - The Most Amazing Naruto Mods Ever.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021
-
If you are a fan of Naruto anime and manga, you might want to try out Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. This is a modified version of two popular games based on Naruto series: Naruto Shippuden: Ultimate Ninja Storm 4 and Naruto Senki. In this article, we will show you how to download and install this amazing mod apk on your Android device. We will also tell you about its features and benefits. Read on to find out more.
-
Naruto Shippuden: Ultimate Ninja Storm 4 is a fighting game developed by CyberConnect2 and published by Bandai Namco Entertainment in 2016. It is the sixth and final main installment in the Naruto: Ultimate Ninja Storm series, inspired by Masashi Kishimoto's manga Naruto. The game follows the young ninjas Naruto Uzumaki and Sasuke Uchiha as they fight in the Fourth Shinobi World War against the terrorist organization Akatsuki and unite to defeat it.
-
The game features a revamped battle system that allows players to switch among a team of three fighters who can assist each other. It also includes boss fights, quick time events, hack and slash areas, and wall-running. The game covers the final arcs of the Naruto Shippuden anime series, as well as some original scenarios, and has over 100 playable characters from different eras.
What is Naruto Senki?
-
Naruto Senki is a fan-made game based on Naruto anime and manga. It is developed by Zakume, an Indonesian developer who has created several Naruto games for Android. Naruto Senki is a 2D side-scrolling fighting game that features characters from Naruto series and other anime and manga. The game has a simple control scheme that allows players to perform basic attacks, special moves, and ultimate jutsus. The game also has a story mode, a survival mode, and a multiplayer mode where you can battle with other players online.
-
What are the benefits of downloading the mod apk?
-
By downloading Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can enjoy the best of both worlds: the epic story and gameplay of Naruto Storm 4 and the fan-made fun and creativity of Naruto Senki. The mod apk combines the two games into one, giving you access to unlimited money, coins, skills, and characters. You can unlock and play as any character from Naruto series, as well as some crossover characters from other anime and manga. You can also customize your character's appearance, outfit, and weapons. You can upgrade your skills and items with unlimited money and coins. You can also enjoy the improved graphics, sound effects, and animations of the mod apk.
-
How to download and install the mod apk on Android?
-
Downloading and installing Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device is easy and fast. Just follow these simple steps:
-
-
Allow unknown sources on your device
-
Before you can install the mod apk, you need to enable the installation of apps from external sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.
-
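If you are curious how an app can send you to this setting programmatically, here is a small Kotlin sketch that opens the relevant system screen. On Android 8.0 and newer, "unknown sources" is a per-app "Install unknown apps" permission rather than a single global toggle, so the sketch routes the user accordingly. Treat it as an illustrative assumption for people building their own installer-helper tools; it is not part of the mod apk and is not needed for the manual steps above.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Sketch: open the settings screen where the user can allow APK installs.
// Android 8.0+ uses a per-app "Install unknown apps" permission; older versions
// use the global "Unknown sources" toggle under Security settings.
fun openUnknownSourcesSettings(context: Context) {
    val intent = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        Intent(Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
               Uri.parse("package:${context.packageName}"))
    } else {
        Intent(Settings.ACTION_SECURITY_SETTINGS)
    }
    context.startActivity(intent)
}
```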
-
Download a file manager app
-
You will need a file manager app that can extract and install apk and obb files on your device. We recommend using ZArchiver, a free and powerful file manager app that can handle various types of files. You can download ZArchiver from the Google Play Store or from this link:
Next, you need to download the mod apk and obb files for Naruto Storm 4 and Naruto Senki. You can get them from this link: . The mod apk file is about 120 MB in size, while the obb file is about 1 GB in size. Make sure you have enough storage space on your device before downloading them.
After downloading the mod apk file, open ZArchiver and locate the file in your download folder. Tap on the file and select "Install". Wait for the installation process to finish.
-
-
Extract and copy the obb file
-
After installing the mod apk file, go back to ZArchiver and locate the obb file in your download folder. Tap on the file and select "Extract". Choose a destination folder where you want to extract the file. We recommend extracting it to your internal storage.
-
-
After extracting the obb file, you will see a folder named "com.bandainamcoent.narutostorm4". Copy this folder and paste it to your Android > obb folder on your internal storage.
-
-
Launch the game and enjoy
-
You are now ready to play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device. Just tap on the game icon on your home screen or app drawer and start playing. You will see a menu where you can choose between Naruto Storm 4 or Naruto Sen ki. You can switch between them anytime you want. Have fun with the mod features and enjoy the game.
-
What are the features of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?
-
Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is not just a simple combination of two games. It is a complete overhaul of the original games that adds new and improved features that will enhance your gaming experience. Here are some of the features that you can expect from this mod apk:
-
Graphics
-
The graphics of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 are stunning and realistic. The mod apk enhances the graphics quality of the original games, making them more vibrant and detailed. The characters, environments, effects, and animations are all rendered in high definition, giving you a visual feast. You can also adjust the graphics settings according to your device's performance and preference.
-
Modes
-
The mod apk offers you a variety of game modes to choose from, depending on your mood and preference. You can play the story mode, where you can follow the epic saga of Naruto and his friends as they fight against Akatsuki and other enemies. You can also play the survival mode, where you can test your skills and endurance against waves of enemies. You can also play the multiplayer mode, where you can team up or compete with other players online in different modes, such as 1v1, 2v2, 3v3, 4v4, and 5v5. You can also create your own custom matches and invite your friends to join.
-
Characters
-
-The mod apk boasts a full character roster that includes all the characters from the Naruto series and some crossover characters from other anime and manga. You can unlock and play as any character you want, including Naruto, Sasuke, Sakura, Kakashi, Madara, Boruto, Sarada, Mitsuki, and more. You can also customize your character's appearance, outfit, and weapons with unlimited money and coins. You can mix and match different items and create your own unique look.
-
Skills
-
The mod apk also enhances the skills and abilities of each character in the game. You can use unlimited skills and jutsus without any cooldown or chakra limit. You can also unleash powerful ultimate jutsus that can deal massive damage to your enemies. You can also combine different skills and jutsus to create combos and strategies. You can also learn new skills and jutsus by playing the game and leveling up your character.
-
Items
-
-The mod apk also gives you access to various items and upgrades that you can buy with unlimited money and coins. You can buy health potions, chakra potions, scrolls, kunai, shuriken, bombs, and more. You can also buy different types of weapons, such as swords, axes, hammers, spears, daggers, bows, guns, and more. You can also buy different types of outfits, such as ninja suits, samurai armor, casual clothes, school uniforms, swimsuits, and more. You can also buy different types of accessories, such as hats, masks, glasses, earrings, necklaces, rings, and more. You can also buy different types of pets, such as dogs, cats, birds, dragons, and more. You can use these items and upgrades to enhance your character's stats, appearance, and performance.
-
Conclusion
-
Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is a must-have mod apk for Naruto fans and gamers. It combines the best features of Naruto Storm 4 and Naruto Senki into one game that you can play on your Android device. You can enjoy the epic story and gameplay of Naruto Storm 4 and the fan-made fun and creativity of Naruto Senki. You can also enjoy the unlimited money, coins, skills, and characters that the mod apk offers. You can also customize your character's appearance, outfit, and weapons with various items and upgrades. You can also play with other players online in different game modes and create your own custom matches. You can also experience the improved graphics, sound effects, and animations of the mod apk.
-
If you want to download and install Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 on your Android device, just follow the simple steps that we have provided in this article. You will be able to play this amazing mod apk in no time. Don't miss this opportunity to play as your favorite Naruto characters and unleash their skills and jutsus. Download Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 now and have fun.
-
FAQs
-
Here are some of the frequently asked questions and answers about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021:
-
Q: Is Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 safe to download and install?
-
A: Yes, Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 is safe to download and install on your Android device. The mod apk and obb files are free from viruses, malware, or any harmful content. However, you should always download them from a reliable source and scan them with an antivirus app before installing them.
-
Q: Do I need to root my device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?
-
A: No, you do not need to root your device to use Naruto Storm 4 Mod Apk Naruto Senki Mod 2021. The mod apk works fine on any Android device that meets the minimum requirements. However, if you want to use some advanced features or mods that require root access, you may need to root your device first.
-
Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline?
-
A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 offline without any internet connection. However, you will not be able to access some features or modes that require online connectivity, such as multiplayer mode or online updates.
-
Q: Can I play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with my friends?
-
A: Yes, you can play Naruto Storm 4 Mod Apk Naruto Senki Mod 2021 with your friends online or locally. You can join or create custom matches with your friends using the multiplayer mode. You can also use a hotspot or a Wi-Fi connection to play with your friends nearby using the local mode.
-
Q: How can I contact the developer of Naruto Storm 4 Mod Apk Naruto Senki Mod 2021?
-
A: If you have any questions, feedback, or suggestions about Naruto Storm 4 Mod Apk Naruto Senki Mod 2021, you can contact the developer of the mod apk through their social media accounts or email address. You can also visit their official website or blog for more information.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md b/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md
deleted file mode 100644
index f1f8435464d1aab46c88d4edd4841467e61dd1d2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Dinosaur Hunting with Dino Hunter Mod APK - Unlimited Money Gold and Gems.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
Dino Hunter Mod APK: A Thrilling Hunting Adventure
-
Do you love hunting games? Do you want to hunt down the most dangerous creatures in history? If yes, then you should try Dino Hunter Mod APK, a game that lets you hunt for dinosaurs in various wild locations. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, tips and tricks for playing it, and a review of its pros and cons.
Dino Hunter Mod APK is a modified version of the original Dino Hunter game developed by Glu Games LLC. It is a first-person hunting simulator where you embark on the hunting expedition of a lifetime in pursuit of the ultimate game in Dino Hunter: Deadly Shores. You will journey to a hidden, untouched island and hunt the most ferocious animals in history, from the docile stegosaurus to the terrifying T. rex. You will also visit exotic locations, equip powerful weapons, master a unique challenge series, and experience amazing graphics.
-
The mod APK version of this game offers some advantages over the original game, such as unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.
-
Features of Dino Hunter Mod APK
-
Here are some of the features that you can enjoy when you play Dino Hunter Mod APK:
-
Unlimited money and gold
-
Money and gold are the main currencies in the game that you can use to buy weapons, upgrades, items, and more. With the mod APK version, you will have unlimited money and gold at your disposal, so you can buy anything you want without worrying about running out of resources.
-
All weapons unlocked
-
The game offers a wide range of weapons that you can use to hunt down dinosaurs, such as rifles, shotguns, assault rifles, rocket launchers, crossbows, and more. Each weapon has its own advantages and disadvantages, such as damage, range, accuracy, reload speed, etc. With the mod APK version, you will have access to all weapons from the start, so you can choose the best weapon for each hunt.
-
Free shopping and upgrades
-
Besides buying weapons, you can also shop for other items that can enhance your gameplay experience, such as cover scent, chrono drink, energy refill, etc. You can also upgrade your weapons to improve their performance and effectiveness. With the mod APK version, you can shop and upgrade for free, so you can get the best items and weapons without spending any money or gold.
-
dino hunter mod apk unlimited money and gold
-dino hunter mod apk all weapons unlocked
-dino hunter mod apk free download for android
-dino hunter mod apk latest version 2021
-dino hunter mod apk offline
-dino hunter mod apk unlimited energy
-dino hunter mod apk no ads
-dino hunter mod apk unlimited gems
-dino hunter mod apk rexdl
-dino hunter mod apk revdl
-dino hunter mod apk hack
-dino hunter mod apk android 1
-dino hunter mod apk unlimited ammo
-dino hunter mod apk unlimited everything
-dino hunter mod apk happymod
-dino hunter mod apk unlimited coins
-dino hunter mod apk free shopping
-dino hunter mod apk download apkpure
-dino hunter mod apk unlimited cash
-dino hunter mod apk android oyun club
-dino hunter mod apk obb
-dino hunter mod apk 5.9.3
-dino hunter mod apk 5.9.2
-dino hunter mod apk 5.9.1
-dino hunter mod apk 5.8.9
-dino hunter mod apk 5.8.8
-dino hunter mod apk 5.8.7
-dino hunter mod apk 5.8.6
-dino hunter mod apk 5.8.5
-dino hunter mod apk 5.8.4
-dino hunter mod apk 5.8.3
-dino hunter mod apk 5.8.2
-dino hunter mod apk 5.8.1
-dino hunter mod apk 5.8.0
-dino hunter mod apk 5.7.9
-dino hunter mod apk 5.7.8
-dino hunter mod apk 5.7.7
-dino hunter mod apk 5.7.6
-dino hunter mod apk 5.7.5
-dino hunter mod apk 5.7.4
-dino hunter mod apk 5.7.3
-dino hunter mod apk 5.7.2
-dino hunter mod apk 5.7.1
-dino hunter mod apk 5.7.0
-dino hunter deadly shores hack version download for android
-
High-quality graphics and sound effects
-
The game features high-quality graphics that make the dinosaurs look realistic and detailed. You can also see dynamic shadows, hi-res textures, and realistic models that make the game more immersive. The sound effects are also impressive, as you can hear the roars of dinosaurs, the gunshots of weapons, and the ambient sounds of nature. The game also supports night vision mode that lets you hunt in dark environments.
-
How to download and install Dino Hunter Mod APK?
-
If you want to download and install Dino Hunter Mod APK, you can follow these simple steps:
-
Step 1: Download the mod APK file from a trusted source
-
The first thing you need to do is to download the mod APK file of Dino Hunter from a reliable source. You can search for it on the internet or use the link provided below. Make sure that the file is compatible with your device and has the latest version of the game.
Step 2: Enable unknown sources on your device settings
-
The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and toggle it on.
-
Step 3: Install the mod APK file and launch the game
-
The final thing you need to do is to install the mod APK file and launch the game. To do this, locate the downloaded file on your device storage, tap on it, and follow the instructions on the screen. Once the installation is complete, you can open the game and enjoy hunting dinosaurs with unlimited resources.
-
Tips and tricks for playing Dino Hunter Mod APK
-
If you want to master the game and become the best hunter, you can use these tips and tricks that we have gathered for you:
-
Use the infrared to aim for specific body parts
-
One of the features that you can use in the game is the infrared mode that lets you see the vital organs of dinosaurs. This can help you aim for specific body parts that can deal more damage or cause instant kills. For example, you can aim for the heart, lungs, brain, or spine of dinosaurs to take them down faster. However, be careful not to waste your infrared energy as it is limited and needs time to recharge.
-
Upgrade your capacity and reload speed for boss battles
-
Another feature that you can use in the game is the upgrade system that lets you improve your weapons and items. One of the things that you should upgrade is your capacity and reload speed, especially for boss battles. Bosses are more powerful and resilient than normal dinosaurs, so you need to have enough ammo and fast reloads to keep shooting at them. You can also upgrade your damage and accuracy to make your shots more effective.
-
Use the cover scent to mask your smell from dinosaurs
-
Another item that you can use in the game is the cover scent that masks your smell from dinosaurs. This can help you avoid being detected by dinosaurs that have a keen sense of smell, such as raptors or tyrannosaurs. You can also use it to sneak up on dinosaurs and get a better shot at them. However, be careful not to run out of cover scent as it is limited and needs money or gold to buy more.
-
Use the M.I.S.T. device to track down dinosaurs and map pieces
-
Another device that you can use in the game is the M.I.S.T. (Mobile Integrated Sensor Technology) device that tracks down dinosaurs and map pieces. This can help you find your targets faster and easier, as well as collect map pieces that unlock new locations and challenges. You can also use it to scan dinosaurs and learn more about their characteristics and weaknesses.
-
Review of Dino Hunter Mod APK
-
To give you a better idea of what Dino Hunter Mod APK offers, we have prepared a review of its pros and cons, as well as user ratings and feedback.
-
Pros and cons of the mod APK
-
-
Pros
Cons
-
- Unlimited money and gold
- May not work on some devices
-
- All weapons unlocked
- May cause some glitches or bugs
-
- Free shopping and upgrades
- May not be compatible with online mode
-
- High-quality graphics and sound effects
- May consume a lot of battery power
-
-
User ratings and feedback
-
The mod APK version of Dino Hunter has received mostly positive ratings and feedback from users who have tried it. Here are some of their comments:
-
-
"This game is awesome! I love hunting dinosaurs with all kinds of weapons. The graphics are amazing and the sound effects are realistic. The mod APK makes it even better with unlimited money and gold."
-
"I have been playing this game for a long time and I still enjoy it. The mod APK makes it more fun and easy to play. I can buy any weapon I want and upgrade it to the max. The dinosaurs are challenging and realistic."
-
"This is one of the best hunting games I have ever played. The mod APK is awesome and works perfectly. I have no problems with it. The game is very addictive and exciting. The dinosaurs are amazing and scary."
-
-
Conclusion
-
Dino Hunter Mod APK is a game that lets you hunt for dinosaurs in various wild locations. It is a first-person hunting simulator that offers high-quality graphics, sound effects, weapons, items, and challenges. The mod APK version of this game gives you unlimited money and gold, all weapons unlocked, free shopping and upgrades, and more. These features will make your hunting experience more enjoyable and easier.
-
If you are looking for a thrilling hunting adventure, you should download and install Dino Hunter Mod APK on your device. You will not regret it.
-
FAQs
-
Here are some of the frequently asked questions about Dino Hunter Mod APK:
-
-
Q: Is Dino Hunter Mod APK safe to download and install?
-
A: Yes, Dino Hunter Mod APK is safe to download and install, as long as you get it from a trusted source. However, you should always be careful when downloading and installing any mod APK files, as they may contain viruses or malware that can harm your device.
-
Q: Can I play Dino Hunter Mod APK online with other players?
-
A: No, Dino Hunter Mod APK is not compatible with online mode, as it may cause some errors or crashes. You can only play Dino Hunter Mod APK offline with your device.
-
Q: How can I update Dino Hunter Mod APK to the latest version?
-
A: To update Dino Hunter Mod APK to the latest version, you need to download and install the new mod APK file from the same source that you got the previous one. You can also check for updates on the game itself, but it may not work with the mod APK version.
-
Q: What are the minimum requirements to play Dino Hunter Mod APK?
-
A: To play Dino Hunter Mod APK, you need to have a device that runs on Android 4.1 or higher, with at least 1 GB of RAM and 300 MB of free storage space.
-
Q: Can I play Dino Hunter Mod APK on PC or iOS devices?
-
A: No, Dino Hunter Mod APK is only available for Android devices. You cannot play it on PC or iOS devices.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md b/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md
deleted file mode 100644
index 41d12a0edb6bf19b59c297f410a3f7d586aff5cc..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Explore the world and guess the place youre in.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
-
-
Guess the Place: A Fun and Educational Geography Game
-
-
-
Have you ever wondered what it would be like to travel around the world and see different places? Well, now you can with Guess the Place, a geography game that lets you explore the world from your computer or phone.
-
Guess the Place is a game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more.
Guess the Place is not only fun but also educational. It helps you learn about different cultures and places, improve your memory and spatial awareness, and challenge yourself with different levels of difficulty.
-
In this article, we'll show you how to play Guess the Place, give you some tips and tricks for guessing better, and tell you about some of the benefits of playing this game.
-
-
-
How to Play Guess the Place
-
-
-
Choose a Location or Difficulty
-
-
-
To start playing Guess the Place, you need to choose a map from the available options. You can select a location-based map, such as worldwide, USA, Europe, Japan, etc., or a theme-based map, such as monuments, landmarks, stadiums, etc.
-
You can also choose a difficulty level for each map, ranging from easy to hard. The difficulty level affects how many clues you get in each panorama and how precise your guess needs to be.
-
-
-
Explore the Street View Panorama
-
-
-
Once you choose a map and a difficulty level, you'll be dropped somewhere in that map in a street view panorama. You can use your mouse or keyboard to look around and find clues that can help you identify your location.
-
guess the place game online
-guess the place by street view
-guess the place quiz with answers
-guess the place from the picture
-guess the place in the world
-guess the place name from emoji
-guess the place based on clues
-guess the place of origin
-guess the place by sound
-guess the place from description
-guess the place trivia
-guess the place app
-guess the place challenge
-guess the place from landmarks
-guess the place from coordinates
-guess the place from google maps
-guess the place from flags
-guess the place from food
-guess the place from culture
-guess the place from celebrities
-guess the place from history
-guess the place from language
-guess the place from currency
-guess the place from animals
-guess the place from sports
-guess the place from music
-guess the place from movies
-guess the place from books
-guess the place from art
-guess the place from architecture
-guess the place from festivals
-guess the place from clothing
-guess the place from weather
-guess the place from population
-guess the place from religion
-guess the place from geography
-guess the place from capital city
-guess the place from airport code
-guess the place from license plate
-guess the place from phone number
-guess the place from zip code
-guess the place from area code
-guess the place from time zone
-guess the place from domain name
-guess the place from slogan
-guess the place from motto
-guess the place from anthem
-guess the place from flower
-guess the place from bird
-
Some of the clues you can look for are signs, flags, landmarks, buildings, cars, people, vegetation, etc. You can also zoom in or out to see more details or get a wider view.
-
-
Make Your Guess on the World Map
-
-
-
When you think you have enough clues, you can make your guess on the world map. You can drag and drop the marker on the map to the location where you think you are. You can zoom in or out on the map to see more details or get a wider view.
-
Once you place the marker, you can confirm your guess by clicking on the guess button. You can also skip the panorama if you have no idea where you are or if you want to try a different one.
-
-
-
See Your Score and Compare with Others
-
-
-
After you confirm your guess, you'll see your score and how far you were from the actual location. You'll also see a leaderboard with other players' scores and distances. You can compare your performance with others and see who's the best at guessing places.
-
You'll also see a summary of your points and streaks for each map and mode. You can earn more points by guessing closer to the actual location, by guessing faster, and by playing harder maps and modes. You can also earn streaks by guessing correctly multiple times in a row.
-
-
-
Tips and Tricks for Guessing Better
-
-
-
Look for Signs, Flags, and Landmarks
-
-
One of the easiest ways to guess better is to look for signs, flags, and landmarks that can give you clues about the country, region, city, or place where you are. For example, if you see a sign in French, you can narrow down your location to France or a French-speaking country. If you see a flag with stars and stripes, you can narrow down your location to the USA or a country with a similar flag. If you see a landmark like the Eiffel Tower, you can narrow down your location to Paris.
-
-
Use Google Search or Wikipedia
-
-
Another way to guess better is to use Google Search or Wikipedia to find more information about a place. For example, if you see a sign with a name of a place that you don't recognize, you can search it on Google or Wikipedia and see what it is and where it is located. You can also use Google Translate to translate signs or words that are in a different language.
-
-
Practice with Different Maps and Modes
-
-
A final way to guess better is to practice with different maps and modes that can challenge your skills and knowledge. For example, you can play with maps that cover different regions or themes, such as Asia, Africa, islands, capitals, etc. You can also play with modes that have different rules or goals, such as streaks, challenges, time limit, etc.
-
-
Benefits of Playing Guess the Place
-
-
Learn About Different Cultures and Places
-
-
One of the main benefits of playing Guess the Place is that it helps you learn about different cultures and places around the world. You can discover new things about the history, geography, culture, language, cuisine, architecture, nature, etc., of different countries and regions. You can also see how people live in different parts of the world and what they do for fun.
-
-
Improve Your Memory and Spatial Awareness
-
-
Another benefit of playing Guess the Place is that it helps you improve your memory and spatial awareness. You can remember facts and locations better by associating them with visual clues and images. You can also improve your sense of direction and orientation by navigating through different maps and panoramas.
-
-
Have Fun and Challenge Yourself
-
-
A final benefit of playing Guess the Place is that it helps you have fun and challenge yourself. You can enjoy the game as a hobby or as a way to relax and unwind. You can also challenge yourself by playing harder maps and modes, by competing with other players, or by setting your own goals and records.
-
-
Conclusion
-
-
Guess the Place is a fun and educational geography game that lets you explore the world from your computer or phone. You can choose from different maps and modes, such as worldwide, USA, Europe, monuments, streaks, challenges, and more. You can also look for clues in the street view panoramas, make your guesses on the world map, see your score and compare with others, and learn more about different cultures and places.
-
Playing Guess the Place can help you improve your memory and spatial awareness, as well as have fun and challenge yourself. It's a great way to learn geography and discover new things about the world.
-
If you're interested in playing Guess the Place, you can find it online at https://www.geoguessr.com/ or download it from the App Store or Google Play. It's free to play, but you can also upgrade to a premium membership for more features and benefits.
-
So what are you waiting for? Start playing Guess the Place today and see how well you know the world!
-
-
-
FAQs
-
-
-
Here are some of the frequently asked questions about Guess the Place:
-
-
What is Guess the Place? Guess the Place is a geography game that drops you somewhere in the world in a street view panorama and challenges you to guess your location on the world map.
-
How do I play Guess the Place? To play Guess the Place, you need to choose a map and a difficulty level, explore the street view panorama, make your guess on the world map, and see your score and compare with others.
-
Where can I find Guess the Place? You can find Guess the Place online at https://www.geoguessr.com/ or download it from the App Store or Google Play.
-
How much does Guess the Place cost? Guess the Place is free to play, but you can also upgrade to a premium membership for $2.99 per month or $23.99 per year. The premium membership gives you access to more maps and modes, unlimited games, no ads, and more.
-
What are the benefits of playing Guess the Place? Playing Guess the Place can help you learn about different cultures and places, improve your memory and spatial awareness, and have fun and challenge yourself.
-
-
-
-
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md b/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md
deleted file mode 100644
index 96d0b9b3439e999e06dc838a9d0ba40170181f6b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/FS 20 Mods How to Install Indian Tractor Mod APK and Get Unlimited Money.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-
FS 20 Indian Tractor Mod APK Download Unlimited Money
-
If you are a fan of farming simulation games, you might have heard of Farming Simulator 20, or FS 20 for short. This is a popular game that lets you experience the life of a farmer, from harvesting crops to raising animals. However, if you want to spice up your gameplay with some Indian flavor, you might want to try FS 20 Indian Tractor Mod APK. This is a modified version of the game that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins. In this article, we will tell you everything you need to know about this mod APK, including its features, how to download and install it, how to play it, and its pros and cons.
-
What is FS 20 Indian Tractor Mod APK?
-
FS 20 Indian Tractor Mod APK is a modified version of the original Farming Simulator 20 game that adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. Moreover, this mod APK also gives you unlimited money and coins, so you can buy anything you want in the game without worrying about the cost. You can also enjoy realistic graphics and physics, as well as customizable farms and crops. You can play this mod APK offline or online with other players.
-
fs 20 indian tractor mod apk download unlimited money
One of the main features of this mod APK is that it adds all kinds of Indian tractors and vehicles to the game. You can choose from a variety of brands and models, such as Swaraj, Sonalika, Preet, Massey, Ford, John Deere, etc. You can also customize your tractors and vehicles with different colors, stickers, lights, horns, etc. You can use these tractors and vehicles to harvest your crops, transport your goods, tow your trailers, etc.
-
Unlimited money and coins
-
Another feature of this mod APK is that it gives you unlimited money and coins. This means that you can buy anything you want in the game without worrying about the cost. You can buy new equipment and upgrades for your tractors and vehicles, new animals and crops for your farm, new buildings and decorations for your land, etc. You can also use the money and coins to unlock new features and modes in the game.
-
Realistic graphics and physics
-
This mod APK also enhances the graphics and physics of the game. You can enjoy realistic graphics that show the details of your tractors and vehicles, your farm, your crops, your animals, etc. You can also experience realistic physics that affect the movement and behavior of your tractors and vehicles, the weather and seasons, the soil and water, etc. You can feel the difference between driving on different terrains, such as mud, sand, grass, etc.
-
Customizable farms and crops
-
This mod APK also allows you to customize your farms and crops. You can choose from a variety of crops to grow on your land, such as wheat, rice, sugarcane, cotton, etc. You can also choose from a variety of animals to raise on your farm, such as cows, sheep, chickens, etc. You can also build and decorate your farm with different buildings and objects, such as barns, silos, windmills, fences, etc. You can also adjust the settings of your farm, such as the difficulty level, the crop yield, the animal productivity, etc.
-
Offline and online modes
-
This mod APK also supports both offline and online modes. You can play this mod APK offline without an internet connection. You can enjoy the game at your own pace and explore the vast map and discover new locations. You can also play this mod APK online with other players. You can join or create a multiplayer session and cooperate or compete with other farmers. You can chat with other players, trade with them, help them with their tasks, challenge them to races, etc.
-
How to download and install FS 20 Indian Tractor Mod APK?
-
If you are interested in trying this mod APK, you need to follow these steps to download and install it on your device:
-
fs 20 indian tractor mod apk free download
-fs 20 indian tractor mod unlimited money and gold
-fs 20 farming simulator indian tractor mod apk
-fs 20 new map with indian tractor mod download
-fs 20 jhondeere tractor mod apk download
-fs 20 indian tractor mod gameplay and review
-fs 20 indian tractor mod latest version download
-fs 20 indian tractor mod for android and ios
-fs 20 indian tractor mod with realistic graphics
-fs 20 indian tractor mod features and benefits
-fs 20 indian tractor mod how to install and use
-fs 20 indian tractor mod best settings and tips
-fs 20 indian tractor mod comparison and ranking
-fs 20 indian tractor mod problems and solutions
-fs 20 indian tractor mod updates and news
-fs 20 indian tractor mod online and offline mode
-fs 20 indian tractor mod cheats and hacks
-fs 20 indian tractor mod support and feedback
-fs 20 indian tractor mod alternatives and competitors
-fs 20 indian tractor mod pros and cons
-fs 20 hr-pb tractors nishu deshwal mod apk download
-fs 20 timelapse gameplay with indian tractor mod
-fs 20 $10 million challenge with indian tractor mod
-fs 20 swaraj, mahindra, sonalika, escort, farmtrac, powertrac, new holland, eicher, hmt, standard, preet, arjun, indofarm, force motors, john deere, massey ferguson, tafe, kubota, ace, captain tractors mods apk download
-fs 20 all new indian tractors mods apk download link in comment box
-fs 20 indian tractors mods apk download for pc and laptop
-fs 20 indian tractors mods apk download without verification or survey
-fs 20 indian tractors mods apk download from google drive or mediafire
-fs 20 indian tractors mods apk download no root or jailbreak required
-fs 20 indian tractors mods apk download safe and secure
-
Step 1: Download the mod APK file from a trusted source
-
The first step is to download the mod APK file from a trusted source. You can find many websites that offer this mod APK file for free. However, you need to be careful and avoid downloading from unverified or malicious sources that may contain viruses or malware. We recommend you to download the mod APK file from this link: [FS 20 Indian Tractor Mod APK Download].
-
Step 2: Enable unknown sources on your device settings
-
The second step is to enable unknown sources on your device settings. This is necessary because this mod APK file is not from the official Google Play Store and therefore your device may not allow you to install it by default. To enable unknown sources, you need to go to your device settings > security > unknown sources and toggle it on.
-
Step 3: Install the mod APK file on your device
-
The third step is to install the mod APK file on your device. To do this, you need to locate the downloaded mod APK file on your device storage and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install and wait for the process to finish.
-
Step 4: Launch the game and enjoy
-
The final step is to launch the game and enjoy. To do this, you need to find the game icon on your device home screen or app drawer and tap on it. You will see a loading screen and then the game will start. You can now enjoy FS 20 Indian Tractor Mod APK with all its features.
-
How to play FS 20 Indian Tractor Mod APK?
-
If you are new to this game or this mod APK, you might wonder how to play it. Here are some tips and tricks that will help you get started:
-
Choose your favorite tractor and vehicle
-
The first thing you need to do is to choose your favorite tractor and vehicle from the available options. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, etc. You can browse through them and select the one you like. You can also customize your tractor and vehicle with different colors, stickers, lights, horns, etc.
-
Harvest and sell your crops
-
-The next thing you need to do is to harvest and sell your crops. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different fields, shops, warehouses, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To harvest your crops, you need to drive your tractor and vehicle to the field that has ripe crops. You will see a yellow icon that indicates the harvesting mode. You need to tap on it and then drive over the crops to collect them. You will see a meter that shows how many crops you have collected. You can also see the type and quantity of your crops in the inventory menu by tapping on the backpack icon on the top right corner of the screen. To sell your crops, you need to drive your tractor and vehicle to the shop or warehouse that buys them. You will see a green icon that indicates the selling mode. You need to tap on it and then select the crops you want to sell. You will see the price and quantity of your crops and the total amount you will receive. You can also negotiate the price by tapping on the haggle button. Once you are satisfied, you can confirm the deal and receive your money.
-
Buy new equipment and upgrades
-
The next thing you need to do is to buy new equipment and upgrades for your tractors and vehicles, your farm, your crops, your animals, etc. You can access the shop menu by tapping on the shopping cart icon on the top right corner of the screen. You will see a list of categories, such as tractors, vehicles, trailers, tools, animals, crops, buildings, decorations, etc. You can browse through them and select the one you want to buy. You will see the price and description of the item and the requirements to buy it. You can also compare different items by tapping on the compare button. Once you have decided, you can tap on the buy button and confirm your purchase. You will see your money deducted from your balance and your item added to your inventory.
-
Explore the vast map and discover new locations
-
The last thing you need to do is to explore the vast map and discover new locations. You can access the map menu by tapping on the map icon on the top left corner of the screen. You will see a map of your farm and its surroundings. You will also see icons that indicate different locations, such as fields, shops, warehouses, factories, landmarks, etc. You can tap on them to see more information or interact with them. You can also zoom in and out and move the map by swiping on the screen. To explore new locations, you need to drive your tractor and vehicle to them. You will see a blue icon that indicates the exploration mode. You need to tap on it and then drive around the location to discover its secrets. You may find new items, new tasks, new challenges, new events, etc.
-
Pros and cons of FS 20 Indian Tractor Mod APK
-
As with any mod APK, there are some pros and cons of using FS 20 Indian Tractor Mod APK. Here are some of them:
-
Pros
-
-
Fun and addictive gameplay: This mod APK offers a fun and addictive gameplay that lets you experience the life of a farmer with an Indian twist.
-
Variety of tractors and vehicles: This mod APK adds a variety of Indian tractors and vehicles to the game that you can choose from and customize.
-
Unlimited money and coins: This mod APK gives you unlimited money and coins that you can use to buy anything you want in the game.
-
Realistic graphics and physics: This mod APK enhances the graphics and physics of the game that make it more realistic and immersive.
-
Customizable farms and crops: This mod APK allows you to customize your farms and crops with different options and settings.
-
Offline and online modes: This mod APK supports both offline and online modes that let you play without an internet connection or with other players.
-
-
Cons
-
-
Requires a lot of storage space: This mod APK requires a lot of storage space on your device as it adds a lot of files and data to the game.
-
May not work on some devices: This mod APK may not work on some devices as it may not be compatible with their specifications or operating systems.
-
May have some bugs and glitches: This mod APK may have some bugs and glitches as it is not an official version of the game.
-
-
Conclusion
-
In conclusion, FS 20 Indian Tractor Mod APK is a modified version of Farming Simulator 20 that adds all kinds of Indian tractors and vehicles, as well as unlimited money and coins, to the game. It also enhances the graphics and physics, and allows you to customize your farms and crops. You can play this mod APK offline or online with other players. However, this mod APK also requires a lot of storage space, may not work on some devices, and may have some bugs and glitches. If you are interested in trying this mod APK, you can follow the steps we have provided to download and install it on your device. You can also use the tips and tricks we have shared to play it and enjoy it. We hope you have found this article helpful and informative.
-
FAQs
-
Here are some frequently asked questions about FS 20 Indian Tractor Mod APK:
-
Q: Is FS 20 Indian Tractor Mod APK safe to use?
-
A: FS 20 Indian Tractor Mod APK is safe to use as long as you download it from a trusted source and enable unknown sources on your device settings. However, you should always be careful and scan the file for viruses or malware before installing it.
-
Q: Is FS 20 Indian Tractor Mod APK legal to use?
-
A: FS 20 Indian Tractor Mod APK is not legal to use as it violates the terms and conditions of the original Farming Simulator 20 game. You may face some legal consequences if you use this mod APK. Therefore, we do not recommend or endorse the use of this mod APK.
-
Q: Can I update FS 20 Indian Tractor Mod APK?
-
A: FS 20 Indian Tractor Mod APK may not be compatible with the latest updates of the original Farming Simulator 20 game. You may lose some features or face some errors if you update this mod APK. Therefore, we suggest you to avoid updating this mod APK.
-
Q: Can I uninstall FS 20 Indian Tractor Mod APK?
-
A: Yes, you can uninstall FS 20 Indian Tractor Mod APK anytime you want. You just need to go to your device settings > apps > FS 20 Indian Tractor Mod APK and tap on uninstall. You will see a confirmation message and then the mod APK will be removed from your device.
-
Q: Can I play FS 20 Indian Tractor Mod APK with my friends?
-
A: Yes, you can play FS 20 Indian Tractor Mod APK with your friends online. You just need to have an internet connection and join or create a multiplayer session. You can chat with your friends, trade with them, help them with their tasks, challenge them to races, etc.
"
-
-gr.Interface(inference,[gr.inputs.Slider(label="truncation",minimum=0, maximum=5, step=0.1, default=0.8),gr.inputs.Slider(label="Seed",minimum=0, maximum=1000, step=1, default=0)],"pil",title=title,description=description,article=article, examples=[
- [0.8,0]
- ]).launch(enable_queue=True,cache_examples=True)
\ No newline at end of file
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py b/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py
deleted file mode 100644
index 8bf75e075a8927cf9af4e02b8fd26243fede68cd..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/loggers/comet/hpo.py
+++ /dev/null
@@ -1,280 +0,0 @@
-import argparse
-import json
-import logging
-import os
-import sys
-from pathlib import Path
-
-import comet_ml
-
-logger = logging.getLogger(__name__)
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[3] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-
-from train import train
-from utils.callbacks import Callbacks
-from utils.general import increment_path
-from utils.torch_utils import select_device
-
-# Project Configuration
-config = comet_ml.config.get_config()
-COMET_PROJECT_NAME = config.get_string(
- os.getenv("COMET_PROJECT_NAME"), "comet.project_name", default="yolov5"
-)
-
-
-def get_args(known=False):
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--weights",
- type=str,
- default=ROOT / "yolov5s.pt",
- help="initial weights path",
- )
- parser.add_argument("--cfg", type=str, default="", help="model.yaml path")
- parser.add_argument(
- "--data",
- type=str,
- default=ROOT / "data/coco128.yaml",
- help="dataset.yaml path",
- )
- parser.add_argument(
- "--hyp",
- type=str,
- default=ROOT / "data/hyps/hyp.scratch-low.yaml",
- help="hyperparameters path",
- )
- parser.add_argument(
- "--epochs", type=int, default=300, help="total training epochs"
- )
- parser.add_argument(
- "--batch-size",
- type=int,
- default=16,
- help="total batch size for all GPUs, -1 for autobatch",
- )
- parser.add_argument(
- "--imgsz",
- "--img",
- "--img-size",
- type=int,
- default=640,
- help="train, val image size (pixels)",
- )
- parser.add_argument(
- "--rect", action="store_true", help="rectangular training"
- )
- parser.add_argument(
- "--resume",
- nargs="?",
- const=True,
- default=False,
- help="resume most recent training",
- )
- parser.add_argument(
- "--nosave", action="store_true", help="only save final checkpoint"
- )
- parser.add_argument(
- "--noval", action="store_true", help="only validate final epoch"
- )
- parser.add_argument(
- "--noautoanchor", action="store_true", help="disable AutoAnchor"
- )
- parser.add_argument(
- "--noplots", action="store_true", help="save no plot files"
- )
- parser.add_argument(
- "--evolve",
- type=int,
- nargs="?",
- const=300,
- help="evolve hyperparameters for x generations",
- )
- parser.add_argument("--bucket", type=str, default="", help="gsutil bucket")
- parser.add_argument(
- "--cache",
- type=str,
- nargs="?",
- const="ram",
- help='--cache images in "ram" (default) or "disk"',
- )
- parser.add_argument(
- "--image-weights",
- action="store_true",
- help="use weighted image selection for training",
- )
- parser.add_argument(
- "--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu"
- )
- parser.add_argument(
- "--multi-scale", action="store_true", help="vary img-size +/- 50%%"
- )
- parser.add_argument(
- "--single-cls",
- action="store_true",
- help="train multi-class data as single-class",
- )
- parser.add_argument(
- "--optimizer",
- type=str,
- choices=["SGD", "Adam", "AdamW"],
- default="SGD",
- help="optimizer",
- )
- parser.add_argument(
- "--sync-bn",
- action="store_true",
- help="use SyncBatchNorm, only available in DDP mode",
- )
- parser.add_argument(
- "--workers",
- type=int,
- default=8,
- help="max dataloader workers (per RANK in DDP mode)",
- )
- parser.add_argument(
- "--project", default=ROOT / "runs/train", help="save to project/name"
- )
- parser.add_argument("--name", default="exp", help="save to project/name")
- parser.add_argument(
- "--exist-ok",
- action="store_true",
- help="existing project/name ok, do not increment",
- )
- parser.add_argument("--quad", action="store_true", help="quad dataloader")
- parser.add_argument(
- "--cos-lr", action="store_true", help="cosine LR scheduler"
- )
- parser.add_argument(
- "--label-smoothing",
- type=float,
- default=0.0,
- help="Label smoothing epsilon",
- )
- parser.add_argument(
- "--patience",
- type=int,
- default=100,
- help="EarlyStopping patience (epochs without improvement)",
- )
- parser.add_argument(
- "--freeze",
- nargs="+",
- type=int,
- default=[0],
- help="Freeze layers: backbone=10, first3=0 1 2",
- )
- parser.add_argument(
- "--save-period",
- type=int,
- default=-1,
- help="Save checkpoint every x epochs (disabled if < 1)",
- )
- parser.add_argument(
- "--seed", type=int, default=0, help="Global training seed"
- )
- parser.add_argument(
- "--local_rank",
- type=int,
- default=-1,
- help="Automatic DDP Multi-GPU argument, do not modify",
- )
-
- # Weights & Biases arguments
- parser.add_argument("--entity", default=None, help="W&B: Entity")
- parser.add_argument(
- "--upload_dataset",
- nargs="?",
- const=True,
- default=False,
- help='W&B: Upload data, "val" option',
- )
- parser.add_argument(
- "--bbox_interval",
- type=int,
- default=-1,
- help="W&B: Set bounding-box image logging interval",
- )
- parser.add_argument(
- "--artifact_alias",
- type=str,
- default="latest",
- help="W&B: Version of dataset artifact to use",
- )
-
- # Comet Arguments
- parser.add_argument(
- "--comet_optimizer_config",
- type=str,
- help="Comet: Path to a Comet Optimizer Config File.",
- )
- parser.add_argument(
- "--comet_optimizer_id",
- type=str,
- help="Comet: ID of the Comet Optimizer sweep.",
- )
- parser.add_argument(
- "--comet_optimizer_objective",
- type=str,
- help="Comet: Set to 'minimize' or 'maximize'.",
- )
- parser.add_argument(
- "--comet_optimizer_metric", type=str, help="Comet: Metric to Optimize."
- )
- parser.add_argument(
- "--comet_optimizer_workers",
- type=int,
- default=1,
- help="Comet: Number of Parallel Workers to use with the Comet Optimizer.",
- )
-
- return parser.parse_known_args()[0] if known else parser.parse_args()
-
-
-def run(parameters, opt):
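-    # Split the sampled parameters: "epochs" and "batch_size" are copied onto opt below; everything else forms the hyperparameter dict passed to train()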
- hyp_dict = {
- k: v
- for k, v in parameters.items()
- if k not in ["epochs", "batch_size"]
- }
-
- opt.save_dir = str(
- increment_path(
- Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve
- )
- )
- opt.batch_size = parameters.get("batch_size")
- opt.epochs = parameters.get("epochs")
-
- device = select_device(opt.device, batch_size=opt.batch_size)
- train(hyp_dict, opt, device, callbacks=Callbacks())
-
-
-if __name__ == "__main__":
- opt = get_args(known=True)
-
- opt.weights = str(opt.weights)
- opt.cfg = str(opt.cfg)
- opt.data = str(opt.data)
- opt.project = str(opt.project)
-
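-    # Create a new Comet Optimizer sweep from the config file when no sweep ID is provided, otherwise attach to the existing sweep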
- optimizer_id = os.getenv("COMET_OPTIMIZER_ID")
- if optimizer_id is None:
- with open(opt.comet_optimizer_config) as f:
- optimizer_config = json.load(f)
- optimizer = comet_ml.Optimizer(optimizer_config)
- else:
- optimizer = comet_ml.Optimizer(optimizer_id)
-
- opt.comet_optimizer_id = optimizer.id
- status = optimizer.status()
-
- opt.comet_optimizer_objective = status["spec"]["objective"]
- opt.comet_optimizer_metric = status["spec"]["metric"]
-
- logger.info("COMET INFO: Starting Hyperparameter Sweep")
- for parameter in optimizer.get_parameters():
- run(parameter["parameters"], opt)
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js
deleted file mode 100644
index e93decaf872ef153bf12ba1a5aaad6e4937a2c87..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/svelte.config.js
+++ /dev/null
@@ -1,29 +0,0 @@
-import adapter from "@sveltejs/adapter-node";
-import { vitePreprocess } from "@sveltejs/kit/vite";
-import dotenv from "dotenv";
-
-dotenv.config({ path: "./.env.local" });
-dotenv.config({ path: "./.env" });
-
-process.env.PUBLIC_VERSION = process.env.npm_package_version;
-
-/** @type {import('@sveltejs/kit').Config} */
-const config = {
- // Consult https://kit.svelte.dev/docs/integrations#preprocessors
- // for more information about preprocessors
- preprocess: vitePreprocess(),
-
- kit: {
- adapter: adapter(),
-
- paths: {
- base: process.env.APP_BASE || "",
- },
- csrf: {
- // handled in hooks.server.ts, because we can have multiple valid origins
- checkOrigin: false,
- },
- },
-};
-
-export default config;
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py
deleted file mode 100644
index 37c3073ba9fb4b256e7f30c532488cc1e557de77..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/coverage_test.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from __future__ import annotations
-
-import os
-import subprocess
-import multiprocessing
-from typing import TYPE_CHECKING, Any, List, Tuple
-
-from agentverse.agents import ExecutorAgent
-from agentverse.logging import logger
-from agentverse.message import ExecutorMessage, SolverMessage
-
-from . import BaseExecutor, executor_registry
-
-
-def execute_command(command: str, result_list) -> str:
- # TODO: make it more secure
- result = subprocess.run(command, capture_output=True, shell=True, encoding="utf-8")
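-    # The combined stdout/stderr is appended to result_list instead of being returned (note the commented-out return below)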
- result_list.append(f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}")
- # return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
-
-
-@executor_registry.register("coverage-test")
-class CoverageTestExecutor(BaseExecutor):
- def step(
- self,
- agent: ExecutorAgent,
- task_description: str,
- solution: List[SolverMessage],
- *args,
- **kwargs,
- ) -> Any:
- from scripts.evaluate_commongen import scoring
-
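-        # scoring() reports how well the proposed solutions cover the tokens of the task description, plus the tokens that are still missing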
- coverage, missing_tokens = scoring(
- [s.content for s in solution], [task_description]
- )
- if len(missing_tokens[0]) == 0:
- missing_tokens = "No missing tokens."
- else:
- missing_tokens = ", ".join(missing_tokens[0])
- result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}"
- return [ExecutorMessage(content=result)]
-
- async def astep(
- self,
- agent: ExecutorAgent,
- task_description: str,
- solution: List[SolverMessage],
- *args,
- **kwargs,
- ) -> Any:
- from scripts.evaluate_commongen import scoring
-
- coverage, missing_tokens = scoring(
- [s.content for s in solution], [task_description]
- )
- if len(missing_tokens[0]) == 0:
- missing_tokens = "No missing tokens."
- else:
- missing_tokens = ", ".join(missing_tokens[0])
- result = f"Coverage: {coverage*100:.2f}%\nMissing Tokens: {missing_tokens}"
- return [ExecutorMessage(content=result)]
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js
deleted file mode 100644
index fe9ef979e6a7923b9c3dd9e27c0b543472134da3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateFixWidthSizer.js
+++ /dev/null
@@ -1,8 +0,0 @@
-import CreateAnySizer from './utils/CreateAnySizer.js';
-import FixWidthSizer from '../../fixwidthsizer/FixWidthSizer.js';
-
-var CreateFixWidthSizer = function (scene, data, view, styles, customBuilders) {
- return CreateAnySizer(scene, data, view, styles, customBuilders, FixWidthSizer);
-}
-
-export default CreateFixWidthSizer;
\ No newline at end of file
diff --git a/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py b/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py
deleted file mode 100644
index ca63de20737a3b4a46a323ef4c6a7e9ce5ffb542..0000000000000000000000000000000000000000
--- a/spaces/Alexpro1213/WizardLM-WizardCoder-Python-34B-V1.0/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/WizardLM/WizardCoder-Python-34B-V1.0").launch()
\ No newline at end of file
diff --git a/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py b/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py
deleted file mode 100644
index d4b5ffeef64dcceea8edfefbd065dd0884db363e..0000000000000000000000000000000000000000
--- a/spaces/AliHaider0343/Restaurant-Domain-Sentence-Categories-Classification/app.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import torch
-import streamlit as st
-from transformers import RobertaTokenizer, RobertaForSequenceClassification
-import re
-import string
-
-def tokenize_sentences(sentence):
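-    # Encode one sentence into fixed-length (128-token) input_ids and attention_mask tensors for the RoBERTa classifier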
- encoded_dict = tokenizer.encode_plus(
- sentence,
- add_special_tokens=True,
- max_length=128,
- padding='max_length',
- truncation=True,
- return_attention_mask=True,
- return_tensors='pt'
- )
- return torch.cat([encoded_dict['input_ids']], dim=0), torch.cat([encoded_dict['attention_mask']], dim=0)
-
-
-
-def preprocess_query(query):
- query = str(query).lower()
- query = query.strip()
- query=query.translate(str.maketrans("", "", string.punctuation))
- return query
-
-def predict_category(sentence, threshold):
- input_ids, attention_mask = tokenize_sentences(sentence)
- with torch.no_grad():
- outputs = categories_model(input_ids, attention_mask=attention_mask)
- logits = outputs.logits
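-        # Multi-label head: each category gets an independent sigmoid probability instead of a softmax over all labels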
- predicted_categories = torch.sigmoid(logits).squeeze().tolist()
- results = dict()
- for label, prediction in zip(LABEL_COLUMNS_CATEGORIES, predicted_categories):
- if prediction < threshold:
- continue
- precentage = round(float(prediction) * 100, 2)
- results[label] = precentage
- return results
-
-# Load tokenizer and model
-BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION = 'roberta-large'
-tokenizer = RobertaTokenizer.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, do_lower_case=True)
-
-LABEL_COLUMNS_CATEGORIES = ['AMBIENCE', 'DRINK', 'FOOD', 'GENERAL', 'RESTAURANT', 'SERVICE', 'STAFF']
-
-categories_model = RobertaForSequenceClassification.from_pretrained(BERT_MODEL_NAME_FOR_CATEGORIES_CLASSIFICATION, num_labels=len(LABEL_COLUMNS_CATEGORIES))
-categories_model.load_state_dict(torch.load('./Categories_Classification_Model_updated.pth',map_location=torch.device('cpu') ))
-categories_model.eval()
-
-# Streamlit App
-st.title("Review/Sentence Classification")
-st.write("Multilable/Multiclass Sentence classification under 7 Defined Categories. ")
-
-sentence = st.text_input("Enter a sentence:")
-threshold = st.slider("Threshold", min_value=0.0, max_value=1.0, step=0.01, value=0.5)
-
-if sentence:
- processed_sentence = preprocess_query(sentence)
- results = predict_category(processed_sentence, threshold)
- if len(results) > 0:
- st.write("Predicted Aspects:")
- table_data = [["Category", "Probability"]]
- for category, percentage in results.items():
- table_data.append([category, f"{percentage}%"])
- st.table(table_data)
- else:
- st.write("No Categories above the threshold.")
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py
deleted file mode 100644
index 618ac25bdc957b5110de05cd0f5e8104f9e6f50f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/unclip_image_interpolation.py
+++ /dev/null
@@ -1,495 +0,0 @@
-import inspect
-from typing import List, Optional, Union
-
-import PIL
-import torch
-from torch.nn import functional as F
-from transformers import (
- CLIPImageProcessor,
- CLIPTextModelWithProjection,
- CLIPTokenizer,
- CLIPVisionModelWithProjection,
-)
-
-from diffusers import (
- DiffusionPipeline,
- ImagePipelineOutput,
- UnCLIPScheduler,
- UNet2DConditionModel,
- UNet2DModel,
-)
-from diffusers.pipelines.unclip import UnCLIPTextProjModel
-from diffusers.utils import is_accelerate_available, logging, randn_tensor
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-def slerp(val, low, high):
- """
- Find the interpolation point between the 'low' and 'high' values for the given 'val'. See https://en.wikipedia.org/wiki/Slerp for more details on the topic.
- """
- low_norm = low / torch.norm(low)
- high_norm = high / torch.norm(high)
- omega = torch.acos((low_norm * high_norm))
- so = torch.sin(omega)
- res = (torch.sin((1.0 - val) * omega) / so) * low + (torch.sin(val * omega) / so) * high
- return res
-
-
-class UnCLIPImageInterpolationPipeline(DiffusionPipeline):
- """
- Pipeline to generate variations from an input image using unCLIP
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- text_encoder ([`CLIPTextModelWithProjection`]):
- Frozen text-encoder.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- feature_extractor ([`CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `image_encoder`.
- image_encoder ([`CLIPVisionModelWithProjection`]):
- Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection),
- specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- text_proj ([`UnCLIPTextProjModel`]):
- Utility class to prepare and combine the embeddings before they are passed to the decoder.
- decoder ([`UNet2DConditionModel`]):
- The decoder to invert the image embedding into an image.
- super_res_first ([`UNet2DModel`]):
- Super resolution unet. Used in all but the last step of the super resolution diffusion process.
- super_res_last ([`UNet2DModel`]):
- Super resolution unet. Used in the last step of the super resolution diffusion process.
- decoder_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the decoder denoising process. Just a modified DDPMScheduler.
- super_res_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler.
-
- """
-
- decoder: UNet2DConditionModel
- text_proj: UnCLIPTextProjModel
- text_encoder: CLIPTextModelWithProjection
- tokenizer: CLIPTokenizer
- feature_extractor: CLIPImageProcessor
- image_encoder: CLIPVisionModelWithProjection
- super_res_first: UNet2DModel
- super_res_last: UNet2DModel
-
- decoder_scheduler: UnCLIPScheduler
- super_res_scheduler: UnCLIPScheduler
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__
- def __init__(
- self,
- decoder: UNet2DConditionModel,
- text_encoder: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- text_proj: UnCLIPTextProjModel,
- feature_extractor: CLIPImageProcessor,
- image_encoder: CLIPVisionModelWithProjection,
- super_res_first: UNet2DModel,
- super_res_last: UNet2DModel,
- decoder_scheduler: UnCLIPScheduler,
- super_res_scheduler: UnCLIPScheduler,
- ):
- super().__init__()
-
- self.register_modules(
- decoder=decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- text_proj=text_proj,
- feature_extractor=feature_extractor,
- image_encoder=image_encoder,
- super_res_first=super_res_first,
- super_res_last=super_res_last,
- decoder_scheduler=decoder_scheduler,
- super_res_scheduler=super_res_scheduler,
- )
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- latents = latents * scheduler.init_noise_sigma
- return latents
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt
- def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- text_mask = text_inputs.attention_mask.bool().to(device)
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
-
- prompt_embeds = text_encoder_output.text_embeds
- text_encoder_hidden_states = text_encoder_output.last_hidden_state
-
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- if do_classifier_free_guidance:
- uncond_tokens = [""] * batch_size
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
- negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
-
- negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
- uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
-
- seq_len = uncond_text_encoder_hidden_states.shape[1]
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
- batch_size * num_images_per_prompt, seq_len, -1
- )
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- # done duplicates
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
- text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
-
- text_mask = torch.cat([uncond_text_mask, text_mask])
-
- return prompt_embeds, text_encoder_hidden_states, text_mask
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image
- def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None):
- dtype = next(self.image_encoder.parameters()).dtype
-
- if image_embeddings is None:
- if not isinstance(image, torch.Tensor):
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
-
- image = image.to(device=device, dtype=dtype)
- image_embeddings = self.image_encoder(image).image_embeds
-
- image_embeddings = image_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
-
- return image_embeddings
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.enable_sequential_cpu_offload
- def enable_sequential_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline's
-        models have their state dicts saved to CPU and then are moved to a `torch.device('meta')` and loaded to GPU only
- when their specific submodule has its `forward` method called.
- """
- if is_accelerate_available():
- from accelerate import cpu_offload
- else:
- raise ImportError("Please install accelerate via `pip install accelerate`")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- models = [
- self.decoder,
- self.text_proj,
- self.text_encoder,
- self.super_res_first,
- self.super_res_last,
- ]
- for cpu_offloaded_model in models:
- if cpu_offloaded_model is not None:
- cpu_offload(cpu_offloaded_model, device)
-
- @property
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._execution_device
- def _execution_device(self):
- r"""
- Returns the device on which the pipeline's models will be executed. After calling
- `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
- hooks.
- """
- if self.device != torch.device("meta") or not hasattr(self.decoder, "_hf_hook"):
- return self.device
- for module in self.decoder.modules():
- if (
- hasattr(module, "_hf_hook")
- and hasattr(module._hf_hook, "execution_device")
- and module._hf_hook.execution_device is not None
- ):
- return torch.device(module._hf_hook.execution_device)
- return self.device
-
- @torch.no_grad()
- def __call__(
- self,
- image: Optional[Union[List[PIL.Image.Image], torch.FloatTensor]] = None,
- steps: int = 5,
- decoder_num_inference_steps: int = 25,
- super_res_num_inference_steps: int = 7,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- image_embeddings: Optional[torch.Tensor] = None,
- decoder_latents: Optional[torch.FloatTensor] = None,
- super_res_latents: Optional[torch.FloatTensor] = None,
- decoder_guidance_scale: float = 8.0,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ):
- """
- Function invoked when calling the pipeline for generation.
-
- Args:
- image (`List[PIL.Image.Image]` or `torch.FloatTensor`):
-                The images to use for the image interpolation. Accepts either a list of exactly two PIL Images or a tensor. If a tensor is provided, it needs to comply with the
-                configuration of
-                [this](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json)
-                `CLIPImageProcessor` and have a size of two in the 0th dimension. Can be left as `None` only when `image_embeddings` are passed.
- steps (`int`, *optional*, defaults to 5):
- The number of interpolation images to generate.
- decoder_num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
- image at the expense of slower inference.
- super_res_num_inference_steps (`int`, *optional*, defaults to 7):
- The number of denoising steps for super resolution. More denoising steps usually lead to a higher
- quality image at the expense of slower inference.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- image_embeddings (`torch.Tensor`, *optional*):
- Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings
-                can be passed for tasks like image interpolations. `image` can then be left as `None`.
- decoder_latents (`torch.FloatTensor` of shape (batch size, channels, height, width), *optional*):
- Pre-generated noisy latents to be used as inputs for the decoder.
- super_res_latents (`torch.FloatTensor` of shape (batch size, channels, super res height, super res width), *optional*):
-                Pre-generated noisy latents to be used as inputs for the super resolution step.
-            decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
- """
-
- batch_size = steps
-
- device = self._execution_device
-
- if isinstance(image, List):
- if len(image) != 2:
- raise AssertionError(
- f"Expected 'image' List to be of size 2, but passed 'image' length is {len(image)}"
- )
-        elif not (isinstance(image[0], PIL.Image.Image) and isinstance(image[1], PIL.Image.Image)):
- raise AssertionError(
- f"Expected 'image' List to contain PIL.Image.Image, but passed 'image' contents are {type(image[0])} and {type(image[1])}"
- )
- elif isinstance(image, torch.FloatTensor):
- if image.shape[0] != 2:
- raise AssertionError(
- f"Expected 'image' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image' size is {image.shape[0]}"
- )
- elif isinstance(image_embeddings, torch.Tensor):
- if image_embeddings.shape[0] != 2:
- raise AssertionError(
- f"Expected 'image_embeddings' to be torch.FloatTensor of shape 2 in 0th dimension, but passed 'image_embeddings' shape is {image_embeddings.shape[0]}"
- )
- else:
- raise AssertionError(
- f"Expected 'image' or 'image_embeddings' to be not None with types List[PIL.Image] or Torch.FloatTensor respectively. Received {type(image)} and {type(image_embeddings)} repsectively"
- )
-
- original_image_embeddings = self._encode_image(
- image=image, device=device, num_images_per_prompt=1, image_embeddings=image_embeddings
- )
-
- image_embeddings = []
-
- for interp_step in torch.linspace(0, 1, steps):
- temp_image_embeddings = slerp(
- interp_step, original_image_embeddings[0], original_image_embeddings[1]
- ).unsqueeze(0)
- image_embeddings.append(temp_image_embeddings)
-
- image_embeddings = torch.cat(image_embeddings).to(device)
-
- do_classifier_free_guidance = decoder_guidance_scale > 1.0
-
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
- prompt=["" for i in range(steps)],
- device=device,
- num_images_per_prompt=1,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
-
- text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
- image_embeddings=image_embeddings,
- prompt_embeds=prompt_embeds,
- text_encoder_hidden_states=text_encoder_hidden_states,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
-
- if device.type == "mps":
- # HACK: MPS: There is a panic when padding bool tensors,
- # so cast to int tensor for the pad and back to bool afterwards
- text_mask = text_mask.type(torch.int)
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
- decoder_text_mask = decoder_text_mask.type(torch.bool)
- else:
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
-
- self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
- decoder_timesteps_tensor = self.decoder_scheduler.timesteps
-
- num_channels_latents = self.decoder.config.in_channels
- height = self.decoder.config.sample_size
- width = self.decoder.config.sample_size
-
- # Get the decoder latents for 1 step and then repeat the same tensor for the entire batch to keep same noise across all interpolation steps.
- decoder_latents = self.prepare_latents(
- (1, num_channels_latents, height, width),
- text_encoder_hidden_states.dtype,
- device,
- generator,
- decoder_latents,
- self.decoder_scheduler,
- )
- decoder_latents = decoder_latents.repeat((batch_size, 1, 1, 1))
-
- for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
-
- noise_pred = self.decoder(
- sample=latent_model_input,
- timestep=t,
- encoder_hidden_states=text_encoder_hidden_states,
- class_labels=additive_clip_time_embeddings,
- attention_mask=decoder_text_mask,
- ).sample
-
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
- noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
- noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
- noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
-
- if i + 1 == decoder_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = decoder_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- decoder_latents = self.decoder_scheduler.step(
- noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- decoder_latents = decoder_latents.clamp(-1, 1)
-
- image_small = decoder_latents
-
- # done decoder
-
- # super res
-
- self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
- super_res_timesteps_tensor = self.super_res_scheduler.timesteps
-
- channels = self.super_res_first.config.in_channels // 2
- height = self.super_res_first.config.sample_size
- width = self.super_res_first.config.sample_size
-
- super_res_latents = self.prepare_latents(
- (batch_size, channels, height, width),
- image_small.dtype,
- device,
- generator,
- super_res_latents,
- self.super_res_scheduler,
- )
-
- if device.type == "mps":
- # MPS does not support many interpolations
- image_upscaled = F.interpolate(image_small, size=[height, width])
- else:
- interpolate_antialias = {}
- if "antialias" in inspect.signature(F.interpolate).parameters:
- interpolate_antialias["antialias"] = True
-
- image_upscaled = F.interpolate(
- image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
- )
-
- for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
- # no classifier free guidance
-
- if i == super_res_timesteps_tensor.shape[0] - 1:
- unet = self.super_res_last
- else:
- unet = self.super_res_first
-
- latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
-
- noise_pred = unet(
- sample=latent_model_input,
- timestep=t,
- ).sample
-
- if i + 1 == super_res_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = super_res_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- super_res_latents = self.super_res_scheduler.step(
- noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- image = super_res_latents
- # done super res
-
- # post processing
-
- image = image * 0.5 + 0.5
- image = image.clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
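
A hedged usage sketch for the interpolation pipeline above, loaded via diffusers' `custom_pipeline` mechanism. The checkpoint id `kakaobrain/karlo-v1-alpha-image-variations` and the image file names are illustrative assumptions, not values taken from this file.

```python
# Hedged usage sketch: load the community pipeline above and interpolate between two images.
# Checkpoint id and file names are assumptions; substitute your own unCLIP image-variation weights.
import torch
from PIL import Image
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations",   # assumed unCLIP image-variation checkpoint
    torch_dtype=dtype,
    custom_pipeline="unclip_image_interpolation",   # the community pipeline shown above
).to(device)

images = [Image.open("start.png"), Image.open("end.png")]   # exactly two input images
generator = torch.Generator(device=device).manual_seed(42)

output = pipe(image=images, steps=6, generator=generator)
for i, frame in enumerate(output.images):
    frame.save(f"interpolation_{i:02d}.png")
```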
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md
deleted file mode 100644
index dec919587935ec6e08a08e9299d62b0edc17449c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/dreambooth_inpaint/README.md
+++ /dev/null
@@ -1,118 +0,0 @@
-# Dreambooth for the inpainting model
-
-This script was added by @thedarkzeno.
-
-Please note that this script is not actively maintained; you can, however, open an issue and tag @thedarkzeno or @patil-suraj.
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-inpainting"
-export INSTANCE_DIR="path-to-instance-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth_inpaint.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=400
-```
-
-### Training with prior-preservation loss
-
-Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data.
-According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases.
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-inpainting"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth_inpaint.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-
-### Training with gradient checkpointing and 8-bit optimizer:
-
-With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it's possible to train DreamBooth on a 16GB GPU.
-
-To install `bitsandbytes`, please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation).
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-inpainting"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth_inpaint.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=2 --gradient_checkpointing \
- --use_8bit_adam \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
-
-### Fine-tune text encoder with the UNet.
-
-The script also allows you to fine-tune the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces.
-Pass the `--train_text_encoder` argument to the script to enable training `text_encoder`.
-
-___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.___
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-inpainting"
-export INSTANCE_DIR="path-to-instance-images"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth_inpaint.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --use_8bit_adam \
- --gradient_checkpointing \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800
-```
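
After training, the directory passed as `--output_dir` should load like any other Stable Diffusion inpainting checkpoint. A minimal inference sketch, assuming placeholder image/mask files and the instance prompt used above:

```python
# Minimal inference sketch for a model trained with the script above.
# "path-to-save-model", init.png and mask.png are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "path-to-save-model",            # the --output_dir used during training
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("init.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a photo of sks dog",     # the instance prompt used for training
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```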
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md b/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md
deleted file mode 100644
index 05ac996a40cfa2f600f239f21adb0878a284292b..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# NAS-FCOS: Fast Neural Architecture Search for Object Detection
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@article{wang2019fcos,
- title={Nas-fcos: Fast neural architecture search for object detection},
- author={Wang, Ning and Gao, Yang and Chen, Hao and Wang, Peng and Tian, Zhi and Shen, Chunhua},
- journal={arXiv preprint arXiv:1906.04423},
- year={2019}
-}
-```
-
-## Results and Models
-
-| Head | Backbone | Style | GN-head | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:---------:|:---------:|:-------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| NAS-FCOSHead | R-50 | caffe | Y | 1x | | | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520.log.json) |
-| FCOSHead | R-50 | caffe | Y | 1x | | | 38.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521-7fdcbce0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200521.log.json) |
-
-**Notes:**
-
-- To be consistent with the author's implementation, we use 4 GPUs with 4 images/GPU.
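
A hedged sketch of running one of the released models with MMDetection's high-level API (mmdet 2.x); the config path and checkpoint URL come from the table above, and `demo.jpg` is a placeholder input.

```python
# Hedged inference sketch using MMDetection's high-level API (mmdet 2.x).
# Config/checkpoint pair taken from the table above; demo.jpg is a placeholder image.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py'
checkpoint_url = ('http://download.openmmlab.com/mmdetection/v2.0/nas_fcos/'
                  'nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/'
                  'nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth')

model = init_detector(config_file, checkpoint_url, device='cuda:0')
result = inference_detector(model, 'demo.jpg')
model.show_result('demo.jpg', result, score_thr=0.3, out_file='demo_out.jpg')
```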
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py
deleted file mode 100644
index ae30c019796b3e20d96dc4486ad1eae8e8981b98..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/test_robustness.py
+++ /dev/null
@@ -1,390 +0,0 @@
-import argparse
-import copy
-import os
-import os.path as osp
-
-import mmcv
-import torch
-from mmcv import DictAction
-from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
-from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
- wrap_fp16_model)
-from pycocotools.coco import COCO
-from pycocotools.cocoeval import COCOeval
-from tools.analysis_tools.robustness_eval import get_results
-
-from mmdet import datasets
-from mmdet.apis import multi_gpu_test, set_random_seed, single_gpu_test
-from mmdet.core import eval_map
-from mmdet.datasets import build_dataloader, build_dataset
-from mmdet.models import build_detector
-
-
-def coco_eval_with_return(result_files,
- result_types,
- coco,
- max_dets=(100, 300, 1000)):
- for res_type in result_types:
- assert res_type in ['proposal', 'bbox', 'segm', 'keypoints']
-
- if mmcv.is_str(coco):
- coco = COCO(coco)
- assert isinstance(coco, COCO)
-
- eval_results = {}
- for res_type in result_types:
- result_file = result_files[res_type]
- assert result_file.endswith('.json')
-
- coco_dets = coco.loadRes(result_file)
- img_ids = coco.getImgIds()
- iou_type = 'bbox' if res_type == 'proposal' else res_type
- cocoEval = COCOeval(coco, coco_dets, iou_type)
- cocoEval.params.imgIds = img_ids
- if res_type == 'proposal':
- cocoEval.params.useCats = 0
- cocoEval.params.maxDets = list(max_dets)
- cocoEval.evaluate()
- cocoEval.accumulate()
- cocoEval.summarize()
- if res_type == 'segm' or res_type == 'bbox':
- metric_names = [
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10',
- 'AR100', 'ARs', 'ARm', 'ARl'
- ]
- eval_results[res_type] = {
- metric_names[i]: cocoEval.stats[i]
- for i in range(len(metric_names))
- }
- else:
- eval_results[res_type] = cocoEval.stats
-
- return eval_results
-
-
-def voc_eval_with_return(result_file,
- dataset,
- iou_thr=0.5,
- logger='print',
- only_ap=True):
- det_results = mmcv.load(result_file)
- annotations = [dataset.get_ann_info(i) for i in range(len(dataset))]
- if hasattr(dataset, 'year') and dataset.year == 2007:
- dataset_name = 'voc07'
- else:
- dataset_name = dataset.CLASSES
- mean_ap, eval_results = eval_map(
- det_results,
- annotations,
- scale_ranges=None,
- iou_thr=iou_thr,
- dataset=dataset_name,
- logger=logger)
-
- if only_ap:
- eval_results = [{
- 'ap': eval_results[i]['ap']
- } for i in range(len(eval_results))]
-
- return mean_ap, eval_results
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description='MMDet test detector')
- parser.add_argument('config', help='test config file path')
- parser.add_argument('checkpoint', help='checkpoint file')
- parser.add_argument('--out', help='output result file')
- parser.add_argument(
- '--corruptions',
- type=str,
- nargs='+',
- default='benchmark',
- choices=[
- 'all', 'benchmark', 'noise', 'blur', 'weather', 'digital',
- 'holdout', 'None', 'gaussian_noise', 'shot_noise', 'impulse_noise',
- 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur', 'snow',
- 'frost', 'fog', 'brightness', 'contrast', 'elastic_transform',
- 'pixelate', 'jpeg_compression', 'speckle_noise', 'gaussian_blur',
- 'spatter', 'saturate'
- ],
- help='corruptions')
- parser.add_argument(
- '--severities',
- type=int,
- nargs='+',
- default=[0, 1, 2, 3, 4, 5],
- help='corruption severity levels')
- parser.add_argument(
- '--eval',
- type=str,
- nargs='+',
- choices=['proposal', 'proposal_fast', 'bbox', 'segm', 'keypoints'],
- help='eval types')
- parser.add_argument(
- '--iou-thr',
- type=float,
- default=0.5,
- help='IoU threshold for pascal voc evaluation')
- parser.add_argument(
- '--summaries',
- type=bool,
- default=False,
- help='Print summaries for every corruption and severity')
- parser.add_argument(
- '--workers', type=int, default=32, help='workers per gpu')
- parser.add_argument('--show', action='store_true', help='show results')
- parser.add_argument(
- '--show-dir', help='directory where painted images will be saved')
- parser.add_argument(
- '--show-score-thr',
- type=float,
- default=0.3,
- help='score threshold (default: 0.3)')
- parser.add_argument('--tmpdir', help='tmp dir for writing some results')
- parser.add_argument('--seed', type=int, default=None, help='random seed')
- parser.add_argument(
- '--launcher',
- choices=['none', 'pytorch', 'slurm', 'mpi'],
- default='none',
- help='job launcher')
- parser.add_argument('--local_rank', type=int, default=0)
- parser.add_argument(
- '--final-prints',
- type=str,
- nargs='+',
- choices=['P', 'mPC', 'rPC'],
- default='mPC',
- help='corruption benchmark metric to print at the end')
- parser.add_argument(
- '--final-prints-aggregate',
- type=str,
- choices=['all', 'benchmark'],
- default='benchmark',
- help='aggregate all results or only those for benchmark corruptions')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- help='override some settings in the used config, the key-value pair '
- 'in xxx=yyy format will be merged into config file. If the value to '
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
- 'Note that the quotation marks are necessary and that no white space '
- 'is allowed.')
- args = parser.parse_args()
- if 'LOCAL_RANK' not in os.environ:
- os.environ['LOCAL_RANK'] = str(args.local_rank)
- return args
-
-
-def main():
- args = parse_args()
-
- assert args.out or args.show or args.show_dir, \
- ('Please specify at least one operation (save or show the results) '
- 'with the argument "--out", "--show" or "show-dir"')
-
- if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
- raise ValueError('The output file must be a pkl file.')
-
- cfg = mmcv.Config.fromfile(args.config)
- if args.cfg_options is not None:
- cfg.merge_from_dict(args.cfg_options)
- # import modules from string list.
- if cfg.get('custom_imports', None):
- from mmcv.utils import import_modules_from_strings
- import_modules_from_strings(**cfg['custom_imports'])
- # set cudnn_benchmark
- if cfg.get('cudnn_benchmark', False):
- torch.backends.cudnn.benchmark = True
- cfg.model.pretrained = None
- cfg.data.test.test_mode = True
- if args.workers == 0:
- args.workers = cfg.data.workers_per_gpu
-
- # init distributed env first, since logger depends on the dist info.
- if args.launcher == 'none':
- distributed = False
- else:
- distributed = True
- init_dist(args.launcher, **cfg.dist_params)
-
- # set random seeds
- if args.seed is not None:
- set_random_seed(args.seed)
-
- if 'all' in args.corruptions:
- corruptions = [
- 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',
- 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',
- 'brightness', 'contrast', 'elastic_transform', 'pixelate',
- 'jpeg_compression', 'speckle_noise', 'gaussian_blur', 'spatter',
- 'saturate'
- ]
- elif 'benchmark' in args.corruptions:
- corruptions = [
- 'gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur',
- 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog',
- 'brightness', 'contrast', 'elastic_transform', 'pixelate',
- 'jpeg_compression'
- ]
- elif 'noise' in args.corruptions:
- corruptions = ['gaussian_noise', 'shot_noise', 'impulse_noise']
- elif 'blur' in args.corruptions:
- corruptions = [
- 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur'
- ]
- elif 'weather' in args.corruptions:
- corruptions = ['snow', 'frost', 'fog', 'brightness']
- elif 'digital' in args.corruptions:
- corruptions = [
- 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression'
- ]
- elif 'holdout' in args.corruptions:
- corruptions = ['speckle_noise', 'gaussian_blur', 'spatter', 'saturate']
- elif 'None' in args.corruptions:
- corruptions = ['None']
- args.severities = [0]
- else:
- corruptions = args.corruptions
-
- rank, _ = get_dist_info()
- aggregated_results = {}
- for corr_i, corruption in enumerate(corruptions):
- aggregated_results[corruption] = {}
- for sev_i, corruption_severity in enumerate(args.severities):
- # evaluate severity 0 (= no corruption) only once
- if corr_i > 0 and corruption_severity == 0:
- aggregated_results[corruption][0] = \
- aggregated_results[corruptions[0]][0]
- continue
-
- test_data_cfg = copy.deepcopy(cfg.data.test)
- # assign corruption and severity
- if corruption_severity > 0:
- corruption_trans = dict(
- type='Corrupt',
- corruption=corruption,
- severity=corruption_severity)
- # TODO: hard coded "1", we assume that the first step is
- # loading images, which needs to be fixed in the future
- test_data_cfg['pipeline'].insert(1, corruption_trans)
-
- # print info
- print(f'\nTesting {corruption} at severity {corruption_severity}')
-
- # build the dataloader
- # TODO: support multiple images per gpu
- # (only minor changes are needed)
- dataset = build_dataset(test_data_cfg)
- data_loader = build_dataloader(
- dataset,
- samples_per_gpu=1,
- workers_per_gpu=args.workers,
- dist=distributed,
- shuffle=False)
-
- # build the model and load checkpoint
- cfg.model.train_cfg = None
- model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
- fp16_cfg = cfg.get('fp16', None)
- if fp16_cfg is not None:
- wrap_fp16_model(model)
- checkpoint = load_checkpoint(
- model, args.checkpoint, map_location='cpu')
- # old versions did not save class info in checkpoints,
-            # this workaround is for backward compatibility
- if 'CLASSES' in checkpoint.get('meta', {}):
- model.CLASSES = checkpoint['meta']['CLASSES']
- else:
- model.CLASSES = dataset.CLASSES
-
- if not distributed:
- model = MMDataParallel(model, device_ids=[0])
- show_dir = args.show_dir
- if show_dir is not None:
- show_dir = osp.join(show_dir, corruption)
- show_dir = osp.join(show_dir, str(corruption_severity))
- if not osp.exists(show_dir):
-                        os.makedirs(show_dir)
- outputs = single_gpu_test(model, data_loader, args.show,
- show_dir, args.show_score_thr)
- else:
- model = MMDistributedDataParallel(
- model.cuda(),
- device_ids=[torch.cuda.current_device()],
- broadcast_buffers=False)
- outputs = multi_gpu_test(model, data_loader, args.tmpdir)
-
- if args.out and rank == 0:
- eval_results_filename = (
- osp.splitext(args.out)[0] + '_results' +
- osp.splitext(args.out)[1])
- mmcv.dump(outputs, args.out)
- eval_types = args.eval
- if cfg.dataset_type == 'VOCDataset':
- if eval_types:
- for eval_type in eval_types:
- if eval_type == 'bbox':
- test_dataset = mmcv.runner.obj_from_dict(
- cfg.data.test, datasets)
- logger = 'print' if args.summaries else None
- mean_ap, eval_results = \
- voc_eval_with_return(
- args.out, test_dataset,
- args.iou_thr, logger)
- aggregated_results[corruption][
- corruption_severity] = eval_results
- else:
-                        print('\nOnly "bbox" evaluation '
-                              'is supported for pascal voc')
- else:
- if eval_types:
- print(f'Starting evaluate {" and ".join(eval_types)}')
- if eval_types == ['proposal_fast']:
- result_file = args.out
- else:
- if not isinstance(outputs[0], dict):
- result_files = dataset.results2json(
- outputs, args.out)
- else:
- for name in outputs[0]:
- print(f'\nEvaluating {name}')
- outputs_ = [out[name] for out in outputs]
-                            result_file = args.out + f'.{name}'
- result_files = dataset.results2json(
- outputs_, result_file)
- eval_results = coco_eval_with_return(
- result_files, eval_types, dataset.coco)
- aggregated_results[corruption][
- corruption_severity] = eval_results
- else:
- print('\nNo task was selected for evaluation;'
- '\nUse --eval to select a task')
-
- # save results after each evaluation
- mmcv.dump(aggregated_results, eval_results_filename)
-
- if rank == 0:
- # print final results
- print('\nAggregated results:')
- prints = args.final_prints
- aggregate = args.final_prints_aggregate
-
- if cfg.dataset_type == 'VOCDataset':
- get_results(
- eval_results_filename,
- dataset='voc',
- prints=prints,
- aggregate=aggregate)
- else:
- get_results(
- eval_results_filename,
- dataset='coco',
- prints=prints,
- aggregate=aggregate)
-
-
-if __name__ == '__main__':
- main()
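
When `--out` is given, the script dumps raw detections to `args.out` and the nested `aggregated_results[corruption][severity]` dictionary to a `*_results.pkl` companion file. A hedged sketch of inspecting that companion file offline, with a placeholder path:

```python
# Hedged post-hoc sketch for inspecting the per-corruption results dumped above.
# The path is a placeholder for the "<out-stem>_results.pkl" companion file.
import mmcv
from tools.analysis_tools.robustness_eval import get_results

results_file = 'work_dirs/retinanet_r50_results.pkl'  # placeholder path

aggregated = mmcv.load(results_file)
# Nested dict: aggregated[corruption][severity] -> eval metrics,
# e.g. aggregated['gaussian_noise'][3]['bbox']['AP'] for a COCO run.
print(sorted(aggregated.keys()))

# Reproduce the summary the script prints at the end (benchmark corruptions).
get_results(results_file, dataset='coco', prints=['P', 'mPC', 'rPC'], aggregate='benchmark')
```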
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh
deleted file mode 100644
index 1b052e5c34bd43b7e898858d7993dd5f6a7a6f08..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/cmd_macos.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-
-cd "$(dirname "${BASH_SOURCE[0]}")"
-
-if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi
-
-# deactivate existing conda envs as needed to avoid conflicts
-{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null
-
-# config
-CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda"
-INSTALL_ENV_DIR="$(pwd)/installer_files/env"
-
-# environment isolation
-export PYTHONNOUSERSITE=1
-unset PYTHONPATH
-unset PYTHONHOME
-export CUDA_PATH="$INSTALL_ENV_DIR"
-export CUDA_HOME="$CUDA_PATH"
-
-# activate env
-source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh"
-conda activate "$INSTALL_ENV_DIR"
-exec bash --norc
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py
deleted file mode 100644
index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/parrots_wrapper.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from functools import partial
-
-import torch
-
-TORCH_VERSION = torch.__version__
-
-
-def is_rocm_pytorch() -> bool:
- is_rocm = False
- if TORCH_VERSION != 'parrots':
- try:
- from torch.utils.cpp_extension import ROCM_HOME
- is_rocm = True if ((torch.version.hip is not None) and
- (ROCM_HOME is not None)) else False
- except ImportError:
- pass
- return is_rocm
-
-
-def _get_cuda_home():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import CUDA_HOME
- else:
- if is_rocm_pytorch():
- from torch.utils.cpp_extension import ROCM_HOME
- CUDA_HOME = ROCM_HOME
- else:
- from torch.utils.cpp_extension import CUDA_HOME
- return CUDA_HOME
-
-
-def get_build_config():
- if TORCH_VERSION == 'parrots':
- from parrots.config import get_build_info
- return get_build_info()
- else:
- return torch.__config__.show()
-
-
-def _get_conv():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- else:
- from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- return _ConvNd, _ConvTransposeMixin
-
-
-def _get_dataloader():
- if TORCH_VERSION == 'parrots':
- from torch.utils.data import DataLoader, PoolDataLoader
- else:
- from torch.utils.data import DataLoader
- PoolDataLoader = DataLoader
- return DataLoader, PoolDataLoader
-
-
-def _get_extension():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import BuildExtension, Extension
- CppExtension = partial(Extension, cuda=False)
- CUDAExtension = partial(Extension, cuda=True)
- else:
- from torch.utils.cpp_extension import (BuildExtension, CppExtension,
- CUDAExtension)
- return BuildExtension, CppExtension, CUDAExtension
-
-
-def _get_pool():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- else:
- from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd
-
-
-def _get_norm():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm2d
- else:
- from torch.nn.modules.instancenorm import _InstanceNorm
- from torch.nn.modules.batchnorm import _BatchNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm
- return _BatchNorm, _InstanceNorm, SyncBatchNorm_
-
-
-_ConvNd, _ConvTransposeMixin = _get_conv()
-DataLoader, PoolDataLoader = _get_dataloader()
-BuildExtension, CppExtension, CUDAExtension = _get_extension()
-_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm()
-_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool()
-
-
-class SyncBatchNorm(SyncBatchNorm_):
-
- def _check_input_dim(self, input):
- if TORCH_VERSION == 'parrots':
- if input.dim() < 2:
- raise ValueError(
- f'expected at least 2D input (got {input.dim()}D input)')
- else:
- super()._check_input_dim(input)
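
A short usage sketch for the wrappers exported above: consumers typically check layer types against `_BatchNorm`/`_InstanceNorm` so the same code runs under both stock PyTorch and parrots. The toy model and the import path are illustrative assumptions.

```python
# Illustrative consumer of the wrappers defined above: backend-agnostic type checks,
# because _BatchNorm/_InstanceNorm resolve to the right classes per framework.
import torch.nn as nn
from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm, _InstanceNorm  # assumed import path

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

norm_layers = [m for m in model.modules() if isinstance(m, (_BatchNorm, _InstanceNorm))]
print(f'found {len(norm_layers)} norm layer(s)')  # -> 1

# A common use: freeze norm statistics and parameters during fine-tuning.
for m in norm_layers:
    m.eval()
    for p in m.parameters():
        p.requires_grad = False
```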
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py
deleted file mode 100644
index 3cbfda8ae74bdf26c5aef197ff2866a7c7ad0cfd..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/drive.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os.path as osp
-
-from .builder import DATASETS
-from .custom import CustomDataset
-
-
-@DATASETS.register_module()
-class DRIVEDataset(CustomDataset):
- """DRIVE dataset.
-
-    In the segmentation map annotations for DRIVE, 0 stands for background, which is
-    included in the 2 categories. ``reduce_zero_label`` is fixed to False. The
- ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to
- '_manual1.png'.
- """
-
- CLASSES = ('background', 'vessel')
-
- PALETTE = [[120, 120, 120], [6, 230, 230]]
-
- def __init__(self, **kwargs):
- super(DRIVEDataset, self).__init__(
- img_suffix='.png',
- seg_map_suffix='_manual1.png',
- reduce_zero_label=False,
- **kwargs)
- assert osp.exists(self.img_dir)
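
The registration pattern above generalises to other binary-mask datasets. The class below is a hypothetical sibling, shown only to illustrate the pieces a `CustomDataset` subclass needs (suffixes, `CLASSES`, `PALETTE`); the `_mask.png` suffix is an assumption.

```python
# Hypothetical sibling of DRIVEDataset, illustrating the registration pattern only;
# it does not exist in the repository and would live alongside drive.py.
import os.path as osp

from .builder import DATASETS
from .custom import CustomDataset


@DATASETS.register_module()
class MyVesselDataset(CustomDataset):
    """Example binary-segmentation dataset following the DRIVE layout."""

    CLASSES = ('background', 'vessel')
    PALETTE = [[120, 120, 120], [6, 230, 230]]

    def __init__(self, **kwargs):
        super().__init__(
            img_suffix='.png',
            seg_map_suffix='_mask.png',   # assumed annotation suffix
            reduce_zero_label=False,
            **kwargs)
        assert osp.exists(self.img_dir)
```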
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
- plt.figure(figsize=figsize)
- ax3 = plt.axes(projection='3d')
-
- w, h = Z.shape[:2]
- xx = np.arange(0,w,1)
- yy = np.arange(0,h,1)
- X, Y = np.meshgrid(xx, yy)
- ax3.plot_surface(X,Y,Z,cmap=cmap)
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
- plt.show()
-
-
-'''
-# --------------------------------------------
-# get image paths
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
- w, h = img.shape[:2]
- patches = []
- if w > p_max and h > p_max:
-        w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=int))
-        h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=int))
- w1.append(w-p_size)
- h1.append(h-p_size)
-# print(w1)
-# print(h1)
- for i in w1:
- for j in h1:
- patches.append(img[i:i+p_size, j:j+p_size,:])
- else:
- patches.append(img)
-
- return patches
-
-
-def imssave(imgs, img_path):
- """
- imgs: list, N images of size WxHxC
- """
- img_name, ext = os.path.splitext(os.path.basename(img_path))
-
- for i, img in enumerate(imgs):
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
- cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
- """
-    Split the large images from original_dataroot into small overlapped images of size (p_size)x(p_size),
-    and save them into taget_dataroot; only images larger than (p_max)x(p_max)
-    will be split.
- Args:
- original_dataroot:
- taget_dataroot:
- p_size: size of small images
-        p_overlap: overlap between neighboring patches; the patch size used in training is a good choice
-        p_max: images smaller than (p_max)x(p_max) are kept unchanged.
- """
- paths = get_image_paths(original_dataroot)
- for img_path in paths:
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
- img = imread_uint(img_path, n_channels=n_channels)
- patches = patches_from_image(img, p_size, p_overlap, p_max)
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
- #if original_dataroot == taget_dataroot:
- #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but read BGR numpy image
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# --------------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-'''
-# --------------------------------------------
-# image format conversion
-# --------------------------------------------
-# numpy(single) <---> numpy(uint)
-# numpy(single) <---> tensor
-# numpy(uint) <---> tensor
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# numpy(single) [0, 1] <---> numpy(uint)
-# --------------------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
- return np.uint16((img.clip(0, 1)*65535.).round())
-
-
-# --------------------------------------------
-# numpy(uint) (HxWxC or HxW) <---> tensor
-# --------------------------------------------
-
-
-# convert uint to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert 2/3/4-dimensional torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------------------
-# numpy(single) (HxWxC) <---> tensor
-# --------------------------------------------
-
-
-# convert single (HxWxC) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert single (HxWxC) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-# convert torch tensor to single
-def tensor2single3(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-def single2tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
-
-
-def single32tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
-
-
-def single42tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
-        # Important. Unlike matlab, numpy.uint8() WILL NOT round by default.
- return img_np.astype(out_type)
-
-
-'''
-# --------------------------------------------
-# Augmentation, flip and/or rotate
-# --------------------------------------------
-# The following two are enough.
-# (1) augment_img: numpy image of WxHxC or WxH
-# (2) augment_img_tensor4: tensor image 1xCxWxH
-# --------------------------------------------
-'''
-
-
-def augment_img(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
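The eight modes above enumerate the dihedral symmetries of a square patch (identity, rotations, and flipped rotations), the usual geometric augmentation set for restoration training. A small usage sketch, assuming a square HxWxC patch:

import numpy as np

patch = np.random.rand(64, 64, 3).astype(np.float32)        # hypothetical training patch
variants = [augment_img(patch, mode=m) for m in range(8)]    # all eight flip/rotate combinations
assert variants[0] is patch                                  # mode 0 is the identity
assert all(v.shape == patch.shape for v in variants)         # shapes are preserved for square patches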
-
-def augment_img_tensor4(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return img.rot90(1, [2, 3]).flip([2])
- elif mode == 2:
- return img.flip([2])
- elif mode == 3:
- return img.rot90(3, [2, 3])
- elif mode == 4:
- return img.rot90(2, [2, 3]).flip([2])
- elif mode == 5:
- return img.rot90(1, [2, 3])
- elif mode == 6:
- return img.rot90(2, [2, 3])
- elif mode == 7:
- return img.rot90(3, [2, 3]).flip([2])
-
-
-def augment_img_tensor(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
-
-'''
-# --------------------------------------------
-# modcrop and shave
-# --------------------------------------------
-'''
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
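`modcrop` trims the bottom and right edges so both spatial dimensions become divisible by the super-resolution scale. A quick worked check, with hypothetical sizes:

import numpy as np

img = np.zeros((257, 511, 3), dtype=np.uint8)    # hypothetical HR image
cropped = modcrop(img, scale=4)
assert cropped.shape[:2] == (256, 508)           # 257 % 4 == 1 row and 511 % 4 == 3 columns are removed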
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
-
-'''
-# --------------------------------------------
-# image processing on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
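The coefficients above are the ITU-R BT.601 constants used by MATLAB's `rgb2ycbcr`, so pure white should land on the nominal luma ceiling of 235. A small sanity-check sketch with an assumed float input:

import numpy as np

white = np.ones((1, 1, 3), dtype=np.float32)       # float RGB input in [0, 1]
y = rgb2ycbcr(white, only_y=True)
# 65.481 + 128.553 + 24.966 = 219, plus the 16 offset gives 235 on the 8-bit scale,
# so white maps to about 235 / 255 ≈ 0.922 for float inputs.
assert abs(y.item() - 235.0 / 255.0) < 1e-4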
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
-    img = img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
-
-'''
-# --------------------------------------------
-# metric, PSNR and SSIM
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# PSNR
-# --------------------------------------------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
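PSNR here is the standard 20 * log10(255 / sqrt(MSE)) for 8-bit intensities. A short illustrative check with hypothetical arrays and a constant per-pixel error of 10:

import numpy as np

a = np.zeros((16, 16))
b = np.full((16, 16), 10.0)                      # every pixel off by 10, so MSE = 100
psnr = calculate_psnr(a, b)
assert abs(psnr - 20 * np.log10(25.5)) < 1e-9    # 20 * log10(255 / 10) ≈ 28.13 dB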
-
-# --------------------------------------------
-# SSIM
-# --------------------------------------------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-'''
-# --------------------------------------------
-# matlab's bicubic imresize (numpy and torch) [0, 1]
-# --------------------------------------------
-'''
-
-
-# matlab 'imresize' function, now only supports 'bicubic'
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
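`cubic` is the Keys bicubic convolution kernel with a = -0.5 (the kernel MATLAB's `imresize` uses): it equals 1 at the origin, 0 at the integer offsets 1 and 2, and 0.5625 halfway between samples. A short verification sketch:

import torch

w = cubic(torch.tensor([0.0, 0.5, 1.0, 2.0]))
assert torch.allclose(w, torch.tensor([1.0, 0.5625, 0.0, 0.0]))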
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
-        # Use a modified kernel to simultaneously interpolate and antialias (larger kernel width)
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------------------
-# imresize for tensor image [0, 1]
-# --------------------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------------------
-# imresize for numpy image [0, 1]
-# --------------------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- print('---')
-# img = imread_uint('test.bmp', 3)
-# img = uint2single(img)
-# img_bicubic = imresize_np(img, 1/4)
\ No newline at end of file
diff --git a/spaces/Anonymous-sub/Rerender/src/video_util.py b/spaces/Anonymous-sub/Rerender/src/video_util.py
deleted file mode 100644
index 437d5cf9d06b7ad1f8a3ef68528c6acf8dbb3986..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/src/video_util.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import os
-
-import cv2
-import imageio
-import numpy as np
-
-
-def video_to_frame(video_path: str,
- frame_dir: str,
- filename_pattern: str = 'frame%03d.jpg',
- log: bool = True,
- frame_edit_func=None):
- os.makedirs(frame_dir, exist_ok=True)
-
- vidcap = cv2.VideoCapture(video_path)
- success, image = vidcap.read()
-
- if log:
- print('img shape: ', image.shape[0:2])
-
- count = 0
- while success:
- if frame_edit_func is not None:
- image = frame_edit_func(image)
-
- cv2.imwrite(os.path.join(frame_dir, filename_pattern % count), image)
- success, image = vidcap.read()
- if log:
- print('Read a new frame: ', success, count)
- count += 1
-
- vidcap.release()
-
-
-def frame_to_video(video_path: str, frame_dir: str, fps=30, log=True):
-
- first_img = True
- writer = imageio.get_writer(video_path, fps=fps)
-
- file_list = sorted(os.listdir(frame_dir))
- for file_name in file_list:
- if not (file_name.endswith('jpg') or file_name.endswith('png')):
- continue
-
- fn = os.path.join(frame_dir, file_name)
- curImg = imageio.imread(fn)
-
- if first_img:
- H, W = curImg.shape[0:2]
- if log:
- print('img shape', (H, W))
- first_img = False
-
- writer.append_data(curImg)
-
- writer.close()
-
-
-def get_fps(video_path: str):
- video = cv2.VideoCapture(video_path)
- fps = video.get(cv2.CAP_PROP_FPS)
- video.release()
- return fps
-
-
-def get_frame_count(video_path: str):
- video = cv2.VideoCapture(video_path)
- frame_count = video.get(cv2.CAP_PROP_FRAME_COUNT)
- video.release()
- return frame_count
-
-
-def resize_image(input_image, resolution):
- H, W, C = input_image.shape
- H = float(H)
- W = float(W)
- k = min(float(resolution) / min(H, W), float(768) / max(H, W))
- H *= k
- W *= k
- H = int(np.round(H / 64.0)) * 64
- W = int(np.round(W / 64.0)) * 64
- img = cv2.resize(
- input_image, (W, H),
- interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
- return img
-
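`resize_image` scales the shorter side toward `resolution`, caps the longer side at 768, and snaps both sides to multiples of 64. A worked example, assuming a Full HD frame:

import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # hypothetical 1920x1080 frame
out = resize_image(frame, 512)
# k = min(512 / 1080, 768 / 1920) = 0.4, giving 432x768, rounded to multiples of 64 -> 448x768
assert out.shape[:2] == (448, 768)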
-
-def prepare_frames(input_path: str, output_dir: str, resolution: int, crop):
- l, r, t, b = crop
-
- def crop_func(frame):
- H, W, C = frame.shape
- left = np.clip(l, 0, W)
- right = np.clip(W - r, left, W)
- top = np.clip(t, 0, H)
- bottom = np.clip(H - b, top, H)
- frame = frame[top:bottom, left:right]
- return resize_image(frame, resolution)
-
- video_to_frame(input_path, output_dir, '%04d.png', False, crop_func)
diff --git a/spaces/Antonpy/stable-diffusion-license/app.py b/spaces/Antonpy/stable-diffusion-license/app.py
deleted file mode 100644
index f6f318530f0aeb268c9f9389e556065beef2ac9e..0000000000000000000000000000000000000000
--- a/spaces/Antonpy/stable-diffusion-license/app.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import streamlit as st
-
-txt_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.txt"
-html_link = "https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.html"
-
-st.sidebar.title("Stable Diffusion")
-st.sidebar.markdown("## Stable Diffusion RAIL License v1.0")
-st.sidebar.markdown(f"This is the home of the Stable Diffusion RAIL License v1.0. \
-If you would like to download the license you can get it as a [.txt]({txt_link}) or [.html]({html_link}) file.")
-
-with open("license.txt", "r") as f:
- license_html = f.read()
-
-st.markdown(license_html, unsafe_allow_html=True)
diff --git a/spaces/ArnePan/German-LLM-leaderboard/README.md b/spaces/ArnePan/German-LLM-leaderboard/README.md
deleted file mode 100644
index 4e8b340e235d036f00515c293258313530479b6b..0000000000000000000000000000000000000000
--- a/spaces/ArnePan/German-LLM-leaderboard/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: German-LLM-leaderboard
-emoji: 🇩🇪
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.46.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py
deleted file mode 100644
index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/__init__.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# module pyparsing.py
-#
-# Copyright (c) 2003-2022 Paul T. McGuire
-#
-# Permission is hereby granted, free of charge, to any person obtaining
-# a copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish,
-# distribute, sublicense, and/or sell copies of the Software, and to
-# permit persons to whom the Software is furnished to do so, subject to
-# the following conditions:
-#
-# The above copyright notice and this permission notice shall be
-# included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-#
-
-__doc__ = """
-pyparsing module - Classes and methods to define and execute parsing grammars
-=============================================================================
-
-The pyparsing module is an alternative approach to creating and
-executing simple grammars, vs. the traditional lex/yacc approach, or the
-use of regular expressions. With pyparsing, you don't need to learn
-a new syntax for defining grammars or matching expressions - the parsing
-module provides a library of classes that you use to construct the
-grammar directly in Python.
-
-Here is a program to parse "Hello, World!" (or any greeting of the form
-``"<salutation>, <addressee>!"``), built up using :class:`Word`,
-:class:`Literal`, and :class:`And` elements
-(the :meth:`'+'` operators create :class:`And` expressions,
-and the strings are auto-converted to :class:`Literal` expressions)::
-
- from pyparsing import Word, alphas
-
- # define grammar of a greeting
- greet = Word(alphas) + "," + Word(alphas) + "!"
-
- hello = "Hello, World!"
- print(hello, "->", greet.parse_string(hello))
-
-The program outputs the following::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
-The Python representation of the grammar is quite readable, owing to the
-self-explanatory class names, and the use of :class:`'+'`,
-:class:`'|'`, :class:`'^'` and :class:`'&'` operators.
-
-The :class:`ParseResults` object returned from
-:class:`ParserElement.parseString` can be
-accessed as a nested list, a dictionary, or an object with named
-attributes.
-
-The pyparsing module handles some of the problems that are typically
-vexing when writing text parsers:
-
- - extra or missing whitespace (the above program will also handle
- "Hello,World!", "Hello , World !", etc.)
- - quoted strings
- - embedded comments
-
-
-Getting Started -
------------------
-Visit the classes :class:`ParserElement` and :class:`ParseResults` to
-see the base classes that most other pyparsing
-classes inherit from. Use the docstrings for examples of how to:
-
- - construct literal match expressions from :class:`Literal` and
- :class:`CaselessLiteral` classes
- - construct character word-group expressions using the :class:`Word`
- class
- - see how to create repetitive expressions using :class:`ZeroOrMore`
- and :class:`OneOrMore` classes
- - use :class:`'+'`, :class:`'|'`, :class:`'^'`,
- and :class:`'&'` operators to combine simple expressions into
- more complex ones
- - associate names with your parsed results using
- :class:`ParserElement.setResultsName`
- - access the parsed data, which is returned as a :class:`ParseResults`
- object
- - find some helpful expression short-cuts like :class:`delimitedList`
- and :class:`oneOf`
- - find more useful common expressions in the :class:`pyparsing_common`
- namespace class
-"""
-from typing import NamedTuple
-
-
-class version_info(NamedTuple):
- major: int
- minor: int
- micro: int
- releaselevel: str
- serial: int
-
- @property
- def __version__(self):
- return (
- "{}.{}.{}".format(self.major, self.minor, self.micro)
- + (
- "{}{}{}".format(
- "r" if self.releaselevel[0] == "c" else "",
- self.releaselevel[0],
- self.serial,
- ),
- "",
- )[self.releaselevel == "final"]
- )
-
- def __str__(self):
- return "{} {} / {}".format(__name__, self.__version__, __version_time__)
-
- def __repr__(self):
- return "{}.{}({})".format(
- __name__,
- type(self).__name__,
- ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)),
- )
-
-
-__version_info__ = version_info(3, 0, 9, "final", 0)
-__version_time__ = "05 May 2022 07:02 UTC"
-__version__ = __version_info__.__version__
-__versionTime__ = __version_time__
-__author__ = "Paul McGuire <ptmcg.gm+pyparsing@gmail.com>"
-
-from .util import *
-from .exceptions import *
-from .actions import *
-from .core import __diag__, __compat__
-from .results import *
-from .core import *
-from .core import _builtin_exprs as core_builtin_exprs
-from .helpers import *
-from .helpers import _builtin_exprs as helper_builtin_exprs
-
-from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode
-from .testing import pyparsing_test as testing
-from .common import (
- pyparsing_common as common,
- _builtin_exprs as common_builtin_exprs,
-)
-
-# define backward compat synonyms
-if "pyparsing_unicode" not in globals():
- pyparsing_unicode = unicode
-if "pyparsing_common" not in globals():
- pyparsing_common = common
-if "pyparsing_test" not in globals():
- pyparsing_test = testing
-
-core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs
-
-
-__all__ = [
- "__version__",
- "__version_time__",
- "__author__",
- "__compat__",
- "__diag__",
- "And",
- "AtLineStart",
- "AtStringStart",
- "CaselessKeyword",
- "CaselessLiteral",
- "CharsNotIn",
- "Combine",
- "Dict",
- "Each",
- "Empty",
- "FollowedBy",
- "Forward",
- "GoToColumn",
- "Group",
- "IndentedBlock",
- "Keyword",
- "LineEnd",
- "LineStart",
- "Literal",
- "Located",
- "PrecededBy",
- "MatchFirst",
- "NoMatch",
- "NotAny",
- "OneOrMore",
- "OnlyOnce",
- "OpAssoc",
- "Opt",
- "Optional",
- "Or",
- "ParseBaseException",
- "ParseElementEnhance",
- "ParseException",
- "ParseExpression",
- "ParseFatalException",
- "ParseResults",
- "ParseSyntaxException",
- "ParserElement",
- "PositionToken",
- "QuotedString",
- "RecursiveGrammarException",
- "Regex",
- "SkipTo",
- "StringEnd",
- "StringStart",
- "Suppress",
- "Token",
- "TokenConverter",
- "White",
- "Word",
- "WordEnd",
- "WordStart",
- "ZeroOrMore",
- "Char",
- "alphanums",
- "alphas",
- "alphas8bit",
- "any_close_tag",
- "any_open_tag",
- "c_style_comment",
- "col",
- "common_html_entity",
- "counted_array",
- "cpp_style_comment",
- "dbl_quoted_string",
- "dbl_slash_comment",
- "delimited_list",
- "dict_of",
- "empty",
- "hexnums",
- "html_comment",
- "identchars",
- "identbodychars",
- "java_style_comment",
- "line",
- "line_end",
- "line_start",
- "lineno",
- "make_html_tags",
- "make_xml_tags",
- "match_only_at_col",
- "match_previous_expr",
- "match_previous_literal",
- "nested_expr",
- "null_debug_action",
- "nums",
- "one_of",
- "printables",
- "punc8bit",
- "python_style_comment",
- "quoted_string",
- "remove_quotes",
- "replace_with",
- "replace_html_entity",
- "rest_of_line",
- "sgl_quoted_string",
- "srange",
- "string_end",
- "string_start",
- "trace_parse_action",
- "unicode_string",
- "with_attribute",
- "indentedBlock",
- "original_text_for",
- "ungroup",
- "infix_notation",
- "locatedExpr",
- "with_class",
- "CloseMatch",
- "token_map",
- "pyparsing_common",
- "pyparsing_unicode",
- "unicode_set",
- "condition_as_parse_action",
- "pyparsing_test",
- # pre-PEP8 compatibility names
- "__versionTime__",
- "anyCloseTag",
- "anyOpenTag",
- "cStyleComment",
- "commonHTMLEntity",
- "countedArray",
- "cppStyleComment",
- "dblQuotedString",
- "dblSlashComment",
- "delimitedList",
- "dictOf",
- "htmlComment",
- "javaStyleComment",
- "lineEnd",
- "lineStart",
- "makeHTMLTags",
- "makeXMLTags",
- "matchOnlyAtCol",
- "matchPreviousExpr",
- "matchPreviousLiteral",
- "nestedExpr",
- "nullDebugAction",
- "oneOf",
- "opAssoc",
- "pythonStyleComment",
- "quotedString",
- "removeQuotes",
- "replaceHTMLEntity",
- "replaceWith",
- "restOfLine",
- "sglQuotedString",
- "stringEnd",
- "stringStart",
- "traceParseAction",
- "unicodeString",
- "withAttribute",
- "indentedBlock",
- "originalTextFor",
- "infixNotation",
- "locatedExpr",
- "withClass",
- "tokenMap",
- "conditionAsParseAction",
- "autoname_elements",
-]
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py
deleted file mode 100644
index f5ba4297567d650f147eebeed361e9d62fab899d..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/flatten.py
+++ /dev/null
@@ -1,330 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import collections
-from dataclasses import dataclass
-from typing import Callable, List, Optional, Tuple
-import torch
-from torch import nn
-
-from detectron2.structures import Boxes, Instances, ROIMasks
-from detectron2.utils.registry import _convert_target_to_string, locate
-
-from .torchscript_patch import patch_builtin_len
-
-
-@dataclass
-class Schema:
- """
- A Schema defines how to flatten a possibly hierarchical object into tuple of
- primitive objects, so it can be used as inputs/outputs of PyTorch's tracing.
-
- PyTorch does not support tracing a function that produces rich output
- structures (e.g. dict, Instances, Boxes). To trace such a function, we
- flatten the rich object into tuple of tensors, and return this tuple of tensors
- instead. Meanwhile, we also need to know how to "rebuild" the original object
- from the flattened results, so we can evaluate the flattened results.
- A Schema defines how to flatten an object, and while flattening it, it records
- necessary schemas so that the object can be rebuilt using the flattened outputs.
-
-    The flattened object and the schema object are returned by the ``.flatten`` classmethod.
- Then the original object can be rebuilt with the ``__call__`` method of schema.
-
- A Schema is a dataclass that can be serialized easily.
- """
-
- # inspired by FetchMapper in tensorflow/python/client/session.py
-
- @classmethod
- def flatten(cls, obj):
- raise NotImplementedError
-
- def __call__(self, values):
- raise NotImplementedError
-
- @staticmethod
- def _concat(values):
- ret = ()
- sizes = []
- for v in values:
- assert isinstance(v, tuple), "Flattened results must be a tuple"
- ret = ret + v
- sizes.append(len(v))
- return ret, sizes
-
- @staticmethod
- def _split(values, sizes):
- if len(sizes):
- expected_len = sum(sizes)
- assert (
- len(values) == expected_len
- ), f"Values has length {len(values)} but expect length {expected_len}."
- ret = []
- for k in range(len(sizes)):
- begin, end = sum(sizes[:k]), sum(sizes[: k + 1])
- ret.append(values[begin:end])
- return ret
-
-
-@dataclass
-class ListSchema(Schema):
- schemas: List[Schema] # the schemas that define how to flatten each element in the list
- sizes: List[int] # the flattened length of each element
-
- def __call__(self, values):
- values = self._split(values, self.sizes)
- if len(values) != len(self.schemas):
- raise ValueError(
- f"Values has length {len(values)} but schemas " f"has length {len(self.schemas)}!"
- )
- values = [m(v) for m, v in zip(self.schemas, values)]
- return list(values)
-
- @classmethod
- def flatten(cls, obj):
- res = [flatten_to_tuple(k) for k in obj]
- values, sizes = cls._concat([k[0] for k in res])
- return values, cls([k[1] for k in res], sizes)
-
-
-@dataclass
-class TupleSchema(ListSchema):
- def __call__(self, values):
- return tuple(super().__call__(values))
-
-
-@dataclass
-class IdentitySchema(Schema):
- def __call__(self, values):
- return values[0]
-
- @classmethod
- def flatten(cls, obj):
- return (obj,), cls()
-
-
-@dataclass
-class DictSchema(ListSchema):
- keys: List[str]
-
- def __call__(self, values):
- values = super().__call__(values)
- return dict(zip(self.keys, values))
-
- @classmethod
- def flatten(cls, obj):
- for k in obj.keys():
- if not isinstance(k, str):
- raise KeyError("Only support flattening dictionaries if keys are str.")
- keys = sorted(obj.keys())
- values = [obj[k] for k in keys]
- ret, schema = ListSchema.flatten(values)
- return ret, cls(schema.schemas, schema.sizes, keys)
-
-
-@dataclass
-class InstancesSchema(DictSchema):
- def __call__(self, values):
- image_size, fields = values[-1], values[:-1]
- fields = super().__call__(fields)
- return Instances(image_size, **fields)
-
- @classmethod
- def flatten(cls, obj):
- ret, schema = super().flatten(obj.get_fields())
- size = obj.image_size
- if not isinstance(size, torch.Tensor):
- size = torch.tensor(size)
- return ret + (size,), schema
-
-
-@dataclass
-class TensorWrapSchema(Schema):
- """
- For classes that are simple wrapper of tensors, e.g.
- Boxes, RotatedBoxes, BitMasks
- """
-
- class_name: str
-
- def __call__(self, values):
- return locate(self.class_name)(values[0])
-
- @classmethod
- def flatten(cls, obj):
- return (obj.tensor,), cls(_convert_target_to_string(type(obj)))
-
-
-# if more custom structures needed in the future, can allow
-# passing in extra schemas for custom types
-def flatten_to_tuple(obj):
- """
- Flatten an object so it can be used for PyTorch tracing.
- Also returns how to rebuild the original object from the flattened outputs.
-
- Returns:
- res (tuple): the flattened results that can be used as tracing outputs
- schema: an object with a ``__call__`` method such that ``schema(res) == obj``.
- It is a pure dataclass that can be serialized.
- """
- schemas = [
- ((str, bytes), IdentitySchema),
- (list, ListSchema),
- (tuple, TupleSchema),
- (collections.abc.Mapping, DictSchema),
- (Instances, InstancesSchema),
- ((Boxes, ROIMasks), TensorWrapSchema),
- ]
- for klass, schema in schemas:
- if isinstance(obj, klass):
- F = schema
- break
- else:
- F = IdentitySchema
-
- return F.flatten(obj)
-
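A minimal sketch of the flatten/rebuild contract described above, using only built-in containers and tensors (the dictionary contents are illustrative):

import torch

obj = {"boxes": torch.zeros(2, 4), "scores": (torch.ones(2), torch.zeros(2))}
flat, schema = flatten_to_tuple(obj)
assert all(isinstance(t, torch.Tensor) for t in flat)   # a flat tuple of tensors, usable as traced outputs
rebuilt = schema(flat)                                   # the schema rebuilds the original nested structure
assert torch.equal(rebuilt["boxes"], obj["boxes"]) and isinstance(rebuilt["scores"], tuple)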
-
-class TracingAdapter(nn.Module):
- """
- A model may take rich input/output format (e.g. dict or custom classes),
- but `torch.jit.trace` requires tuple of tensors as input/output.
- This adapter flattens input/output format of a model so it becomes traceable.
-
- It also records the necessary schema to rebuild model's inputs/outputs from flattened
- inputs/outputs.
-
- Example:
- ::
- outputs = model(inputs) # inputs/outputs may be rich structure
- adapter = TracingAdapter(model, inputs)
-
- # can now trace the model, with adapter.flattened_inputs, or another
- # tuple of tensors with the same length and meaning
- traced = torch.jit.trace(adapter, adapter.flattened_inputs)
-
- # traced model can only produce flattened outputs (tuple of tensors)
- flattened_outputs = traced(*adapter.flattened_inputs)
- # adapter knows the schema to convert it back (new_outputs == outputs)
- new_outputs = adapter.outputs_schema(flattened_outputs)
- """
-
- flattened_inputs: Tuple[torch.Tensor] = None
- """
- Flattened version of inputs given to this class's constructor.
- """
-
- inputs_schema: Schema = None
- """
- Schema of the inputs given to this class's constructor.
- """
-
- outputs_schema: Schema = None
- """
- Schema of the output produced by calling the given model with inputs.
- """
-
- def __init__(
- self,
- model: nn.Module,
- inputs,
- inference_func: Optional[Callable] = None,
- allow_non_tensor: bool = False,
- ):
- """
- Args:
- model: an nn.Module
- inputs: An input argument or a tuple of input arguments used to call model.
- After flattening, it has to only consist of tensors.
- inference_func: a callable that takes (model, *inputs), calls the
-                model with inputs, and returns outputs. By default it
-                is ``lambda model, *inputs: model(*inputs)``. Can be overridden
- if you need to call the model differently.
- allow_non_tensor: allow inputs/outputs to contain non-tensor objects.
- This option will filter out non-tensor objects to make the
- model traceable, but ``inputs_schema``/``outputs_schema`` cannot be
- used anymore because inputs/outputs cannot be rebuilt from pure tensors.
- This is useful when you're only interested in the single trace of
- execution (e.g. for flop count), but not interested in
- generalizing the traced graph to new inputs.
- """
- super().__init__()
- if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)):
- model = model.module
- self.model = model
- if not isinstance(inputs, tuple):
- inputs = (inputs,)
- self.inputs = inputs
- self.allow_non_tensor = allow_non_tensor
-
- if inference_func is None:
- inference_func = lambda model, *inputs: model(*inputs) # noqa
- self.inference_func = inference_func
-
- self.flattened_inputs, self.inputs_schema = flatten_to_tuple(inputs)
-
- if all(isinstance(x, torch.Tensor) for x in self.flattened_inputs):
- return
- if self.allow_non_tensor:
- self.flattened_inputs = tuple(
- [x for x in self.flattened_inputs if isinstance(x, torch.Tensor)]
- )
- self.inputs_schema = None
- else:
- for input in self.flattened_inputs:
- if not isinstance(input, torch.Tensor):
- raise ValueError(
- "Inputs for tracing must only contain tensors. "
- f"Got a {type(input)} instead."
- )
-
- def forward(self, *args: torch.Tensor):
- with torch.no_grad(), patch_builtin_len():
- if self.inputs_schema is not None:
- inputs_orig_format = self.inputs_schema(args)
- else:
- if len(args) != len(self.flattened_inputs) or any(
- x is not y for x, y in zip(args, self.flattened_inputs)
- ):
- raise ValueError(
- "TracingAdapter does not contain valid inputs_schema."
- " So it cannot generalize to other inputs and must be"
- " traced with `.flattened_inputs`."
- )
- inputs_orig_format = self.inputs
-
- outputs = self.inference_func(self.model, *inputs_orig_format)
- flattened_outputs, schema = flatten_to_tuple(outputs)
-
- flattened_output_tensors = tuple(
- [x for x in flattened_outputs if isinstance(x, torch.Tensor)]
- )
- if len(flattened_output_tensors) < len(flattened_outputs):
- if self.allow_non_tensor:
- flattened_outputs = flattened_output_tensors
- self.outputs_schema = None
- else:
- raise ValueError(
- "Model cannot be traced because some model outputs "
- "cannot flatten to tensors."
- )
- else: # schema is valid
- if self.outputs_schema is None:
- self.outputs_schema = schema
- else:
- assert self.outputs_schema == schema, (
- "Model should always return outputs with the same "
- "structure so it can be traced!"
- )
- return flattened_outputs
-
- def _create_wrapper(self, traced_model):
- """
- Return a function that has an input/output interface the same as the
- original model, but it calls the given traced model under the hood.
- """
-
- def forward(*args):
- flattened_inputs, _ = flatten_to_tuple(args)
- flattened_outputs = traced_model(*flattened_inputs)
- return self.outputs_schema(flattened_outputs)
-
- return forward
diff --git a/spaces/Ayaka2022/anime-aesthetic-predict/app.py b/spaces/Ayaka2022/anime-aesthetic-predict/app.py
deleted file mode 100644
index 6f0cd457993cc220641a974f27509b94fcace949..0000000000000000000000000000000000000000
--- a/spaces/Ayaka2022/anime-aesthetic-predict/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import cv2
-import numpy as np
-import gradio as gr
-import onnxruntime as rt
-from huggingface_hub import hf_hub_download
-
-
-def predict(img):
- img = img.astype(np.float32) / 255
- s = 768
- h, w = img.shape[:-1]
- h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
- ph, pw = s - h, s - w
- img_input = np.zeros([s, s, 3], dtype=np.float32)
- img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
- img_input = np.transpose(img_input, (2, 0, 1))
- img_input = img_input[np.newaxis, :]
- pred = model.run(None, {"img": img_input})[0].item()
- return pred
-
-
-if __name__ == "__main__":
- model_path = hf_hub_download(repo_id="skytnt/anime-aesthetic", filename="model.onnx")
- model = rt.InferenceSession(model_path, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
- examples = [[f"examples/{x:02d}.jpg"] for x in range(0, 2)]
- app = gr.Interface(predict, gr.Image(label="input image"), gr.Number(label="score"),title="Anime Aesthetic Predict",
- allow_flagging="never", examples=examples, cache_examples=False)
- app.launch()
diff --git a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py
deleted file mode 100644
index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv6 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.conv7 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- feat6 = self.conv6(x)
- feat7 = self.conv7(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md b/spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md
deleted file mode 100644
index 5d0259864eb8fddee9bf7d2bb3ea0a81f121f4d2..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cheto Hack 8bp Apk Descargar 5.4 5.md
+++ /dev/null
@@ -1,71 +0,0 @@
-
-Cheto Hack 8BP APK Download 5.4 5: Everything You Need to Know
-
-If you are a fan of 8 Ball Pool, you may have heard of Cheto Hack 8BP, a tool that claims to help you improve your game and win more matches. But what exactly is Cheto Hack 8BP, and how do you download and use it? In this article we answer these questions and more, so you can decide whether Cheto Hack 8BP is worth trying.
-
-Cheto Hack 8BP is a hack tool for 8 Ball Pool that uses AI image recognition to extend the aiming guideline, support cushion shots, and draw the ball trajectory and shot state. It can also predict the outcome of a game and play automatically for you. Unlike some other hack tools, Cheto Hack 8BP does not require root access or modifications to the game files. It runs on Gameloop PC, an emulator that lets you play Android games on your computer.
-
-Features of Cheto Hack 8BP
-
-Some of the features Cheto Hack 8BP offers are:
-
-Auto-extended guideline: you can see the full length of the guideline, even beyond the table, to help you aim better.
-Cushion shot support: you can see the guideline for cushion shots, which bounce off the rails before hitting the target ball.
-Ball trajectory drawing: you can see the path of the ball after you hit it, including any spin or curve.
-Shot state drawing: you can see the power, angle, and spin of your shot, as well as the position and direction of the cue ball.
-Prediction: you can see the probability of winning or losing the game based on the current situation.
-Auto-play: you can let the tool play for you automatically, using the best possible moves.
-
-How to download and install Cheto Hack 8BP APK
-
-To download and install Cheto Hack 8BP APK, follow these steps:
-
-Download Gameloop PC from its official website and install it on your computer.
-Download the Cheto Hack 8BP APK file.
-Open Gameloop PC and launch 8 Ball Pool from its game center.
-Open the Cheto Hack 8BP APK and enter the password (autoplay or cheto).
-Select the features you want to use and click Start.
-Enjoy playing 8 Ball Pool with Cheto Hack 8BP!
-
-Why use Cheto Hack 8BP?
-
-You may wonder why you should use Cheto Hack 8BP instead of playing normally. Here are some reasons you might want to try it:
-
-Benefits of Cheto Hack 8BP
-
-You can improve your skills and learn new tricks by watching how the hack tool plays.
-You can win more matches and earn more coins and rewards by using the tool's features.
-You can have more fun and challenge yourself by playing against stronger opponents or trying different modes.
-You can save time and effort by letting the tool play for you automatically.
-
-Risks of Cheto Hack 8BP
-
-You may get banned or reported by other players or the game's developers for using the hack tool.
-You may lose the fun and satisfaction of playing the game fairly and honestly.
-You may damage your device or compromise your security by downloading a fake or malicious APK file.
-
-Alternatives to Cheto Hack 8BP
-
-If Cheto Hack 8BP does not convince you, or if you want to try something different, there are some alternatives you can use to hack 8 Ball Pool. Here are two of them:
-
-Aim Pool - Guideline 8BP
-
-Aim Pool - Guideline 8BP is a hack tool that extends the guideline and shows the ball trajectory for 8 Ball Pool. It works on both Android and iOS devices and does not require root or jailbreak. It also has a simple, easy-to-use interface and supports several languages. You can download Aim Pool - Guideline 8BP from its official website or from the Google Play Store.
-
-Game Guardian
-
-Conclusion
-
-In this article we have covered everything you need to know about Cheto Hack 8BP APK Download 5.4 5, a hack tool for 8 Ball Pool that uses AI image recognition to improve your game. We explained what it is, how it works, how to download and install it, why you might use it, and what some alternatives are. We hope you found this article useful and informative.
-
-Article summary
-
-Cheto Hack 8BP is a hack tool for 8 Ball Pool that uses AI image recognition to extend the guideline, support cushion shots, draw the ball trajectory and shot state, predict the outcome, and play automatically.
-It runs on Gameloop PC, an emulator that lets you play Android games on your computer.
-It has many features and benefits, but also some risks and drawbacks.
-There are some alternatives to Cheto Hack 8BP, such as Aim Pool - Guideline 8BP and Game Guardian.
-
-Frequently asked questions
-
-Is Cheto Hack 8BP free?
-
-No, Cheto Hack 8BP is not free. You need to pay a subscription fee to use it. The fee varies depending on the duration and the features you choose.
-
-Is Cheto Hack 8BP safe?
-
-Cheto Hack 8BP is safe if you download it from its official website or a trusted source. However, there is always a risk of being banned or reported by other players or the game's developers for using a hack tool.
-
-Is Cheto Hack 8BP legal?
-
-Cheto Hack 8BP is not legal in some countries or regions where hacking is prohibited or regulated by law. You should check your local laws before using it.
-
-Does Cheto Hack 8BP work on mobile devices?
-
-No, Cheto Hack 8BP does not work on mobile devices. It only works on Gameloop PC, an emulator that lets you play Android games on your computer.
-
-Can I use Cheto Hack 8BP with other hack tools?
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py
deleted file mode 100644
index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/actions.py
+++ /dev/null
@@ -1,207 +0,0 @@
-# actions.py
-
-from .exceptions import ParseException
-from .util import col
-
-
-class OnlyOnce:
- """
- Wrapper for parse actions, to ensure they are only called once.
- """
-
- def __init__(self, method_call):
- from .core import _trim_arity
-
- self.callable = _trim_arity(method_call)
- self.called = False
-
- def __call__(self, s, l, t):
- if not self.called:
- results = self.callable(s, l, t)
- self.called = True
- return results
- raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset")
-
- def reset(self):
- """
- Allow the associated parse action to be called once more.
- """
-
- self.called = False
-
-
-def match_only_at_col(n):
- """
- Helper method for defining parse actions that require matching at
- a specific column in the input text.
- """
-
- def verify_col(strg, locn, toks):
- if col(locn, strg) != n:
- raise ParseException(strg, locn, "matched token not at column {}".format(n))
-
- return verify_col
-
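A hedged usage sketch for `match_only_at_col`, which has no example of its own; the grammar below is illustrative only:

from pyparsing import Word, nums

col1_int = Word(nums).set_parse_action(match_only_at_col(1))
print(col1_int.parse_string("42"))     # -> ['42'], the match starts in column 1
# col1_int.parse_string("   42")       # would fail: the integer starts in column 4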
-
-def replace_with(repl_str):
- """
- Helper method for common parse actions that simply return
- a literal value. Especially useful when used with
- :class:`transform_string` ().
-
- Example::
-
- num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
- na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
- term = na | num
-
- term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
- """
- return lambda s, l, t: [repl_str]
-
-
-def remove_quotes(s, l, t):
- """
- Helper parse action for removing quotation marks from parsed
- quoted strings.
-
- Example::
-
- # by default, quotation marks are included in parsed results
- quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"]
-
- # use remove_quotes to strip quotation marks from parsed results
- quoted_string.set_parse_action(remove_quotes)
- quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"]
- """
- return t[0][1:-1]
-
-
-def with_attribute(*args, **attr_dict):
- """
- Helper to create a validating parse action to be used with start
- tags created with :class:`make_xml_tags` or
- :class:`make_html_tags`. Use ``with_attribute`` to qualify
- a starting tag with a required attribute value, to avoid false
-    matches on common tags such as ``<TD>`` or ``<DIV>``.
-
- Call ``with_attribute`` with a series of attribute names and
- values. Specify the list of filter attributes names and values as:
-
- - keyword arguments, as in ``(align="right")``, or
- - as an explicit dict with ``**`` operator, when an attribute
- name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}``
- - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))``
-
- For attribute names with a namespace prefix, you must use the second
- form. Attribute names are matched insensitive to upper/lower case.
-
- If just testing for ``class`` (with or without a namespace), use
- :class:`with_class`.
-
- To verify that the attribute exists, but without specifying a value,
- pass ``with_attribute.ANY_VALUE`` as the value.
-
- Example::
-
-        html = '''
-            <div>
-            Some text
-            <div type="grid">1 4 0 1 0</div>
-            <div type="graph">1,3 2,3 1,1</div>
-            <div>this has no type</div>
-            </div>
-
-        '''
- div,div_end = make_html_tags("div")
-
- # only match div tag having a type attribute with value "grid"
- div_grid = div().set_parse_action(with_attribute(type="grid"))
- grid_expr = div_grid + SkipTo(div | div_end)("body")
- for grid_header in grid_expr.search_string(html):
- print(grid_header.body)
-
- # construct a match with any div tag having a type attribute, regardless of the value
- div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE))
- div_expr = div_any_type + SkipTo(div | div_end)("body")
- for div_header in div_expr.search_string(html):
- print(div_header.body)
-
- prints::
-
- 1 4 0 1 0
-
- 1 4 0 1 0
- 1,3 2,3 1,1
- """
- if args:
- attrs = args[:]
- else:
- attrs = attr_dict.items()
- attrs = [(k, v) for k, v in attrs]
-
- def pa(s, l, tokens):
- for attrName, attrValue in attrs:
- if attrName not in tokens:
- raise ParseException(s, l, "no matching attribute " + attrName)
- if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue:
- raise ParseException(
- s,
- l,
- "attribute {!r} has value {!r}, must be {!r}".format(
- attrName, tokens[attrName], attrValue
- ),
- )
-
- return pa
-
-
-with_attribute.ANY_VALUE = object()
-
-
-def with_class(classname, namespace=""):
- """
- Simplified version of :class:`with_attribute` when
- matching on a div class - made difficult because ``class`` is
- a reserved word in Python.
-
- Example::
-
-        html = '''
-        <div>
-        Some text
-        <div class="grid">1 4 0 1 0</div>
-        <div class="graph">1,3 2,3 1,1</div>
-        <div>this &lt;div&gt; has no class</div>
-        </div>
-
-        '''
- div,div_end = make_html_tags("div")
- div_grid = div().set_parse_action(with_class("grid"))
-
- grid_expr = div_grid + SkipTo(div | div_end)("body")
- for grid_header in grid_expr.search_string(html):
- print(grid_header.body)
-
-        div_any_type = div().set_parse_action(with_class(with_attribute.ANY_VALUE))
- div_expr = div_any_type + SkipTo(div | div_end)("body")
- for div_header in div_expr.search_string(html):
- print(div_header.body)
-
- prints::
-
- 1 4 0 1 0
-
- 1 4 0 1 0
- 1,3 2,3 1,1
- """
- classattr = "{}:class".format(namespace) if namespace else "class"
- return with_attribute(**{classattr: classname})
-
-
-# pre-PEP8 compatibility symbols
-replaceWith = replace_with
-removeQuotes = remove_quotes
-withAttribute = with_attribute
-withClass = with_class
-matchOnlyAtCol = match_only_at_col
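The deleted pyparsing helpers are easiest to see in combination. Below is a minimal sketch, assuming the package-level exports of pyparsing 3.x (the inputs and names here are made up for illustration, not taken from this repo):

```python
import math

from pyparsing import ParseException, Word, match_only_at_col, nums, one_of, replace_with

# An integer that is only accepted when the match starts in column 1.
line_start_int = Word(nums).set_parse_action(lambda toks: int(toks[0]))
line_start_int.add_parse_action(match_only_at_col(1))

# Replace "N/A"/"NA" markers with math.nan, as in the replace_with docstring above.
na = one_of("N/A NA").set_parse_action(replace_with(math.nan))

print(line_start_int.parse_string("42"))   # -> [42]
print(na.parse_string("N/A"))              # -> [nan]
try:
    line_start_int.parse_string("   42")   # leading spaces: match is at column 4, not 1
except ParseException as err:
    print("rejected:", err)
```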
diff --git a/spaces/BilalSardar/Black-N-White-To-Color/README.md b/spaces/BilalSardar/Black-N-White-To-Color/README.md
deleted file mode 100644
index b7bd0b61bf25b3eb09bd53a01b9234eb603f3e79..0000000000000000000000000000000000000000
--- a/spaces/BilalSardar/Black-N-White-To-Color/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Black N White To Color
-emoji: 🦀
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h
deleted file mode 100644
index 39bbb7927efd9fc1037f3a050429d0769e328ad5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/execution_policy.h
+++ /dev/null
@@ -1,84 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-// histogram
-// sort (radix-sort, merge-sort)
-
-#include
-#include
-#include
-
-// pass
-// ----------------
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-// fail
-// ----------------
-// fails with mixed types
-#include
-
-// mixed types are not compiling, commented in testing/scan.cu
-#include
-
-// stubs passed
-// ----------------
-#include
-#include
-#include
-#include
-#include
-
-// work in progress
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h
deleted file mode 100644
index 9d4ac199810cd7e8dcc815c8f90c43f36cb84d61..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/sort.h
+++ /dev/null
@@ -1,154 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator>
-__host__ __device__
-  void sort(thrust::execution_policy<DerivedPolicy> &exec,
-            RandomAccessIterator first,
-            RandomAccessIterator last);
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator,
-         typename StrictWeakOrdering>
-__host__ __device__
-  void sort(thrust::execution_policy<DerivedPolicy> &exec,
-            RandomAccessIterator first,
-            RandomAccessIterator last,
-            StrictWeakOrdering comp);
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2>
-__host__ __device__
-  void sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                   RandomAccessIterator1 keys_first,
-                   RandomAccessIterator1 keys_last,
-                   RandomAccessIterator2 values_first);
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2,
-         typename StrictWeakOrdering>
-__host__ __device__
-  void sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                   RandomAccessIterator1 keys_first,
-                   RandomAccessIterator1 keys_last,
-                   RandomAccessIterator2 values_first,
-                   StrictWeakOrdering comp);
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator>
-__host__ __device__
-  void stable_sort(thrust::execution_policy<DerivedPolicy> &exec,
-                   RandomAccessIterator first,
-                   RandomAccessIterator last);
-
-
-// XXX it is an error to call this function; it has no implementation
-template<typename DerivedPolicy,
-         typename RandomAccessIterator,
-         typename StrictWeakOrdering>
-__host__ __device__
-  void stable_sort(thrust::execution_policy<DerivedPolicy> &exec,
-                   RandomAccessIterator first,
-                   RandomAccessIterator last,
-                   StrictWeakOrdering comp);
-
-
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2>
-__host__ __device__
-  void stable_sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                          RandomAccessIterator1 keys_first,
-                          RandomAccessIterator1 keys_last,
-                          RandomAccessIterator2 values_first);
-
-
-// XXX it is an error to call this function; it has no implementation
-template<typename DerivedPolicy,
-         typename RandomAccessIterator1,
-         typename RandomAccessIterator2,
-         typename StrictWeakOrdering>
-__host__ __device__
-  void stable_sort_by_key(thrust::execution_policy<DerivedPolicy> &exec,
-                          RandomAccessIterator1 keys_first,
-                          RandomAccessIterator1 keys_last,
-                          RandomAccessIterator2 values_first,
-                          StrictWeakOrdering comp);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator>
-__host__ __device__
-  bool is_sorted(thrust::execution_policy<DerivedPolicy> &exec,
-                 ForwardIterator first,
-                 ForwardIterator last);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename Compare>
-__host__ __device__
-  bool is_sorted(thrust::execution_policy<DerivedPolicy> &exec,
-                 ForwardIterator first,
-                 ForwardIterator last,
-                 Compare comp);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator>
-__host__ __device__
-  ForwardIterator is_sorted_until(thrust::execution_policy<DerivedPolicy> &exec,
-                                  ForwardIterator first,
-                                  ForwardIterator last);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename Compare>
-__host__ __device__
-  ForwardIterator is_sorted_until(thrust::execution_policy<DerivedPolicy> &exec,
-                                  ForwardIterator first,
-                                  ForwardIterator last,
-                                  Compare comp);
-
-
-} // end generic
-} // end detail
-} // end system
-} // end thrust
-
-#include <thrust/system/detail/generic/sort.inl>
-
diff --git a/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py b/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py
deleted file mode 100644
index 8c1d11f364e29707069b881fdca6f99dc1a52680..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/datasets/pipelines/loading.py
+++ /dev/null
@@ -1,470 +0,0 @@
-import os.path as osp
-
-import mmcv
-import numpy as np
-import pycocotools.mask as maskUtils
-
-from mmdet.core import BitmapMasks, PolygonMasks
-from ..builder import PIPELINES
-
-
-@PIPELINES.register_module()
-class LoadImageFromFile(object):
- """Load an image from file.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename"). Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
-            numpy array. If set to False, the loaded image is a uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- to_float32=False,
- color_type='color',
- file_client_args=dict(backend='disk')):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def __call__(self, results):
- """Call functions to load image and get image meta information.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results['img_prefix'] is not None:
- filename = osp.join(results['img_prefix'],
- results['img_info']['filename'])
- else:
- filename = results['img_info']['filename']
-
- img_bytes = self.file_client.get(filename)
- img = mmcv.imfrombytes(img_bytes, flag=self.color_type)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- results['img_fields'] = ['img']
- return results
-
- def __repr__(self):
- repr_str = (f'{self.__class__.__name__}('
- f'to_float32={self.to_float32}, '
- f"color_type='{self.color_type}', "
- f'file_client_args={self.file_client_args})')
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadImageFromWebcam(LoadImageFromFile):
- """Load an image from webcam.
-
-    Similar to :obj:`LoadImageFromFile`, but the image read from the webcam is already in
- ``results['img']``.
- """
-
- def __call__(self, results):
- """Call functions to add image meta information.
-
- Args:
- results (dict): Result dict with Webcam read image in
- ``results['img']``.
-
- Returns:
- dict: The dict contains loaded image and meta information.
- """
-
- img = results['img']
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = None
- results['ori_filename'] = None
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- results['img_fields'] = ['img']
- return results
-
-
-@PIPELINES.register_module()
-class LoadMultiChannelImageFromFiles(object):
- """Load multi-channel images from a list of separate channel files.
-
- Required keys are "img_prefix" and "img_info" (a dict that must contain the
- key "filename", which is expected to be a list of filenames).
- Added or updated keys are "filename", "img", "img_shape",
- "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`),
- "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1).
-
- Args:
- to_float32 (bool): Whether to convert the loaded image to a float32
-            numpy array. If set to False, the loaded image is a uint8 array.
- Defaults to False.
- color_type (str): The flag argument for :func:`mmcv.imfrombytes`.
- Defaults to 'color'.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- to_float32=False,
- color_type='unchanged',
- file_client_args=dict(backend='disk')):
- self.to_float32 = to_float32
- self.color_type = color_type
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def __call__(self, results):
- """Call functions to load multiple images and get images meta
- information.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded images and meta information.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- if results['img_prefix'] is not None:
- filename = [
- osp.join(results['img_prefix'], fname)
- for fname in results['img_info']['filename']
- ]
- else:
- filename = results['img_info']['filename']
-
- img = []
- for name in filename:
- img_bytes = self.file_client.get(name)
- img.append(mmcv.imfrombytes(img_bytes, flag=self.color_type))
- img = np.stack(img, axis=-1)
- if self.to_float32:
- img = img.astype(np.float32)
-
- results['filename'] = filename
- results['ori_filename'] = results['img_info']['filename']
- results['img'] = img
- results['img_shape'] = img.shape
- results['ori_shape'] = img.shape
- # Set initial values for default meta_keys
- results['pad_shape'] = img.shape
- results['scale_factor'] = 1.0
- num_channels = 1 if len(img.shape) < 3 else img.shape[2]
- results['img_norm_cfg'] = dict(
- mean=np.zeros(num_channels, dtype=np.float32),
- std=np.ones(num_channels, dtype=np.float32),
- to_rgb=False)
- return results
-
- def __repr__(self):
- repr_str = (f'{self.__class__.__name__}('
- f'to_float32={self.to_float32}, '
- f"color_type='{self.color_type}', "
- f'file_client_args={self.file_client_args})')
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadAnnotations(object):
- """Load mutiple types of annotations.
-
- Args:
- with_bbox (bool): Whether to parse and load the bbox annotation.
- Default: True.
- with_label (bool): Whether to parse and load the label annotation.
- Default: True.
- with_mask (bool): Whether to parse and load the mask annotation.
- Default: False.
- with_seg (bool): Whether to parse and load the semantic segmentation
- annotation. Default: False.
- poly2mask (bool): Whether to convert the instance masks from polygons
- to bitmaps. Default: True.
- file_client_args (dict): Arguments to instantiate a FileClient.
- See :class:`mmcv.fileio.FileClient` for details.
- Defaults to ``dict(backend='disk')``.
- """
-
- def __init__(self,
- with_bbox=True,
- with_label=True,
- with_mask=False,
- with_seg=False,
- poly2mask=True,
- file_client_args=dict(backend='disk')):
- self.with_bbox = with_bbox
- self.with_label = with_label
- self.with_mask = with_mask
- self.with_seg = with_seg
- self.poly2mask = poly2mask
- self.file_client_args = file_client_args.copy()
- self.file_client = None
-
- def _load_bboxes(self, results):
- """Private function to load bounding box annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded bounding box annotations.
- """
-
- ann_info = results['ann_info']
- results['gt_bboxes'] = ann_info['bboxes'].copy()
-
- gt_bboxes_ignore = ann_info.get('bboxes_ignore', None)
- if gt_bboxes_ignore is not None:
- results['gt_bboxes_ignore'] = gt_bboxes_ignore.copy()
- results['bbox_fields'].append('gt_bboxes_ignore')
- results['bbox_fields'].append('gt_bboxes')
- return results
-
- def _load_labels(self, results):
- """Private function to load label annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded label annotations.
- """
-
- results['gt_labels'] = results['ann_info']['labels'].copy()
- return results
-
- def _poly2mask(self, mask_ann, img_h, img_w):
- """Private function to convert masks represented with polygon to
- bitmaps.
-
- Args:
- mask_ann (list | dict): Polygon mask annotation input.
- img_h (int): The height of output mask.
- img_w (int): The width of output mask.
-
- Returns:
-            numpy.ndarray: The decoded bitmap mask of shape (img_h, img_w).
- """
-
- if isinstance(mask_ann, list):
- # polygon -- a single object might consist of multiple parts
- # we merge all parts into one mask rle code
- rles = maskUtils.frPyObjects(mask_ann, img_h, img_w)
- rle = maskUtils.merge(rles)
- elif isinstance(mask_ann['counts'], list):
- # uncompressed RLE
- rle = maskUtils.frPyObjects(mask_ann, img_h, img_w)
- else:
- # rle
- rle = mask_ann
- mask = maskUtils.decode(rle)
- return mask
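`_poly2mask` above wraps the standard pycocotools conversion: polygon parts are encoded as RLEs, merged, and decoded into one binary bitmap. A small standalone sketch of that flow, with a made-up 4x4 square polygon on an 8x8 canvas:

```python
import pycocotools.mask as maskUtils

img_h, img_w = 8, 8
# One instance made of a single polygon part, given as a flat
# [x0, y0, x1, y1, ...] list in COCO convention.
polygon = [[2.0, 2.0, 6.0, 2.0, 6.0, 6.0, 2.0, 6.0]]

rles = maskUtils.frPyObjects(polygon, img_h, img_w)  # one RLE per polygon part
rle = maskUtils.merge(rles)                          # union of all parts
mask = maskUtils.decode(rle)                         # uint8 array of shape (img_h, img_w)

print(mask.shape, mask.dtype, int(mask.sum()))
```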
-
- def process_polygons(self, polygons):
- """Convert polygons to list of ndarray and filter invalid polygons.
-
- Args:
- polygons (list[list]): Polygons of one instance.
-
- Returns:
- list[numpy.ndarray]: Processed polygons.
- """
-
- polygons = [np.array(p) for p in polygons]
- valid_polygons = []
- for polygon in polygons:
- if len(polygon) % 2 == 0 and len(polygon) >= 6:
- valid_polygons.append(polygon)
- return valid_polygons
-
- def _load_masks(self, results):
- """Private function to load mask annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded mask annotations.
-                If ``self.poly2mask`` is set ``True``, `gt_mask` will contain
-                :obj:`BitmapMasks`. Otherwise, :obj:`PolygonMasks` is used.
- """
-
- h, w = results['img_info']['height'], results['img_info']['width']
- gt_masks = results['ann_info']['masks']
- if self.poly2mask:
-            masks_all = []
- for mask in gt_masks:
- if 'full' in mask:
- full = self._poly2mask(mask['full'], h, w)*2
- visible = self._poly2mask(mask['visible'], h, w)
- full[visible==1] = 1
- masks_all.append(full)
-                else:
-                    visible = self._poly2mask(mask['visible'], h, w)
-                    masks_all.append(visible)
-
- gt_masks = BitmapMasks(masks_all, h, w)
- else:
- gt_masks = PolygonMasks(
- [self.process_polygons(polygons) for polygons in gt_masks], h,
- w)
- results['gt_masks'] = gt_masks
- results['mask_fields'].append('gt_masks')
- return results
-
- def _load_semantic_seg(self, results):
- """Private function to load semantic segmentation annotations.
-
- Args:
- results (dict): Result dict from :obj:`dataset`.
-
- Returns:
- dict: The dict contains loaded semantic segmentation annotations.
- """
-
- if self.file_client is None:
- self.file_client = mmcv.FileClient(**self.file_client_args)
-
- filename = osp.join(results['seg_prefix'],
- results['ann_info']['seg_map'])
- img_bytes = self.file_client.get(filename)
- results['gt_semantic_seg'] = mmcv.imfrombytes(
- img_bytes, flag='unchanged').squeeze()
- results['seg_fields'].append('gt_semantic_seg')
- return results
-
- def __call__(self, results):
- """Call function to load multiple types annotations.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded bounding box, label, mask and
- semantic segmentation annotations.
- """
-
- if self.with_bbox:
- results = self._load_bboxes(results)
- if results is None:
- return None
- if self.with_label:
- results = self._load_labels(results)
- if self.with_mask:
- results = self._load_masks(results)
- if self.with_seg:
- results = self._load_semantic_seg(results)
- return results
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(with_bbox={self.with_bbox}, '
- repr_str += f'with_label={self.with_label}, '
- repr_str += f'with_mask={self.with_mask}, '
- repr_str += f'with_seg={self.with_seg}, '
- repr_str += f'poly2mask={self.poly2mask}, '
-        repr_str += f'file_client_args={self.file_client_args})'
- return repr_str
-
-
-@PIPELINES.register_module()
-class LoadProposals(object):
- """Load proposal pipeline.
-
- Required key is "proposals". Updated keys are "proposals", "bbox_fields".
-
- Args:
- num_max_proposals (int, optional): Maximum number of proposals to load.
- If not specified, all proposals will be loaded.
- """
-
- def __init__(self, num_max_proposals=None):
- self.num_max_proposals = num_max_proposals
-
- def __call__(self, results):
- """Call function to load proposals from file.
-
- Args:
- results (dict): Result dict from :obj:`mmdet.CustomDataset`.
-
- Returns:
- dict: The dict contains loaded proposal annotations.
- """
-
- proposals = results['proposals']
- if proposals.shape[1] not in (4, 5):
- raise AssertionError(
- 'proposals should have shapes (n, 4) or (n, 5), '
- f'but found {proposals.shape}')
- proposals = proposals[:, :4]
-
- if self.num_max_proposals is not None:
- proposals = proposals[:self.num_max_proposals]
-
- if len(proposals) == 0:
- proposals = np.array([[0, 0, 0, 0]], dtype=np.float32)
- results['proposals'] = proposals
- results['bbox_fields'].append('proposals')
- return results
-
- def __repr__(self):
- return self.__class__.__name__ + \
- f'(num_max_proposals={self.num_max_proposals})'
-
-
-@PIPELINES.register_module()
-class FilterAnnotations(object):
- """Filter invalid annotations.
-
- Args:
- min_gt_bbox_wh (tuple[int]): Minimum width and height of ground truth
- boxes.
- """
-
- def __init__(self, min_gt_bbox_wh):
- # TODO: add more filter options
- self.min_gt_bbox_wh = min_gt_bbox_wh
-
- def __call__(self, results):
- assert 'gt_bboxes' in results
- gt_bboxes = results['gt_bboxes']
- w = gt_bboxes[:, 2] - gt_bboxes[:, 0]
- h = gt_bboxes[:, 3] - gt_bboxes[:, 1]
- keep = (w > self.min_gt_bbox_wh[0]) & (h > self.min_gt_bbox_wh[1])
- if not keep.any():
- return None
- else:
- keys = ('gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg')
- for key in keys:
- if key in results:
- results[key] = results[key][keep]
- return results
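These transforms are normally not instantiated directly; they are listed in a dataset pipeline config and built through the `PIPELINES` registry. A hedged sketch of such a config fragment (field values are illustrative, not taken from this space's WALT configs):

```python
# Illustrative mmdet-style pipeline fragments using the transforms above.
train_pipeline = [
    dict(type='LoadImageFromFile', to_float32=False, color_type='color'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True, poly2mask=True),
    dict(type='FilterAnnotations', min_gt_bbox_wh=(2, 2)),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadProposals', num_max_proposals=2000),
]
```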
diff --git a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py b/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py
deleted file mode 100644
index 63c54ee9a5ce2368494b775cc90fada1439feaa5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/transfiner/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_R_101_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 4 # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py
deleted file mode 100644
index dbea9ed5079c3007b151420ad8dba50cb723e5cd..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_tf.py
+++ /dev/null
@@ -1,801 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Tensorflow trainer class."""
-
-import datetime
-import math
-import os
-import warnings
-from typing import Callable, Dict, Optional, Tuple
-
-from .utils import ENV_VARS_TRUE_VALUES
-
-
-# Integrations must be imported before ML frameworks:
-# isort: off
-from .integrations import (
- is_comet_available,
- is_wandb_available,
-)
-
-# isort: on
-
-import numpy as np
-import tensorflow as tf
-from tensorflow.python.distribute.values import PerReplica
-
-from .modeling_tf_utils import TFPreTrainedModel
-from .optimization_tf import GradientAccumulator, create_optimizer
-from .trainer_utils import (
- PREFIX_CHECKPOINT_DIR,
- EvalPrediction,
- IntervalStrategy,
- PredictionOutput,
- enable_full_determinism,
- set_seed,
-)
-from .training_args_tf import TFTrainingArguments
-from .utils import logging
-
-
-if is_wandb_available():
- import wandb
-
-if is_comet_available():
- import comet_ml
-
-logger = logging.get_logger(__name__)
-
-
-class TFTrainer:
- """
- TFTrainer is a simple but feature-complete training and eval loop for TensorFlow, optimized for 🤗 Transformers.
-
- Args:
- model ([`TFPreTrainedModel`]):
- The model to train, evaluate or use for predictions.
- args ([`TFTrainingArguments`]):
- The arguments to tweak training.
- train_dataset ([`~tf.data.Dataset`], *optional*):
- The dataset to use for training. The dataset should yield tuples of `(features, labels)` where `features`
- is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by
- the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a
- QuestionAnswering head model with multiple targets, the loss is instead calculated by calling
- `model(features, **labels)`.
- eval_dataset ([`~tf.data.Dataset`], *optional*):
- The dataset to use for evaluation. The dataset should yield tuples of `(features, labels)` where `features`
- is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by
- the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a
- QuestionAnswering head model with multiple targets, the loss is instead calculated by calling
- `model(features, **labels)`.
- compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*):
- The function that will be used to compute metrics at evaluation. Must take a [`EvalPrediction`] and return
- a dictionary string to metric values.
- tb_writer (`tf.summary.SummaryWriter`, *optional*):
- Object to write to TensorBoard.
- optimizers (`Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule]`, *optional*):
- A tuple containing the optimizer and the scheduler to use. The optimizer default to an instance of
- [`tf.keras.optimizers.Adam`] if `args.weight_decay_rate` is 0 else an instance of [`AdamWeightDecay`]. The
- scheduler will default to an instance of [`tf.keras.optimizers.schedules.PolynomialDecay`] if
- `args.num_warmup_steps` is 0 else an instance of [`WarmUp`].
- """
-
- def __init__(
- self,
- model: TFPreTrainedModel,
- args: TFTrainingArguments,
- train_dataset: Optional[tf.data.Dataset] = None,
- eval_dataset: Optional[tf.data.Dataset] = None,
- compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,
- tb_writer: Optional[tf.summary.SummaryWriter] = None,
- optimizers: Tuple[tf.keras.optimizers.Optimizer, tf.keras.optimizers.schedules.LearningRateSchedule] = (
- None,
- None,
- ),
- ):
- self.model = model
- self.args = args
- self.train_dataset = train_dataset
- self.eval_dataset = eval_dataset
- self.compute_metrics = compute_metrics
- self.optimizer, self.lr_scheduler = optimizers
- self.gradient_accumulator = GradientAccumulator()
- self.global_step = 0
- self.epoch_logging = 0
- self.eval_loss = tf.keras.metrics.Sum()
-
- warnings.warn(
- "The class `TFTrainer` is deprecated and will be removed in version 5 of Transformers. "
- "We recommend using native Keras instead, by calling methods like `fit()` and `predict()` "
- "directly on the model object. Detailed examples of the Keras style can be found in our "
- "examples at https://github.com/huggingface/transformers/tree/main/examples/tensorflow",
- FutureWarning,
- )
-
- if tb_writer is not None:
- self.tb_writer = tb_writer
- else:
- self.tb_writer = tf.summary.create_file_writer(self.args.logging_dir)
-
- if is_wandb_available():
- self.setup_wandb()
- elif os.getenv("WANDB_DISABLED", "").upper() not in ENV_VARS_TRUE_VALUES:
- logger.info(
- "You are instantiating a Trainer but W&B is not installed. To use wandb logging, "
- "run `pip install wandb && wandb login` see https://docs.wandb.com/huggingface."
- )
-
- if is_comet_available():
- self.setup_comet()
- elif os.environ.get("COMET_MODE") != "DISABLED":
- logger.info(
- "To use comet_ml logging, run `pip/conda install comet_ml` "
- "see https://www.comet.ml/docs/python-sdk/huggingface/"
- )
-
- enable_full_determinism(self.args.seed) if self.args.full_determinism else set_seed(self.args.seed)
-
- def get_train_tfdataset(self) -> tf.data.Dataset:
- """
- Returns the training [`~tf.data.Dataset`].
-
- Subclass and override this method if you want to inject some custom behavior.
- """
- if self.train_dataset is None:
- raise ValueError("Trainer: training requires a train_dataset.")
-
- self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
- self.num_train_examples = self.train_dataset.cardinality().numpy()
-
- if self.num_train_examples < 0:
- raise ValueError("The training dataset must have an asserted cardinality")
-
- ds = (
- self.train_dataset.repeat()
- .shuffle(self.num_train_examples, seed=self.args.seed)
- .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)
- .prefetch(tf.data.experimental.AUTOTUNE)
- )
-
- return self.args.strategy.experimental_distribute_dataset(ds)
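`get_train_tfdataset` refuses to proceed unless the dataset reports a finite cardinality before it is repeated and batched. A minimal sketch of satisfying that requirement with made-up tensors (and, for generator-backed datasets, `tf.data.experimental.assert_cardinality`):

```python
import tensorflow as tf

# Datasets built from in-memory tensors already know their length.
features = {"input_ids": tf.zeros((100, 16), dtype=tf.int32)}
labels = tf.zeros((100,), dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((features, labels))
print(train_ds.cardinality().numpy())  # -> 100

# Generator-backed datasets report an unknown cardinality and must assert it
# explicitly, otherwise the check above raises ValueError.
def gen():
    for _ in range(100):
        yield {"input_ids": tf.zeros((16,), dtype=tf.int32)}, tf.constant(0, tf.int32)

gen_ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=({"input_ids": tf.TensorSpec((16,), tf.int32)},
                      tf.TensorSpec((), tf.int32)),
)
gen_ds = gen_ds.apply(tf.data.experimental.assert_cardinality(100))
print(gen_ds.cardinality().numpy())  # -> 100
```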
-
- def get_eval_tfdataset(self, eval_dataset: Optional[tf.data.Dataset] = None) -> tf.data.Dataset:
- """
- Returns the evaluation [`~tf.data.Dataset`].
-
- Args:
- eval_dataset ([`~tf.data.Dataset`], *optional*):
- If provided, will override *self.eval_dataset*. The dataset should yield tuples of `(features, labels)`
- where `features` is a dict of input features and `labels` is the labels. If `labels` is a tensor, the
- loss is calculated by the model by calling `model(features, labels=labels)`. If `labels` is a dict,
- such as when using a QuestionAnswering head model with multiple targets, the loss is instead calculated
- by calling `model(features, **labels)`.
-
- Subclass and override this method if you want to inject some custom behavior.
- """
- if eval_dataset is None and self.eval_dataset is None:
- raise ValueError("Trainer: evaluation requires an eval_dataset.")
-
- eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
- num_examples = eval_dataset.cardinality().numpy()
-
- if num_examples < 0:
- raise ValueError("The training dataset must have an asserted cardinality")
-
- approx = math.floor if self.args.dataloader_drop_last else math.ceil
- steps = approx(num_examples / self.args.eval_batch_size)
- ds = (
- eval_dataset.repeat()
- .batch(self.args.eval_batch_size, drop_remainder=self.args.dataloader_drop_last)
- .prefetch(tf.data.experimental.AUTOTUNE)
- )
-
- return self.args.strategy.experimental_distribute_dataset(ds), steps, num_examples
-
- def get_test_tfdataset(self, test_dataset: tf.data.Dataset) -> tf.data.Dataset:
- """
- Returns a test [`~tf.data.Dataset`].
-
- Args:
- test_dataset ([`~tf.data.Dataset`]):
- The dataset to use. The dataset should yield tuples of `(features, labels)` where `features` is a dict
- of input features and `labels` is the labels. If `labels` is a tensor, the loss is calculated by the
- model by calling `model(features, labels=labels)`. If `labels` is a dict, such as when using a
- QuestionAnswering head model with multiple targets, the loss is instead calculated by calling
- `model(features, **labels)`.
-
- Subclass and override this method if you want to inject some custom behavior.
- """
-
- num_examples = test_dataset.cardinality().numpy()
-
- if num_examples < 0:
- raise ValueError("The training dataset must have an asserted cardinality")
-
- steps = math.ceil(num_examples / self.args.eval_batch_size)
- ds = test_dataset.batch(self.args.eval_batch_size).prefetch(tf.data.experimental.AUTOTUNE)
-
- return self.args.strategy.experimental_distribute_dataset(ds), steps, num_examples
-
- def create_optimizer_and_scheduler(self, num_training_steps: int):
- """
- Setup the optimizer and the learning rate scheduler.
-
- We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
- TFTrainer's init through `optimizers`, or subclass and override this method.
- """
- if not self.optimizer and not self.lr_scheduler:
- warmup_steps = (
- self.args.warmup_steps
- if self.args.warmup_steps > 0
- else math.ceil(num_training_steps * self.args.warmup_ratio)
- )
-
- self.optimizer, self.lr_scheduler = create_optimizer(
- self.args.learning_rate,
- num_training_steps,
- warmup_steps,
- adam_beta1=self.args.adam_beta1,
- adam_beta2=self.args.adam_beta2,
- adam_epsilon=self.args.adam_epsilon,
- weight_decay_rate=self.args.weight_decay,
- power=self.args.poly_power,
- )
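`create_optimizer_and_scheduler` defers to `create_optimizer` from the optimization module, which returns an `(optimizer, lr_schedule)` pair. A hedged sketch of the equivalent standalone call, assuming the top-level `transformers.create_optimizer` export that the relative import above mirrors (all values are placeholders):

```python
from transformers import create_optimizer

num_training_steps = 10_000
warmup_steps = 500

# Linear warmup followed by a polynomial decay, with decoupled weight decay,
# mirroring what create_optimizer_and_scheduler sets up above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=num_training_steps,
    num_warmup_steps=warmup_steps,
    weight_decay_rate=0.01,
)
print(float(lr_schedule(0)), float(lr_schedule(warmup_steps)))
```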
-
- def setup_wandb(self):
- """
- Setup the optional Weights & Biases (`wandb`) integration.
-
- One can subclass and override this method to customize the setup if needed. Find more information `here
-        <https://docs.wandb.com/huggingface>`__. You can also override the following environment variables:
-
- Environment:
- WANDB_PROJECT:
- (Optional): str - "huggingface" by default, set this to a custom string to store results in a different
- project.
- WANDB_DISABLED:
- (Optional): boolean - defaults to false, set to "true" to disable wandb entirely.
- """
-
- logger.info('Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"')
- combined_dict = {**self.model.config.to_dict(), **self.args.to_sanitized_dict()}
- wandb.init(project=os.getenv("WANDB_PROJECT", "huggingface"), config=combined_dict, name=self.args.run_name)
-
- def setup_comet(self):
- """
- Setup the optional Comet.ml integration.
-
- Environment:
- COMET_MODE:
- (Optional): str - "OFFLINE", "ONLINE", or "DISABLED"
- COMET_PROJECT_NAME:
- (Optional): str - Comet.ml project name for experiments
- COMET_OFFLINE_DIRECTORY:
- (Optional): str - folder to use for saving offline experiments when `COMET_MODE` is "OFFLINE"
-
- For a number of configurable items in the environment, see `here
-        <https://www.comet.ml/docs/python-sdk/advanced/#comet-configuration-variables>`__
- """
- comet_mode = os.getenv("COMET_MODE", "ONLINE").upper()
- args = {"project_name": os.getenv("COMET_PROJECT_NAME", "huggingface")}
- experiment = None
- if comet_mode == "ONLINE":
- experiment = comet_ml.Experiment(**args)
- logger.info("Automatic Comet.ml online logging enabled")
- elif comet_mode == "OFFLINE":
- args["offline_directory"] = os.getenv("COMET_OFFLINE_DIRECTORY", "./")
- experiment = comet_ml.OfflineExperiment(**args)
- logger.info("Automatic Comet.ml offline logging enabled; use `comet upload` when finished")
- if experiment is not None:
- experiment._set_model_graph(self.model, framework="transformers")
- experiment._log_parameters(self.args, prefix="args/", framework="transformers")
- experiment._log_parameters(self.model.config, prefix="config/", framework="transformers")
-
- def prediction_loop(
- self,
- dataset: tf.data.Dataset,
- steps: int,
- num_examples: int,
- description: str,
- prediction_loss_only: Optional[bool] = None,
- ) -> PredictionOutput:
- """
- Prediction/evaluation loop, shared by [`~TFTrainer.evaluate`] and [`~TFTrainer.predict`].
-
- Works both with or without labels.
- """
-
- prediction_loss_only = (
- prediction_loss_only if prediction_loss_only is not None else self.args.prediction_loss_only
- )
-
- logger.info(f"***** Running {description} *****")
- logger.info(f" Num examples in dataset = {num_examples}")
- if description == "Evaluation":
- logger.info(f" Num examples in used in evaluation = {self.args.eval_batch_size * steps}")
- logger.info(f" Batch size = {self.args.eval_batch_size}")
-
- label_ids: np.ndarray = None
- preds: np.ndarray = None
- self.eval_loss.reset_states()
-
- # Reset the past mems state at the beginning of the evaluation if necessary.
- if self.args.past_index >= 0:
- self._past = None
-
- for step, batch in enumerate(dataset):
- logits = self.distributed_prediction_steps(batch)
- _, labels = batch
-
- if not prediction_loss_only:
- if isinstance(logits, tuple):
- logits = logits[0]
-
- if isinstance(labels, tuple):
- labels = labels[0]
-
- if self.args.n_replicas > 1:
- for val in logits.values:
- if preds is None:
- preds = val.numpy()
- else:
- preds = np.append(preds, val.numpy(), axis=0)
-
- for val in labels.values:
- if label_ids is None:
- label_ids = val.numpy()
- else:
- label_ids = np.append(label_ids, val.numpy(), axis=0)
- else:
- if preds is None:
- preds = logits.numpy()
- else:
- preds = np.append(preds, logits.numpy(), axis=0)
-
- if label_ids is None:
- label_ids = labels.numpy()
- else:
- label_ids = np.append(label_ids, labels.numpy(), axis=0)
-
- if step == steps - 1:
- break
-
- if self.compute_metrics is not None and preds is not None and label_ids is not None:
- metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
- else:
- metrics = {}
-
- metrics["eval_loss"] = self.eval_loss.result().numpy() / steps
-
- for key in list(metrics.keys()):
- if not key.startswith("eval_"):
- metrics[f"eval_{key}"] = metrics.pop(key)
-
- if self.args.past_index and hasattr(self, "_past"):
- # Clean the state at the end of training
- delattr(self, "_past")
-
- return PredictionOutput(predictions=preds, label_ids=label_ids, metrics=metrics)
-
- def log(self, logs: Dict[str, float]) -> None:
- """
- Log `logs` on the various objects watching training.
-
- Subclass and override this method to inject custom behavior.
-
- Args:
- logs (`Dict[str, float]`):
- The values to log.
- """
- logs["epoch"] = self.epoch_logging
-
- if self.tb_writer:
- with self.tb_writer.as_default():
- for k, v in logs.items():
- tf.summary.scalar(k, v, step=self.global_step)
- self.tb_writer.flush()
-
- if is_wandb_available():
- wandb.log(logs, step=self.global_step)
-
- if is_comet_available():
- experiment = comet_ml.config.get_global_experiment()
- if experiment is not None:
- experiment._log_metrics(
- logs, step=self.global_step, epoch=self.epoch_logging, framework="transformers"
- )
-
- output = {**logs, **{"step": self.global_step}}
-
- logger.info(output)
-
- def evaluate(self, eval_dataset: Optional[tf.data.Dataset] = None) -> Dict[str, float]:
- """
- Run evaluation and returns metrics.
-
- The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
- (pass it to the init `compute_metrics` argument).
-
- Args:
- eval_dataset ([`~tf.data.Dataset`], *optional*):
- Pass a dataset if you wish to override `self.eval_dataset`. The dataset should yield tuples of
- `(features, labels)` where `features` is a dict of input features and `labels` is the labels. If
- `labels` is a tensor, the loss is calculated by the model by calling `model(features, labels=labels)`.
- If `labels` is a dict, such as when using a QuestionAnswering head model with multiple targets, the
- loss is instead calculated by calling `model(features, **labels)`.
-
- Returns:
- A dictionary containing the evaluation loss and the potential metrics computed from the predictions.
- """
- eval_ds, steps, num_examples = self.get_eval_tfdataset(eval_dataset)
-
- output = self.prediction_loop(eval_ds, steps, num_examples, description="Evaluation")
- logs = {**output.metrics}
- logs["epoch"] = self.epoch_logging
-
- self.log(logs)
-
- return output.metrics
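As the docstring notes, metric computation is supplied by the caller through `compute_metrics`, which receives an `EvalPrediction` and returns a dict of named values. A minimal sketch for a classification setup (assuming `predictions` holds per-class logits):

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred.predictions are the accumulated logits, eval_pred.label_ids the labels.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}
```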
-
- def prediction_step(
- self, features: tf.Tensor, labels: tf.Tensor, nb_instances_in_global_batch: tf.Tensor
- ) -> tf.Tensor:
- """
- Compute the prediction on features and update the loss with labels.
-
- Subclass and override to inject some custom behavior.
- """
- per_example_loss, logits = self.run_model(features, labels, False)
- scaled_loss = per_example_loss / tf.cast(nb_instances_in_global_batch, dtype=per_example_loss.dtype)
-
- self.eval_loss.update_state(scaled_loss)
-
- return logits
-
- @tf.function
- def distributed_prediction_steps(self, batch):
- nb_instances_in_batch = self._compute_nb_instances(batch)
- inputs = self._get_step_inputs(batch, nb_instances_in_batch)
-
- logits = self.args.strategy.run(self.prediction_step, inputs)
-
- return logits
-
- def train(self) -> None:
- """
- Train method to train the model.
- """
- train_ds = self.get_train_tfdataset()
-
- if self.args.debug:
- tf.summary.trace_on(graph=True, profiler=True)
-
- self.gradient_accumulator.reset()
-
- num_update_steps_per_epoch = self.num_train_examples / self.total_train_batch_size
-
- # In fact, ``self.args.dataloader_drop_last`` has no effect in `trainer_tf.py`, because
- # the dataset is repeated before being batched.
- # It has the effect only when TPU is used which requires explicit tensor shape in order to make
- # the gradient accumulation implementation work.
- approx = math.floor if self.args.dataloader_drop_last else math.ceil
- num_update_steps_per_epoch = approx(num_update_steps_per_epoch)
-
- # At least one update for each epoch.
- num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1)
- self.steps_per_epoch = num_update_steps_per_epoch
-
- if self.args.max_steps > 0:
- t_total = self.args.max_steps
- epochs = (self.args.max_steps // self.steps_per_epoch) + int(
- self.args.max_steps % self.steps_per_epoch > 0
- )
- else:
- t_total = self.steps_per_epoch * self.args.num_train_epochs
- epochs = self.args.num_train_epochs
-
- # Since ``self.args.num_train_epochs`` can be `float`, we make ``epochs`` be a `float` always.
- epochs = float(epochs)
-
- with self.args.strategy.scope():
- self.create_optimizer_and_scheduler(num_training_steps=t_total)
- folder = os.path.join(self.args.output_dir, PREFIX_CHECKPOINT_DIR)
- ckpt = tf.train.Checkpoint(optimizer=self.optimizer, model=self.model)
- self.model.ckpt_manager = tf.train.CheckpointManager(ckpt, folder, max_to_keep=self.args.save_total_limit)
-
- iterations = self.optimizer.iterations
- epochs_trained = 0
- steps_trained_in_current_epoch = 0
- if self.model.ckpt_manager.latest_checkpoint:
- logger.info(
- f"Checkpoint file {self.model.ckpt_manager.latest_checkpoint} found and restoring from checkpoint"
- )
- ckpt.restore(self.model.ckpt_manager.latest_checkpoint).expect_partial()
-
- self.global_step = iterations.numpy()
-
- epochs_trained = self.global_step // self.steps_per_epoch
- steps_trained_in_current_epoch = self.global_step % self.steps_per_epoch
-
- logger.info(" Continuing training from checkpoint, will skip to saved global_step")
- logger.info(f" Continuing training from epoch {epochs_trained}")
- logger.info(f" Continuing training from global step {self.global_step}")
- logger.info(f" Will skip the first {steps_trained_in_current_epoch} steps in the first epoch")
-
- tf.summary.experimental.set_step(self.global_step)
-
- with self.tb_writer.as_default():
- tf.summary.text("args", self.args.to_json_string())
-
- self.tb_writer.flush()
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {self.num_train_examples}")
- # TODO: We might want to print a more precise ``epochs`` if self.args.max_steps > 0 ?
- logger.info(f" Num Epochs = {epochs}")
- logger.info(f" Instantaneous batch size per device = {self.args.per_device_train_batch_size}")
- logger.info(
- f" Total train batch size (w. parallel, distributed & accumulation) = {self.total_train_batch_size}"
- )
- logger.info(f" Gradient Accumulation steps = {self.args.gradient_accumulation_steps}")
- logger.info(f" Steps per epoch = {self.steps_per_epoch}")
- logger.info(f" Total optimization steps = {t_total}")
-
- self.train_loss = tf.keras.metrics.Sum()
- start_time = datetime.datetime.now()
-
- for epoch_iter in range(epochs_trained, int(epochs)):
- # Reset the past mems state at the beginning of each epoch if necessary.
- if self.args.past_index >= 0:
- self._past = None
-
- for step, batch in enumerate(train_ds):
- # Skip past any already trained steps if resuming training
- if steps_trained_in_current_epoch > 0:
- steps_trained_in_current_epoch -= 1
- continue
-
- self.distributed_training_steps(batch)
-
- self.global_step = iterations.numpy()
- self.epoch_logging = epoch_iter + (step + 1) / self.steps_per_epoch
-
- training_loss = self.train_loss.result() / (step + 1)
-
- if self.args.debug:
- logs = {}
- logs["loss"] = training_loss.numpy()
- logs["epoch"] = self.epoch_logging
-
- self.log(logs)
-
- if self.global_step == 1 and self.args.debug:
- with self.tb_writer.as_default():
- tf.summary.trace_export(
- name="training", step=self.global_step, profiler_outdir=self.args.logging_dir
- )
-
- if (
- self.args.eval_steps > 0
- and self.args.evaluation_strategy == IntervalStrategy.STEPS
- and self.global_step % self.args.eval_steps == 0
- ):
- self.evaluate()
-
- if (self.args.logging_steps > 0 and self.global_step % self.args.logging_steps == 0) or (
- self.global_step == 1 and self.args.logging_first_step
- ):
- logs = {}
- logs["loss"] = training_loss.numpy()
- logs["learning_rate"] = self.lr_scheduler(self.global_step).numpy()
- logs["epoch"] = self.epoch_logging
-
- self.log(logs)
-
- if self.args.save_steps > 0 and self.global_step % self.args.save_steps == 0:
- ckpt_save_path = self.model.ckpt_manager.save()
-
- logger.info(f"Saving checkpoint for step {self.global_step} at {ckpt_save_path}")
-
- if self.args.max_steps > 0 and self.global_step >= t_total:
- break
-
- if self.global_step % self.steps_per_epoch == 0:
- break
-
- self.train_loss.reset_states()
-
- if self.args.max_steps > 0 and self.global_step >= self.args.max_steps:
- break
-
- end_time = datetime.datetime.now()
-
- logger.info(f"Training took: {str(end_time - start_time)}")
-
- if self.args.past_index and hasattr(self, "_past"):
- # Clean the state at the end of training
- delattr(self, "_past")
-
- def training_step(self, features, labels, nb_instances_in_global_batch):
- """
- Perform a training step on features and labels.
-
- Subclass and override to inject some custom behavior.
- """
- per_example_loss, _ = self.run_model(features, labels, True)
- scaled_loss = per_example_loss / tf.cast(nb_instances_in_global_batch, dtype=per_example_loss.dtype)
- gradients = tf.gradients(scaled_loss, self.model.trainable_variables)
- gradients = [
- g if g is not None else tf.zeros_like(v) for g, v in zip(gradients, self.model.trainable_variables)
- ]
-
- if self.args.gradient_accumulation_steps > 1:
- self.gradient_accumulator(gradients)
-
- self.train_loss.update_state(scaled_loss)
-
- if self.args.gradient_accumulation_steps == 1:
- return gradients
-
- def apply_gradients(self, features, labels, nb_instances_in_global_batch):
- if self.args.gradient_accumulation_steps == 1:
- gradients = self.training_step(features, labels, nb_instances_in_global_batch)
-
- self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables)))
- else:
- for _ in tf.range(self.args.gradient_accumulation_steps):
- reduced_features = {
- k: ft[: self.args.train_batch_size // self.args.n_replicas] for k, ft in features.items()
- }
-
- if tf.is_tensor(labels):
- reduced_labels = labels[: self.args.train_batch_size // self.args.n_replicas]
- elif isinstance(labels, dict):
- reduced_labels = {
- k: lbl[: self.args.train_batch_size // self.args.n_replicas] for k, lbl in labels.items()
- }
- else:
- raise ValueError("The labels must be either a tf.Tensor or a dict.")
-
- self.training_step(reduced_features, reduced_labels, nb_instances_in_global_batch)
-
- features = {
- k: tf.concat(
- [ft[self.args.train_batch_size // self.args.n_replicas :], reduced_features[k]],
- axis=0,
- )
- for k, ft in features.items()
- }
-
- if tf.is_tensor(labels):
- labels = tf.concat(
- [labels[self.args.train_batch_size // self.args.n_replicas :], reduced_labels], axis=0
- )
- elif isinstance(labels, dict):
- labels = {
- k: tf.concat(
- [lbl[self.args.train_batch_size // self.args.n_replicas :], reduced_labels[k]],
- axis=0,
- )
- for k, lbl in labels.items()
- }
- else:
- raise ValueError("The labels must be either a tf.Tensor or a dict.")
-
- gradients = self.gradient_accumulator.gradients
- gradients = [
- (tf.clip_by_value(grad, -self.args.max_grad_norm, self.args.max_grad_norm)) for grad in gradients
- ]
-
- self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables)))
- self.gradient_accumulator.reset()
-
- @tf.function
- def distributed_training_steps(self, batch):
- with self.args.strategy.scope():
- nb_instances_in_batch = self._compute_nb_instances(batch)
- inputs = self._get_step_inputs(batch, nb_instances_in_batch)
-
- self.args.strategy.run(self.apply_gradients, inputs)
-
- @staticmethod
- def _compute_nb_instances(batch):
- labels = batch[-1]
- if isinstance(labels, PerReplica):
- labels = tf.concat(labels.values, axis=0)
-
- nb_instances = tf.reduce_sum(tf.cast(labels != -100, dtype=tf.int32))
-
- return nb_instances
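`_compute_nb_instances` counts only the label positions that are not the ignore index (-100); that count is what the per-replica losses are scaled by. In isolation:

```python
import tensorflow as tf

labels = tf.constant([[1, 2, -100, -100],
                      [0, -100, -100, -100]])

nb_instances = tf.reduce_sum(tf.cast(labels != -100, dtype=tf.int32))
print(int(nb_instances))  # -> 3
```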
-
- @staticmethod
- def _get_step_inputs(batch, nb_instances):
- features, labels = batch
-
- if isinstance(labels, PerReplica):
- # need to make a `PerReplica` objects for ``nb_instances``
- nb_instances = PerReplica([nb_instances] * len(labels.values))
-
- step_inputs = (features, labels, nb_instances)
-
- return step_inputs
-
- def run_model(self, features, labels, training):
- """
- Computes the loss of the given features and labels pair.
-
- Subclass and override this method if you want to inject some custom behavior.
-
- Args:
- features (`tf.Tensor`): A batch of input features.
- labels (`tf.Tensor`): A batch of labels.
- training (`bool`): Whether or not to run the model in training mode.
-
- Returns:
- A tuple of two `tf.Tensor`: The loss and logits.
- """
-
- if self.args.past_index >= 0 and getattr(self, "_past", None) is not None:
- features["mems"] = self._past
-
- if isinstance(labels, (dict)):
- outputs = self.model(features, training=training, **labels)[:2]
- else:
- outputs = self.model(features, labels=labels, training=training)[:2]
-
- loss, logits = outputs[:2]
-
- if self.args.past_index >= 0:
- self._past = outputs[self.args.past_index]
-
- return loss, logits
-
- def predict(self, test_dataset: tf.data.Dataset) -> PredictionOutput:
- """
- Run prediction and returns predictions and potential metrics.
-
- Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
- will also return metrics, like in `evaluate()`.
-
- Args:
- test_dataset ([`~tf.data.Dataset`]):
- Dataset to run the predictions on. The dataset should yield tuples of `(features, labels)` where
- `features` is a dict of input features and `labels` is the labels. If `labels` is a tensor, the loss is
- calculated by the model by calling `model(features, labels=labels)`. If `labels` is a dict, such as
- when using a QuestionAnswering head model with multiple targets, the loss is instead calculated by
- calling `model(features, **labels)`
-
- Returns: *NamedTuple* A namedtuple with the following keys:
-
- - predictions (`np.ndarray`): The predictions on `test_dataset`.
- - label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- - metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
- labels).
- """
- test_ds, steps, num_examples = self.get_test_tfdataset(test_dataset)
-
- return self.prediction_loop(test_ds, steps, num_examples, description="Prediction")
-
- def save_model(self, output_dir: Optional[str] = None):
- """
- Will save the model, so you can reload it using `from_pretrained()`.
- """
- output_dir = output_dir if output_dir is not None else self.args.output_dir
-
- logger.info(f"Saving model in {output_dir}")
-
- if not isinstance(self.model, TFPreTrainedModel):
- raise ValueError("Trainer.model appears to not be a PreTrainedModel")
-
- self.model.save_pretrained(output_dir)
\ No newline at end of file
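Putting the pieces together, the deprecated class was driven roughly as follows. This is a hedged sketch with placeholder data and model names, assuming a transformers version that still ships `TFTrainer`; it is not code from this space:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, TFTrainer, TFTrainingArguments

# Tiny placeholder dataset of (features, labels) with a known cardinality.
features = {"input_ids": tf.ones((32, 8), dtype=tf.int32),
            "attention_mask": tf.ones((32, 8), dtype=tf.int32)}
labels = tf.zeros((32,), dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((features, labels))

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TFTrainingArguments(output_dir="./out", per_device_train_batch_size=8, num_train_epochs=1)

trainer = TFTrainer(model=model, args=args, train_dataset=train_ds, eval_dataset=train_ds)
trainer.train()
print(trainer.evaluate())
```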
diff --git a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py b/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py
deleted file mode 100644
index e1e1025c8f06010197c50917ac9dd1ddeaf7e5aa..0000000000000000000000000000000000000000
--- a/spaces/CoffeeBrewer/CompVis-stable-diffusion-v1-4/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch()
\ No newline at end of file
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py
deleted file mode 100644
index ae99b2778f346df88890a0f3e2c1d0b730a5309d..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/test.py
+++ /dev/null
@@ -1,7 +0,0 @@
-#encoding = utf-8
-import numpy as np
-
-assert_true = np.testing.assert_
-assert_equal = np.testing.assert_equal
-assert_array_equal = np.testing.assert_array_equal
-assert_almost_equal = np.testing.assert_almost_equal
diff --git a/spaces/Cyril666/my_abi/app.py b/spaces/Cyril666/my_abi/app.py
deleted file mode 100644
index 36e4bca6c60b2aa7eecb1d978ef035ebb2e60a62..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/my_abi/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-os.system('pip install --upgrade gdown')
-import gdown
-gdown.download(id='1mYM_26qHUom_5NU7iutHneB_KHlLjL5y', output='workdir.zip')
-os.system('unzip workdir.zip')
-
-import glob
-import gradio as gr
-from demo import get_model, preprocess, postprocess, load
-from utils import Config, Logger, CharsetMapper
-
-def process_image(image):
- config = Config('configs/train_abinet.yaml')
- config.model_vision_checkpoint = None
- model = get_model(config)
- model = load(model, 'workdir/train-abinet/best-train-abinet.pth')
- charset = CharsetMapper(filename=config.dataset_charset_path, max_length=config.dataset_max_length + 1)
-
- img = image.convert('RGB')
- img = preprocess(img, config.dataset_image_width, config.dataset_image_height)
- res = model(img)
- return postprocess(res, charset, 'alignment')[0][0]
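A hedged usage sketch for `process_image` above; the image path is a placeholder, and it assumes the unzipped `workdir` checkpoint and config from this space are present:

```python
from PIL import Image

# "sample_word.png" is a placeholder for any cropped word image.
image = Image.open("sample_word.png")
print(process_image(image))  # prints the recognized text
```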
-
-title = "张博强毕设中期展示(文本识别部分)"
-description = "西北工业大学航海学院张博强毕设,目前识别部分进度为复现abinet,本网页为abinet复现的可视化web端展示"
-#article = "
'
-
- # We use ?name2 and ?time.time() to force the browser to reset caches
- img_bot = f'' if Path("cache/pfp_character.png").exists() else ''
- img_me = f'' if Path("cache/pfp_me.png").exists() else ''
-
- for i, _row in enumerate(history[::-1]):
- row = [convert_to_markdown(entry) for entry in _row]
-
- output += f"""
-
"
- return output
-
-
-def chat_html_wrapper(history, name1, name2, mode, reset_cache=False):
- if mode == "cai-chat":
- return generate_cai_chat_html(history, name1, name2, reset_cache)
- elif mode == "chat":
- return generate_chat_html(history, name1, name2)
- elif mode == "instruct":
- return generate_instruct_html(history)
- else:
- return ''
diff --git a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py b/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py
deleted file mode 100644
index 35a888b906e7df8634cfdcec914f650c6cefd26a..0000000000000000000000000000000000000000
--- a/spaces/aquaaaaaaaaaaaa/AI-minato_aqua/inference/slicer.py
+++ /dev/null
@@ -1,158 +0,0 @@
-import time
-
-import numpy as np
-import torch
-import torchaudio
-from scipy.ndimage import maximum_filter1d, uniform_filter1d
-
-
-def timeit(func):
- def run(*args, **kwargs):
- t = time.time()
- res = func(*args, **kwargs)
-        print('executing \'%s\' cost %.3fs' % (func.__name__, time.time() - t))
- return res
-
- return run
-
-
-# @timeit
-def _window_maximum(arr, win_sz):
- return maximum_filter1d(arr, size=win_sz)[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1]
-
-
-# @timeit
-def _window_rms(arr, win_sz):
- filtered = np.sqrt(uniform_filter1d(np.power(arr, 2), win_sz) - np.power(uniform_filter1d(arr, win_sz), 2))
- return filtered[win_sz // 2: win_sz // 2 + arr.shape[0] - win_sz + 1]
-
-
-def level2db(levels, eps=1e-12):
- return 20 * np.log10(np.clip(levels, a_min=eps, a_max=1))
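`level2db` converts linear amplitude levels to decibels, with `eps` clamping zeros so that `log10` never sees 0. For example (using the function defined above):

```python
import numpy as np

levels = np.array([1.0, 0.5, 0.1, 0.0])
# 0.0 is clipped to eps=1e-12 instead of producing -inf.
print(level2db(levels))  # -> approximately [0., -6.02, -20., -240.]
```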
-
-
-def _apply_slice(audio, begin, end):
- if len(audio.shape) > 1:
- return audio[:, begin: end]
- else:
- return audio[begin: end]
-
-
-class Slicer:
- def __init__(self,
- sr: int,
- db_threshold: float = -40,
- min_length: int = 5000,
- win_l: int = 300,
- win_s: int = 20,
- max_silence_kept: int = 500):
- self.db_threshold = db_threshold
- self.min_samples = round(sr * min_length / 1000)
- self.win_ln = round(sr * win_l / 1000)
- self.win_sn = round(sr * win_s / 1000)
- self.max_silence = round(sr * max_silence_kept / 1000)
- if not self.min_samples >= self.win_ln >= self.win_sn:
- raise ValueError('The following condition must be satisfied: min_length >= win_l >= win_s')
- if not self.max_silence >= self.win_sn:
- raise ValueError('The following condition must be satisfied: max_silence_kept >= win_s')
-
- @timeit
- def slice(self, audio):
- samples = audio
- if samples.shape[0] <= self.min_samples:
- return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}}
- # get absolute amplitudes
- abs_amp = np.abs(samples - np.mean(samples))
- # calculate local maximum with large window
- win_max_db = level2db(_window_maximum(abs_amp, win_sz=self.win_ln))
- sil_tags = []
- left = right = 0
- while right < win_max_db.shape[0]:
- if win_max_db[right] < self.db_threshold:
- right += 1
- elif left == right:
- left += 1
- right += 1
- else:
- if left == 0:
- split_loc_l = left
- else:
- sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn))
- split_win_l = left + np.argmin(rms_db_left)
- split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn])
- if len(sil_tags) != 0 and split_loc_l - sil_tags[-1][1] < self.min_samples and right < win_max_db.shape[
- 0] - 1:
- right += 1
- left = right
- continue
- if right == win_max_db.shape[0] - 1:
- split_loc_r = right + self.win_ln
- else:
- sil_right_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_right = level2db(_window_rms(samples[right + self.win_ln - sil_right_n: right + self.win_ln],
- win_sz=self.win_sn))
- split_win_r = right + self.win_ln - sil_right_n + np.argmin(rms_db_right)
- split_loc_r = split_win_r + np.argmin(abs_amp[split_win_r: split_win_r + self.win_sn])
- sil_tags.append((split_loc_l, split_loc_r))
- right += 1
- left = right
- if left != right:
- sil_left_n = min(self.max_silence, (right + self.win_ln - left) // 2)
- rms_db_left = level2db(_window_rms(samples[left: left + sil_left_n], win_sz=self.win_sn))
- split_win_l = left + np.argmin(rms_db_left)
- split_loc_l = split_win_l + np.argmin(abs_amp[split_win_l: split_win_l + self.win_sn])
- sil_tags.append((split_loc_l, samples.shape[0]))
- if len(sil_tags) == 0:
- return {"0": {"slice": False, "split_time": f"0,{len(audio)}"}}
- else:
- chunks = []
- # The first silent segment does not start at the beginning, so prepend the leading voiced chunk
- if sil_tags[0][0]:
- chunks.append({"slice": False, "split_time": f"0,{sil_tags[0][0]}"})
- for i in range(0, len(sil_tags)):
- # Mark the voiced chunk between silences (skipped for the first tag)
- if i:
- chunks.append({"slice": False, "split_time": f"{sil_tags[i - 1][1]},{sil_tags[i][0]}"})
- # Mark every silent chunk
- chunks.append({"slice": True, "split_time": f"{sil_tags[i][0]},{sil_tags[i][1]}"})
- # The last silent segment does not reach the end, so append the trailing voiced chunk
- if sil_tags[-1][1] != len(audio):
- chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1]},{len(audio)}"})
- chunk_dict = {}
- for i in range(len(chunks)):
- chunk_dict[str(i)] = chunks[i]
- return chunk_dict
-
-
-def cut(audio_path, db_thresh=-30, min_len=5000, win_l=300, win_s=20, max_sil_kept=500):
- audio, sr = torchaudio.load(audio_path)
- if len(audio.shape) == 2 and audio.shape[0] >= 2:  # multi-channel: torchaudio returns (channels, samples), so average down to mono
- audio = torch.mean(audio, dim=0).unsqueeze(0)
- audio = audio.cpu().numpy()[0]
-
- slicer = Slicer(
- sr=sr,
- db_threshold=db_thresh,
- min_length=min_len,
- win_l=win_l,
- win_s=win_s,
- max_silence_kept=max_sil_kept
- )
- chunks = slicer.slice(audio)
- return chunks
-
-
-def chunks2audio(audio_path, chunks):
- chunks = dict(chunks)
- audio, sr = torchaudio.load(audio_path)
- if len(audio.shape) == 2 and audio.shape[0] >= 2:  # multi-channel: torchaudio returns (channels, samples), so average down to mono
- audio = torch.mean(audio, dim=0).unsqueeze(0)
- audio = audio.cpu().numpy()[0]
- result = []
- for k, v in chunks.items():
- tag = v["split_time"].split(",")
- result.append((v["slice"], audio[int(tag[0]):int(tag[1])]))
- return result, sr
-
-
diff --git a/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx
deleted file mode 100644
index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000
--- a/spaces/arch-123/bingo/src/lib/hooks/use-at-bottom.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import * as React from 'react'
-
-export function useAtBottom(offset = 0) {
- const [isAtBottom, setIsAtBottom] = React.useState(false)
-
- React.useEffect(() => {
- const handleScroll = () => {
- setIsAtBottom(
- window.innerHeight + window.scrollY >=
- document.body.offsetHeight - offset
- )
- }
-
- window.addEventListener('scroll', handleScroll, { passive: true })
- handleScroll()
-
- return () => {
- window.removeEventListener('scroll', handleScroll)
- }
- }, [offset])
-
- return isAtBottom
-}
diff --git a/spaces/arnaucas/wildfire-detection/app.py b/spaces/arnaucas/wildfire-detection/app.py
deleted file mode 100644
index 25554639e109e06aeb14d471bebe91c7405feb51..0000000000000000000000000000000000000000
--- a/spaces/arnaucas/wildfire-detection/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import gradio as gr
-import os
-from transformers import pipeline
-from pathlib import Path
-from PIL import Image
-import numpy as np
-
-example_imgs = ["examples/img0.jpg",
- "examples/img1.jpg",
- "examples/img2.jpg",
- "examples/img3.jpg"]
-
-pipe = pipeline("image-classification", model="arnaucas/wildfire-classifier")
-
-def inference(image):
- image = Image.fromarray(np.uint8(image)).convert('RGB')
- output = pipe(image)
- result = {item['label']: item['score'] for item in output}
- return result
-
-gr.Interface(
- fn=inference,
- title="Wildfire Detection",
- description = "Predict whether an image contains wildfire or not",
- inputs="image",
- examples=example_imgs,
- outputs=gr.Label(),
- cache_examples=False,
- theme='earneleh/paris',
- article = "Author: Arnau Castellano",
-).launch(debug=True, enable_queue=True)
\ No newline at end of file
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py
deleted file mode 100644
index b4f11ee1f0f082001218c2474f7da773d1492fa3..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA3_256.py
+++ /dev/null
@@ -1,174 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-from Crypto.Util.py3compat import bord
-
-from Crypto.Util._raw_api import (load_pycryptodome_raw_lib,
- VoidPointer, SmartPointer,
- create_string_buffer,
- get_raw_buffer, c_size_t,
- c_uint8_ptr, c_ubyte)
-
-from Crypto.Hash.keccak import _raw_keccak_lib
-
-class SHA3_256_Hash(object):
- """A SHA3-256 hash object.
- Do not instantiate directly.
- Use the :func:`new` function.
-
- :ivar oid: ASN.1 Object ID
- :vartype oid: string
-
- :ivar digest_size: the size in bytes of the resulting hash
- :vartype digest_size: integer
- """
-
- # The size of the resulting hash in bytes.
- digest_size = 32
-
- # ASN.1 Object ID
- oid = "2.16.840.1.101.3.4.2.8"
-
- # Input block size for HMAC
- block_size = 136
-
- def __init__(self, data, update_after_digest):
- self._update_after_digest = update_after_digest
- self._digest_done = False
- self._padding = 0x06
-
- state = VoidPointer()
- result = _raw_keccak_lib.keccak_init(state.address_of(),
- c_size_t(self.digest_size * 2),
- c_ubyte(24))
- if result:
- raise ValueError("Error %d while instantiating SHA-3/256"
- % result)
- self._state = SmartPointer(state.get(),
- _raw_keccak_lib.keccak_destroy)
- if data:
- self.update(data)
-
- def update(self, data):
- """Continue hashing of a message by consuming the next chunk of data.
-
- Args:
- data (byte string/byte array/memoryview): The next chunk of the message being hashed.
- """
-
- if self._digest_done and not self._update_after_digest:
- raise TypeError("You can only call 'digest' or 'hexdigest' on this object")
-
- result = _raw_keccak_lib.keccak_absorb(self._state.get(),
- c_uint8_ptr(data),
- c_size_t(len(data))
- )
- if result:
- raise ValueError("Error %d while updating SHA-3/256"
- % result)
- return self
-
- def digest(self):
- """Return the **binary** (non-printable) digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Binary form.
- :rtype: byte string
- """
-
- self._digest_done = True
-
- bfr = create_string_buffer(self.digest_size)
- result = _raw_keccak_lib.keccak_digest(self._state.get(),
- bfr,
- c_size_t(self.digest_size),
- c_ubyte(self._padding))
- if result:
- raise ValueError("Error %d while instantiating SHA-3/256"
- % result)
-
- self._digest_value = get_raw_buffer(bfr)
- return self._digest_value
-
- def hexdigest(self):
- """Return the **printable** digest of the message that has been hashed so far.
-
- :return: The hash digest, computed over the data processed so far.
- Hexadecimal encoded.
- :rtype: string
- """
-
- return "".join(["%02x" % bord(x) for x in self.digest()])
-
- def copy(self):
- """Return a copy ("clone") of the hash object.
-
- The copy will have the same internal state as the original hash
- object.
- This can be used to efficiently compute the digests of strings that
- share a common initial substring.
-
- :return: A hash object of the same type
- """
-
- clone = self.new()
- result = _raw_keccak_lib.keccak_copy(self._state.get(),
- clone._state.get())
- if result:
- raise ValueError("Error %d while copying SHA3-256" % result)
- return clone
-
- def new(self, data=None):
- """Create a fresh SHA3-256 hash object."""
-
- return type(self)(data, self._update_after_digest)
-
-
-def new(*args, **kwargs):
- """Create a new hash object.
-
- Args:
- data (byte string/byte array/memoryview):
- The very first chunk of the message to hash.
- It is equivalent to an early call to :meth:`update`.
- update_after_digest (boolean):
- Whether :meth:`digest` can be followed by another :meth:`update`
- (default: ``False``).
-
- :Return: A :class:`SHA3_256_Hash` hash object
- """
-
- data = kwargs.pop("data", None)
- update_after_digest = kwargs.pop("update_after_digest", False)
- if len(args) == 1:
- if data:
- raise ValueError("Initial data for hash specified twice")
- data = args[0]
-
- if kwargs:
- raise TypeError("Unknown parameters: " + str(kwargs))
-
- return SHA3_256_Hash(data, update_after_digest)
-
-# The size of the resulting hash in bytes.
-digest_size = SHA3_256_Hash.digest_size
-
-# Input block size for HMAC
-block_size = 136
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py
deleted file mode 100644
index 841c18a220002305c6734a16ee40d4ad0facee87..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PcxImagePlugin.py
+++ /dev/null
@@ -1,220 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# PCX file handling
-#
-# This format was originally used by ZSoft's popular PaintBrush
-# program for the IBM PC. It is also supported by many MS-DOS and
-# Windows applications, including the Windows PaintBrush program in
-# Windows 3.
-#
-# history:
-# 1995-09-01 fl Created
-# 1996-05-20 fl Fixed RGB support
-# 1997-01-03 fl Fixed 2-bit and 4-bit support
-# 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1)
-# 1999-02-07 fl Added write support
-# 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust
-# 2002-07-30 fl Seek from to current position, not beginning of file
-# 2003-06-03 fl Extract DPI settings (info["dpi"])
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1995-2003 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import io
-import logging
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-logger = logging.getLogger(__name__)
-
-
-def _accept(prefix):
- return prefix[0] == 10 and prefix[1] in [0, 2, 3, 5]
-
-
-##
-# Image plugin for Paintbrush images.
-
-
-class PcxImageFile(ImageFile.ImageFile):
-
- format = "PCX"
- format_description = "Paintbrush"
-
- def _open(self):
-
- # header
- s = self.fp.read(128)
- if not _accept(s):
- raise SyntaxError("not a PCX file")
-
- # image
- bbox = i16(s, 4), i16(s, 6), i16(s, 8) + 1, i16(s, 10) + 1
- if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:
- raise SyntaxError("bad PCX image size")
- logger.debug("BBox: %s %s %s %s", *bbox)
-
- # format
- version = s[1]
- bits = s[3]
- planes = s[65]
- provided_stride = i16(s, 66)
- logger.debug(
- "PCX version %s, bits %s, planes %s, stride %s",
- version,
- bits,
- planes,
- provided_stride,
- )
-
- self.info["dpi"] = i16(s, 12), i16(s, 14)
-
- if bits == 1 and planes == 1:
- mode = rawmode = "1"
-
- elif bits == 1 and planes in (2, 4):
- mode = "P"
- rawmode = "P;%dL" % planes
- self.palette = ImagePalette.raw("RGB", s[16:64])
-
- elif version == 5 and bits == 8 and planes == 1:
- mode = rawmode = "L"
- # FIXME: hey, this doesn't work with the incremental loader !!!
- self.fp.seek(-769, io.SEEK_END)
- s = self.fp.read(769)
- if len(s) == 769 and s[0] == 12:
- # check if the palette is linear greyscale
- for i in range(256):
- if s[i * 3 + 1 : i * 3 + 4] != o8(i) * 3:
- mode = rawmode = "P"
- break
- if mode == "P":
- self.palette = ImagePalette.raw("RGB", s[1:])
- self.fp.seek(128)
-
- elif version == 5 and bits == 8 and planes == 3:
- mode = "RGB"
- rawmode = "RGB;L"
-
- else:
- raise OSError("unknown PCX mode")
-
- self.mode = mode
- self._size = bbox[2] - bbox[0], bbox[3] - bbox[1]
-
- # Don't trust the passed in stride.
- # Calculate the approximate position for ourselves.
- # CVE-2020-35653
- stride = (self._size[0] * bits + 7) // 8
-
- # While the specification states that this must be even,
- # not all images follow this
- if provided_stride != stride:
- stride += stride % 2
-
- bbox = (0, 0) + self.size
- logger.debug("size: %sx%s", *self.size)
-
- self.tile = [("pcx", bbox, self.fp.tell(), (rawmode, planes * stride))]
-
-
-# --------------------------------------------------------------------
-# save PCX files
-
-
-SAVE = {
- # mode: (version, bits, planes, raw mode)
- "1": (2, 1, 1, "1"),
- "L": (5, 8, 1, "L"),
- "P": (5, 8, 1, "P"),
- "RGB": (5, 8, 3, "RGB;L"),
-}
-
-
-def _save(im, fp, filename):
-
- try:
- version, bits, planes, rawmode = SAVE[im.mode]
- except KeyError as e:
- raise ValueError(f"Cannot save {im.mode} images as PCX") from e
-
- # bytes per plane
- stride = (im.size[0] * bits + 7) // 8
- # stride should be even
- stride += stride % 2
- # Stride needs to be kept in sync with the PcxEncode.c version.
- # Ideally it should be passed in in the state, but the bytes value
- # gets overwritten.
-
- logger.debug(
- "PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d",
- im.size[0],
- bits,
- stride,
- )
-
- # under windows, we could determine the current screen size with
- # "Image.core.display_mode()[1]", but I think that's overkill...
-
- screen = im.size
-
- dpi = 100, 100
-
- # PCX header
- fp.write(
- o8(10)
- + o8(version)
- + o8(1)
- + o8(bits)
- + o16(0)
- + o16(0)
- + o16(im.size[0] - 1)
- + o16(im.size[1] - 1)
- + o16(dpi[0])
- + o16(dpi[1])
- + b"\0" * 24
- + b"\xFF" * 24
- + b"\0"
- + o8(planes)
- + o16(stride)
- + o16(1)
- + o16(screen[0])
- + o16(screen[1])
- + b"\0" * 54
- )
-
- assert fp.tell() == 128
-
- ImageFile._save(im, fp, [("pcx", (0, 0) + im.size, 0, (rawmode, bits * planes))])
-
- if im.mode == "P":
- # colour palette
- fp.write(o8(12))
- palette = im.im.getpalette("RGB", "RGB")
- palette += b"\x00" * (768 - len(palette))
- fp.write(palette) # 768 bytes
- elif im.mode == "L":
- # greyscale palette
- fp.write(o8(12))
- for i in range(256):
- fp.write(o8(i) * 3)
-
-
-# --------------------------------------------------------------------
-# registry
-
-
-Image.register_open(PcxImageFile.format, PcxImageFile, _accept)
-Image.register_save(PcxImageFile.format, _save)
-
-Image.register_extension(PcxImageFile.format, ".pcx")
-
-Image.register_mime(PcxImageFile.format, "image/x-pcx")
diff --git a/spaces/ashercn97/AsherTesting/extensions/openai/README.md b/spaces/ashercn97/AsherTesting/extensions/openai/README.md
deleted file mode 100644
index 7bbc1e8311322cc61d175fd1993818e5321c14e2..0000000000000000000000000000000000000000
--- a/spaces/ashercn97/AsherTesting/extensions/openai/README.md
+++ /dev/null
@@ -1,231 +0,0 @@
-# An OpenedAI API (openai like)
-
-This extension creates an API that works kind of like openai (i.e. api.openai.com).
-It's incomplete so far, but it may be functional enough for you.
-
-## Setup & installation
-
-Optional (for flask_cloudflared, embeddings):
-
-```
-pip3 install -r requirements.txt
-```
-
-It listens on tcp port 5001 by default. You can use the OPENEDAI_PORT environment variable to change this.
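-
-For example, to run it on a different port (the port number here is just an illustration):
-
-```
-OPENEDAI_PORT=5002
-```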
-
-Make sure you enable it in server launch parameters, it should include:
-
-```
---extensions openai
-```
-
-You can also use the ``--listen`` argument to make the server available on the network, and/or the ``--share`` argument to enable a public Cloudflare endpoint.
-
-To enable the basic image generation support (txt2img) set the environment variable SD_WEBUI_URL to point to your Stable Diffusion API ([Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
-
-For example:
-```
-SD_WEBUI_URL=http://127.0.0.1:7861
-```
-
-### Models
-
-This has been successfully tested with Alpaca, Koala, Vicuna, WizardLM and their variants, (ex. gpt4-x-alpaca, GPT4all-snoozy, stable-vicuna, wizard-vicuna, etc.) and many others. Models that have been trained for **Instruction Following** work best. If you test with other models please let me know how it goes. Less than satisfying results (so far) from: RWKV-4-Raven, llama, mpt-7b-instruct/chat.
-
-For best results across all API endpoints, a model like [vicuna-13b-v1.3-GPTQ](https://huggingface.co/TheBloke/vicuna-13b-v1.3-GPTQ), [stable-vicuna-13B-GPTQ](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ) or [airoboros-13B-gpt4-1.3-GPTQ](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.3-GPTQ) is a good start.
-
-For good results with the [Completions](https://platform.openai.com/docs/api-reference/completions) API endpoint, in addition to the above models, you can also try using a base model like [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) or Llama.
-
-For good results with the [ChatCompletions](https://platform.openai.com/docs/api-reference/chat) or [Edits](https://platform.openai.com/docs/api-reference/edits) API endpoints you can use almost any model trained for instruction following - within the limits of the model. Be sure that the proper instruction template is detected and loaded or the results will not be good.
-
-For the proper instruction format to be detected you need to have a matching model entry in your ```models/config.yaml``` file. Be sure to keep this file up to date.
-A matching instruction template file in the characters/instruction-following/ folder will be loaded and applied to format messages correctly for the model - this is critical for good results.
-
-For example, the Wizard-Vicuna family of models are trained with the Vicuna 1.1 format. In the models/config.yaml file there is this matching entry:
-
-```
-.*wizard.*vicuna:
- mode: 'instruct'
- instruction_template: 'Vicuna-v1.1'
-```
-
-This refers to ```characters/instruction-following/Vicuna-v1.1.yaml```, which looks like this:
-
-```
-user: "USER:"
-bot: "ASSISTANT:"
-turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"
-context: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"
-```
-
-For most common models this is already set up, but if you are using a new or uncommon model you may need to add a matching entry to models/config.yaml and possibly create your own instruction-following template to get the best results.
-
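-For example, a hypothetical entry for a custom Vicuna-style fine-tune could look like this (the model name pattern below is made up; the template name must match a file in characters/instruction-following/):
-
-```
-.*my-custom-vicuna.*:
-  mode: 'instruct'
-  instruction_template: 'Vicuna-v1.1'
-```
-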
-If you see this in your logs, it probably means that the correct format could not be loaded:
-```
-Warning: Loaded default instruction-following template for model.
-```
-
-### Embeddings (alpha)
-
-The embeddings endpoint requires ```sentence-transformers``` to be installed, but chat and completions will function without it loaded. The embeddings endpoint is currently using the HuggingFace model: ```sentence-transformers/all-mpnet-base-v2``` for embeddings. This produces 768 dimensional embeddings (the same as the text-davinci-002 embeddings), which is different from OpenAI's current default ```text-embedding-ada-002``` model which produces 1536 dimensional embeddings. The model is small-ish and fast-ish. This model and embedding size may change in the future.
-
-| model name | dimensions | input max tokens | speed | size | Avg. performance |
-| --- | --- | --- | --- | --- | --- |
-| text-embedding-ada-002 | 1536 | 8192| - | - | - |
-| text-davinci-002 | 768 | 2046 | - | - | - |
-| all-mpnet-base-v2 | 768 | 384 | 2800 | 420M | 63.3 |
-| all-MiniLM-L6-v2 | 384 | 256 | 14200 | 80M | 58.8 |
-
-In short, the all-MiniLM-L6-v2 model is 5x faster, uses about 5x less RAM and 2x less storage, and still offers good quality. Stats are from https://www.sbert.net/docs/pretrained_models.html. To change the model from the default you can set the environment variable OPENEDAI_EMBEDDING_MODEL, ex. "OPENEDAI_EMBEDDING_MODEL=all-MiniLM-L6-v2".
-
-Warning: You cannot mix embeddings from different models even if they have the same dimensions. They are not comparable.
-
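-As a quick sanity check of the endpoint, a minimal call with the official python client looks like this (the input text is just an illustration, and the model field is assumed to be informational only):
-
-```python
-import openai
-openai.api_key = "sk-111111111111111111111111111111111111111111111111"
-openai.api_base = "http://127.0.0.1:5001/v1"
-response = openai.Embedding.create(
-    model="all-mpnet-base-v2",
-    input="The quick brown fox jumps over the lazy dog.",
-)
-print(len(response['data'][0]['embedding']))  # 768 with the default all-mpnet-base-v2
-```
-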
-### Client Application Setup
-
-
-Almost everything you use it with will require you to set a dummy OpenAI API key environment variable.
-
-With the [official python openai client](https://github.com/openai/openai-python), you can set the OPENAI_API_BASE environment variable before you import the openai module, like so:
-
-```
-OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-
-If needed, replace 127.0.0.1 with the IP/port of your server.
-
-If using .env files to save the OPENAI_API_BASE and OPENAI_API_KEY variables, you can ensure compatibility by loading the .env file before loading the openai module, like so in python:
-
-```
-from dotenv import load_dotenv
-load_dotenv()
-import openai
-```
-
-With the [official Node.js openai client](https://github.com/openai/openai-node) it is slightly more complex because the environment variables are not used by default, so small source code changes may be required to use the environment variables, like so:
-
-```
-const openai = OpenAI(Configuration({
- apiKey: process.env.OPENAI_API_KEY,
- basePath: process.env.OPENAI_API_BASE,
-}));
-```
-
-For apps made with the [chatgpt-api Node.js client library](https://github.com/transitive-bullshit/chatgpt-api):
-
-```
-const api = new ChatGPTAPI({
- apiKey: process.env.OPENAI_API_KEY,
- apiBaseUrl: process.env.OPENAI_API_BASE,
-})
-```
-
-## API Documentation & Examples
-
-The OpenAI API is well documented, you can view the documentation here: https://platform.openai.com/docs/api-reference
-
-Examples of how to use the Completions API in Python can be found here: https://platform.openai.com/examples
-Unfortunately, not all of them will work with all models; see the notes on Models above for how to get the best results.
-
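-For reference, a minimal Completions call through this API looks like the following (the prompt text is just an illustration; as noted further down, the model name is effectively ignored):
-
-```python
-import openai
-openai.api_key = "sk-111111111111111111111111111111111111111111111111"
-openai.api_base = "http://127.0.0.1:5001/v1"
-response = openai.Completion.create(
-    model="x",
-    prompt="Q: What is the capital of France?\nA:",
-    max_tokens=32,
-    temperature=0.7,
-)
-print(response['choices'][0]['text'])
-```
-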
-Here is a simple python example of how you can use the Edit endpoint as a translator.
-
-```python
-import openai
-response = openai.Edit.create(
- model="x",
- instruction="Translate this into French",
- input="Our mission is to ensure that artificial general intelligence benefits all of humanity.",
-)
-print(response['choices'][0]['text'])
-# Sample Output:
-# Notre mission est de garantir que l'intelligence artificielle généralisée profite à tous les membres de l'humanité.
-```
-
-
-
-## Compatibility & not so compatibility
-
-| API endpoint | tested with | notes |
-| --- | --- | --- |
-| /v1/models | openai.Model.list() | Lists models, Currently loaded model first, plus some compatibility options |
-| /v1/models/{id} | openai.Model.get() | returns whatever you ask for, model does nothing yet anyways |
-| /v1/text_completion | openai.Completion.create() | the most tested, only supports single string input so far, variable quality based on the model |
-| /v1/chat/completions | openai.ChatCompletion.create() | Quality depends a lot on the model |
-| /v1/edits | openai.Edit.create() | Works the best of all, perfect for instruction following models |
-| /v1/images/generations | openai.Image.create() | Bare bones, no model configuration, response_format='b64_json' only. |
-| /v1/embeddings | openai.Embedding.create() | Using Sentence Transformer, dimensions are different and may never be directly comparable to openai embeddings. |
-| /v1/moderations | openai.Moderation.create() | does nothing. successfully. |
-| /v1/completions | openai api completions.create | Legacy endpoint (v0.25) |
-| /v1/engines/*/embeddings | python-openai v0.25 | Legacy endpoint |
-| /v1/engines/*/generate | openai engines.generate | Legacy endpoint |
-| /v1/engines | openai engines.list | Legacy Lists models |
-| /v1/engines/{model_name} | openai engines.get -i {model_name} | You can use this legacy endpoint to load models via the api |
-| /v1/images/edits | openai.Image.create_edit() | not yet supported |
-| /v1/images/variations | openai.Image.create_variation() | not yet supported |
-| /v1/audio/\* | openai.Audio.\* | not yet supported |
-| /v1/files\* | openai.Files.\* | not yet supported |
-| /v1/fine-tunes\* | openai.FineTune.\* | not yet supported |
-| /v1/search | openai.search, engines.search | not yet supported |
-
-The model name setting is ignored in completions, but you may need to adjust the maximum token length to fit the model (i.e. set to <2048 tokens instead of 4096, 8k, etc). To mitigate some of this, the max_tokens value is halved until it is less than truncation_length for the model (typically 2k).
-
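-As a rough illustration of that halving behaviour (this is only a sketch, not the extension's actual code; the 2048 default just mirrors the typical truncation length mentioned above):
-
-```python
-def fit_max_tokens(max_tokens: int, truncation_length: int = 2048) -> int:
-    # Halve max_tokens until it is less than the model's truncation_length
-    while max_tokens >= truncation_length:
-        max_tokens //= 2
-    return max_tokens
-
-print(fit_max_tokens(4096))  # 4096 -> 2048 -> 1024
-```
-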
-Streaming, temperature, top_p, max_tokens, stop, should all work as expected, but not all parameters are mapped correctly.
-
-Some hacky mappings:
-
-| OpenAI | text-generation-webui | note |
-| --- | --- | --- |
-| frequency_penalty | encoder_repetition_penalty | this seems to operate with a different scale and defaults, I tried to scale it based on range & defaults, but the results are terrible. hardcoded to 1.18 until there is a better way |
-| presence_penalty | repetition_penalty | same issues as frequency_penalty, hardcoded to 1.0 |
-| best_of | top_k | default is 1 |
-| stop | custom_stopping_strings | this is also stuffed with ['\n###', "\n{user prompt}", "{user prompt}" ] for good measure. |
-| n | 1 | variations are not supported yet. |
-| 1 | num_beams | hardcoded to 1 |
-| 1.0 | typical_p | hardcoded to 1.0 |
-| max_tokens | max_new_tokens | For Text Completions max_tokens is set smaller than the truncation_length minus the prompt length. This can result in no output being generated if the prompt is too large. For ChatCompletions, the older chat messages may be dropped to fit the max_new_tokens requested |
-| logprobs | - | not supported yet |
-| logit_bias | - | not supported yet |
-| messages.name | - | not supported yet |
-| user | - | not supported yet |
-| functions/function_call | - | function calls are not supported yet |
-
-Defaults are mostly taken from openai, so they are different from the webui defaults. I use the openai defaults where I can and try to scale them to the webui defaults with the same intent.
-
-### Applications
-
-Almost everything needs the OPENAI_API_KEY environment variable set, for example:
-```
-OPENAI_API_KEY=sk-111111111111111111111111111111111111111111111111
-```
-Some apps are picky about key format, but 'dummy' or 'sk-dummy' also work in most cases.
-Most applications will work if you also set:
-```
-OPENAI_API_BASE=http://127.0.0.1:5001/v1
-```
-but there are some exceptions.
-
-| Compatibility | Application/Library | url | notes / setting |
-| --- | --- | --- | --- |
-| ✅❌ | openai-python (v0.25+) | https://github.com/openai/openai-python | only the endpoints from above are working. OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | openai-node | https://github.com/openai/openai-node | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅❌ | chatgpt-api | https://github.com/transitive-bullshit/chatgpt-api | only the endpoints from above are working. environment variables don't work by default, but can be configured (see above) |
-| ✅ | anse | https://github.com/anse-app/anse | API Key & URL configurable in UI |
-| ✅ | shell_gpt | https://github.com/TheR1D/shell_gpt | OPENAI_API_HOST=http://127.0.0.1:5001 |
-| ✅ | gpt-shell | https://github.com/jla/gpt-shell | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅ | gpt-discord-bot | https://github.com/openai/gpt-discord-bot | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅ | OpenAI for Notepad++ | https://github.com/Krazal/nppopenai | api_url=http://127.0.0.1:5001 in the config file, or environment variables |
-| ✅ | vscode-openai | https://marketplace.visualstudio.com/items?itemName=AndrewButson.vscode-openai | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ✅❌ | langchain | https://github.com/hwchase17/langchain | OPENAI_API_BASE=http://127.0.0.1:5001/v1 even with a good 30B-4bit model the result is poor so far. It assumes zero shot python/json coding. Some model tailored prompt formatting improves results greatly. |
-| ✅❌ | Auto-GPT | https://github.com/Significant-Gravitas/Auto-GPT | OPENAI_API_BASE=http://127.0.0.1:5001/v1 Same issues as langchain. Also assumes a 4k+ context |
-| ✅❌ | babyagi | https://github.com/yoheinakajima/babyagi | OPENAI_API_BASE=http://127.0.0.1:5001/v1 |
-| ❌ | guidance | https://github.com/microsoft/guidance | logit_bias and logprobs not yet supported |
-
-## Future plans
-* model changing, esp. something for swapping loras or embedding models
-* consider switching to FastAPI + starlette for SSE (openai SSE seems non-standard)
-
-## Bugs? Feedback? Comments? Pull requests?
-
-To enable debugging and get copious output you can set the OPENEDAI_DEBUG=1 environment variable.
-
-All are appreciated. Please @matatonic and I'll try to get back to you as soon as possible.
diff --git a/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py b/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py
deleted file mode 100644
index 79260ee986f860d85ee2d017eb241f18d46296f4..0000000000000000000000000000000000000000
--- a/spaces/ashutosh1919/quantum-perceptron/quantum_perceptron/utils/plot_utils.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import numpy as np
-import matplotlib.pyplot as plt
-from quantum_perceptron.utils.data_utils import (
- get_bin_int,
- assert_bits,
- assert_negative
-)
-
-
-def get_img_from_data(data: int, num_qubits: int) -> np.ndarray:
- """
- Get n x n matrix representing the image of the data where n is
- num_qubits.
-
- Args:
- data: `int` representing data value
- (corresponding to the input or weight vector)
- num_qubits: `int` representing number of qubits.
-
- Returns: Image in form of `np.ndarray`.
- """
- assert_negative(data)
- assert_bits(data, num_qubits)
- bin_str = get_bin_int(data, num_qubits)
- img = np.zeros((np.power(2, num_qubits)))
-
- for i, bit in enumerate(bin_str):
- if bit == '0':
- img[i] = 255
-
- return img.reshape((num_qubits, num_qubits))
-
-
-def plot_img_from_data(data: int, num_qubits: int):
- """
- Plot image from data.
- """
- img = get_img_from_data(data, num_qubits)
- ax = plt.imshow(img, cmap='gray')
- ax.axes.xaxis.set_visible(False)
- ax.axes.yaxis.set_visible(False)
diff --git a/spaces/auto-academic/auto-draft/wrapper.py b/spaces/auto-academic/auto-draft/wrapper.py
deleted file mode 100644
index 8a38c82f42c53d90084c5acc5f10fcc0a2f09918..0000000000000000000000000000000000000000
--- a/spaces/auto-academic/auto-draft/wrapper.py
+++ /dev/null
@@ -1,57 +0,0 @@
-"""
-This script is used to wrap all generation methods together.
-
-todo:
- A worker keeps running on the server and monitors the Amazon SQS queue. Once it receives a new message, it does the following:
- Download the corresponding configuration files on S3.
- Change Task status from Pending to Running.
- Call `generator_wrapper` and wait for the outputs.
- If `generator_wrapper` returns results:
- evaluate the results; compile it; upload results to S3 ... Change Task status from Running to Completed.
- If anything goes wrong, raise Error.
- If `generator_wrapper` returns nothing, times out, or raises any error:
- Change Task status from Running to Failed.
-"""
-from auto_generators import generate_draft
-from utils.file_operations import make_archive
-import yaml
-import uuid
-
-
-def remove_special_characters(s):
- return ''.join(c for c in s if c.isalnum() or c.isspace() or c == ',')
-
-
-def generator_wrapper(config):
- if not isinstance(config, dict):
- with open(config, "r") as file:
- config = yaml.safe_load(file)
- title = config["paper"]["title"]
- generator = config["generator"]
- if generator == "auto_draft":
- folder = generate_draft(title, config["paper"]["description"],
- tldr=config["references"]["tldr"],
- max_kw_refs=config["references"]["max_kw_refs"],
- refs=config["references"]["refs"],
- max_tokens_ref=config["references"]["max_tokens_ref"],
- knowledge_database=config["domain_knowledge"]["knowledge_database"],
- max_tokens_kd=config["domain_knowledge"]["max_tokens_kd"],
- query_counts=config["domain_knowledge"]["query_counts"],
- sections=config["output"]["selected_sections"],
- model=config["output"]["model"],
- template=config["output"]["template"],
- prompts_mode=config["output"]["prompts_mode"],
- )
- else:
- raise NotImplementedError(f"The generator {generator} has not been supported yet.")
- # todo: post processing: translate to Chinese, compile PDF ...
- filename = remove_special_characters(title).replace(" ", "_") + uuid.uuid1().hex + ".zip"
- return make_archive(folder, filename)
-
-
-if __name__ == "__main__":
- pass
- # with open("configurations/default.yaml", 'r') as file:
- # config = yaml.safe_load(file)
- # print(config)
- # generator_wrapper(config)
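-
-
-# The todo block at the top of this file describes an SQS-driven worker. The function
-# below is only a rough sketch of that loop using boto3; the queue URL, bucket name,
-# S3 key layout and the update_task_status() helper are hypothetical placeholders, and
-# the archive path returned by generator_wrapper() is assumed rather than verified.
-def _sqs_worker_sketch(queue_url: str, bucket: str):
-    import boto3  # assumed extra dependency, deliberately not imported at module level
-
-    sqs = boto3.client("sqs")
-    s3 = boto3.client("s3")
-    while True:
-        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
-        for msg in resp.get("Messages", []):
-            task_id = msg["Body"]
-            local_config = f"/tmp/{task_id}.yaml"
-            s3.download_file(bucket, f"{task_id}/config.yaml", local_config)
-            # update_task_status(task_id, "Running")  # hypothetical status helper
-            try:
-                zip_path = generator_wrapper(local_config)
-                s3.upload_file(zip_path, bucket, f"{task_id}/{zip_path}")
-                # update_task_status(task_id, "Completed")
-            except Exception:
-                # update_task_status(task_id, "Failed")
-                raise
-            finally:
-                sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])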
diff --git a/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py b/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py
deleted file mode 100644
index 98cae0679185e2142f3cd3c7bdf35ab67640d5b2..0000000000000000000000000000000000000000
--- a/spaces/avans06/whisper-webui-translate/src/whisper/abstractWhisperContainer.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import abc
-from typing import Any, Callable, List
-
-from src.config import ModelConfig, VadInitialPromptMode
-
-from src.hooks.progressListener import ProgressListener
-from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache
-from src.prompts.abstractPromptStrategy import AbstractPromptStrategy
-
-class AbstractWhisperCallback:
- def __init__(self):
- pass
-
- @abc.abstractmethod
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- """
- Perform the transcription of the given audio file or data.
-
- Parameters
- ----------
- audio: Union[str, np.ndarray, torch.Tensor]
- The audio file to transcribe, or the audio data as a numpy array or torch tensor.
- segment_index: int
- The index of the audio segment being transcribed.
- prompt: str
- The prompt to condition this segment's transcription on.
- detected_language: str
- The language that was detected for the audio, if any.
- progress_listener: ProgressListener
- A callback to receive progress updates.
- """
- raise NotImplementedError()
-
-class LambdaWhisperCallback(AbstractWhisperCallback):
- def __init__(self, callback_lambda: Callable[[Any, int, str, str, ProgressListener], None]):
- super().__init__()
- self.callback_lambda = callback_lambda
-
- def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None):
- return self.callback_lambda(audio, segment_index, prompt, detected_language, progress_listener)
-
-class AbstractWhisperContainer:
- def __init__(self, model_name: str, device: str = None, compute_type: str = "float16",
- download_root: str = None,
- cache: ModelCache = None, models: List[ModelConfig] = []):
- self.model_name = model_name
- self.device = device
- self.compute_type = compute_type
- self.download_root = download_root
- self.cache = cache
-
- # Will be created on demand
- self.model = None
-
- # List of known models
- self.models = models
-
- def get_model(self):
- if self.model is None:
-
- if (self.cache is None):
- self.model = self._create_model()
- else:
- model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '')
- self.model = self.cache.get(model_key, self._create_model)
- return self.model
-
- @abc.abstractmethod
- def _create_model(self):
- raise NotImplementedError()
-
- def ensure_downloaded(self):
- pass
-
- @abc.abstractmethod
- def create_callback(self, language: str = None, task: str = None,
- prompt_strategy: AbstractPromptStrategy = None,
- **decodeOptions: dict) -> AbstractWhisperCallback:
- """
- Create a WhisperCallback object that can be used to transcribe audio files.
-
- Parameters
- ----------
- language: str
- The target language of the transcription. If not specified, the language will be inferred from the audio content.
- task: str
- The task - either translate or transcribe.
- prompt_strategy: AbstractPromptStrategy
- The prompt strategy to use for the transcription.
- decodeOptions: dict
- Additional options to pass to the decoder. Must be pickleable.
-
- Returns
- -------
- A WhisperCallback object.
- """
- raise NotImplementedError()
-
- # This is required for multiprocessing
- def __getstate__(self):
- return {
- "model_name": self.model_name,
- "device": self.device,
- "download_root": self.download_root,
- "models": self.models,
- "compute_type": self.compute_type
- }
-
- def __setstate__(self, state):
- self.model_name = state["model_name"]
- self.device = state["device"]
- self.download_root = state["download_root"]
- self.models = state["models"]
- self.compute_type = state["compute_type"]
- self.model = None
- # Depickled objects must use the global cache
- self.cache = GLOBAL_MODEL_CACHE
\ No newline at end of file
diff --git a/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md b/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md
deleted file mode 100644
index 61fd6ce1c10809ce663225fd4c494552fc0eaca9..0000000000000000000000000000000000000000
--- a/spaces/awacke1/ASR-SOTA-NvidiaSTTMozilla/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🗣️Live ASR Speech Recognition Gradio🧠💾
-emoji: 🗣️Live🧠
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.5
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md b/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md
deleted file mode 100644
index dd9c01cf5d8c40469365b3973cc7d36108917a09..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Streamlit-Google-Maps-Minnesota/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 🏥 Minnesota Medical Centers 🌳
-emoji: 🏥🌳
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.28.0
-app_file: app.py
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js
deleted file mode 100644
index 3028fbc330c9971ee9903afae8c4ba4e2520cdc8..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/FilmShader.js
+++ /dev/null
@@ -1,104 +0,0 @@
-/**
- * @author alteredq / http://alteredqualia.com/
- *
- * Film grain & scanlines shader
- *
- * - ported from HLSL to WebGL / GLSL
- * http://www.truevision3d.com/forums/showcase/staticnoise_colorblackwhite_scanline_shaders-t18698.0.html
- *
- * Screen Space Static Postprocessor
- *
- * Produces an analogue noise overlay similar to a film grain / TV static
- *
- * Original implementation and noise algorithm
- * Pat 'Hawthorne' Shearon
- *
- * Optimized scanlines + noise version with intensity scaling
- * Georg 'Leviathan' Steinrohder
- *
- * This version is provided under a Creative Commons Attribution 3.0 License
- * http://creativecommons.org/licenses/by/3.0/
- */
-
-THREE.FilmShader = {
-
- uniforms: {
-
- "tDiffuse": { value: null },
- "time": { value: 0.0 },
- "nIntensity": { value: 0.5 },
- "sIntensity": { value: 0.05 },
- "sCount": { value: 4096 },
- "grayscale": { value: 1 }
-
- },
-
- vertexShader: [
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vUv = uv;",
- "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
-
- "}"
-
- ].join( "\n" ),
-
- fragmentShader: [
-
- "#include ",
-
- // control parameter
- "uniform float time;",
-
- "uniform bool grayscale;",
-
- // noise effect intensity value (0 = no effect, 1 = full effect)
- "uniform float nIntensity;",
-
- // scanlines effect intensity value (0 = no effect, 1 = full effect)
- "uniform float sIntensity;",
-
- // scanlines effect count value (0 = no effect, 4096 = full effect)
- "uniform float sCount;",
-
- "uniform sampler2D tDiffuse;",
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- // sample the source
- "vec4 cTextureScreen = texture2D( tDiffuse, vUv );",
-
- // make some noise
- "float dx = rand( vUv + time );",
-
- // add noise
- "vec3 cResult = cTextureScreen.rgb + cTextureScreen.rgb * clamp( 0.1 + dx, 0.0, 1.0 );",
-
- // get us a sine and cosine
- "vec2 sc = vec2( sin( vUv.y * sCount ), cos( vUv.y * sCount ) );",
-
- // add scanlines
- "cResult += cTextureScreen.rgb * vec3( sc.x, sc.y, sc.x ) * sIntensity;",
-
- // interpolate between source and result by intensity
- "cResult = cTextureScreen.rgb + clamp( nIntensity, 0.0,1.0 ) * ( cResult - cTextureScreen.rgb );",
-
- // convert to grayscale if desired
- "if( grayscale ) {",
-
- "cResult = vec3( cResult.r * 0.3 + cResult.g * 0.59 + cResult.b * 0.11 );",
-
- "}",
-
- "gl_FragColor = vec4( cResult, cTextureScreen.a );",
-
- "}"
-
- ].join( "\n" )
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js
deleted file mode 100644
index 4c2a373fba16b5702ce657f3130d634dcdeaafb5..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/GammaCorrectionShader.js
+++ /dev/null
@@ -1,45 +0,0 @@
-/**
- * @author WestLangley / http://github.com/WestLangley
- *
- * Gamma Correction Shader
- * http://en.wikipedia.org/wiki/gamma_correction
- */
-
-THREE.GammaCorrectionShader = {
-
- uniforms: {
-
- "tDiffuse": { value: null }
-
- },
-
- vertexShader: [
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vUv = uv;",
- "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
-
- "}"
-
- ].join( "\n" ),
-
- fragmentShader: [
-
- "uniform sampler2D tDiffuse;",
-
- "varying vec2 vUv;",
-
- "void main() {",
-
- "vec4 tex = texture2D( tDiffuse, vec2( vUv.x, vUv.y ) );",
-
- "gl_FragColor = LinearToGamma( tex, float( GAMMA_FACTOR ) );",
-
- "}"
-
- ].join( "\n" )
-
-};
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts
deleted file mode 100644
index ded77c619d2248b2c7e3a9650a2e9882c188e63e..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/ShapeUtils.d.ts
+++ /dev/null
@@ -1,11 +0,0 @@
-interface Vec2 {
- x: number;
- y: number;
-}
-
-export namespace ShapeUtils {
- export function area(contour: Vec2[]): number;
- export function triangulate(contour: Vec2[], indices: boolean): number[];
- export function triangulateShape(contour: Vec2[], holes: Vec2[]): number[][];
- export function isClockWise(pts: Vec2[]): boolean;
-}
diff --git a/spaces/bhn4477/Car_orientation/README.md b/spaces/bhn4477/Car_orientation/README.md
deleted file mode 100644
index 5da8af6c0cafb10692069d7c25fa8c9cafbf3bfa..0000000000000000000000000000000000000000
--- a/spaces/bhn4477/Car_orientation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Car Orientation
-emoji: 💩
-colorFrom: green
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md b/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md
deleted file mode 100644
index 345a5cb78ca44ab002cea8d22a56a385a9c62fdf..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Crack BEST Agisoft PhotoScan Professional 1.4.3 Build 6529.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
Agisoft Metashape Professional 1.8.4 Crack It is a software that may assist people with making 3D photographs from in case two photos, as they have an element which is crucial for reconstruction. Agisoft Metashape Download can help thousands of photos, nevertheless all procedures are transported out in your area, with no need to transfer data from your enterprise. The image positioning process, the system queries for common points and finds them, as the geometry developing method, that is depending on the approximate camera practices, displays the images as 3d polygon works. Once you may have created the geometry of an item. effortlessly undertake many designs, that may be used for orthophoto missions.
-
Furthermore, it is really a very easy-to-understand program for all user levels. Professionals, as well as beginners, can efficiently utilize this tool to produce desired 3D content. The Agisoft PhotoScan Cracked with License Code 2022 comes with everything required for professional-grade image editing. Such materials can be utilized in a diverse market field, by the invention of fits into this look of products for civil and design framework. This might find out the projection of the version at the top and also build up a matrix of peaks. This Agisoft PhotoScan pro 2020 Crack supports all the text file formats including JPG, TIF, PNG, BMP, EXR, PPM, MPO, and more.
-
CRACK Agisoft PhotoScan Professional 1.4.3 Build 6529
Agisoft PhotoScan Crack Build 14575 allows you to make multiple types of the 3D geometry from the image. This latest version of PhotoScan comes with enhanced features. Along with some new and improved tools you will be able to create 3D models more easily. The application may provide you with the possibility to select the building material. For example, the wood, concrete, etc. And then you can modify all the models' properties and sets of parameters. In addition, Agisoft PhotoScan 13 Crack with License Code 2015 offers you the ability to add key information to a 3D model. Additionally, you can distinguish between the object and the background. Besides, it permits you to create the unique 3D mesh with autogen frames. It's also possible to configure the included angle when you work with the scene. Along with it you may create a 3D model of the camera. You are able to select the default settings, such as the exposure, the mirror, the focal length and so on. Hence, you will be able to save the projects into standard formats.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/bipin/multipurpose-ai/README.md b/spaces/bipin/multipurpose-ai/README.md
deleted file mode 100644
index 81990d3f92f72c3e1b324db725981cebd1801a0a..0000000000000000000000000000000000000000
--- a/spaces/bipin/multipurpose-ai/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Multipurpose Ai
-emoji: 😻
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/bluuuuuuuu/test02/Dockerfile b/spaces/bluuuuuuuu/test02/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/bluuuuuuuu/test02/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/boomsss/gamedayspx/model_day.py b/spaces/boomsss/gamedayspx/model_day.py
deleted file mode 100644
index 26a36127ab48f55285a91ca49f655e98e16eb960..0000000000000000000000000000000000000000
--- a/spaces/boomsss/gamedayspx/model_day.py
+++ /dev/null
@@ -1,434 +0,0 @@
-import streamlit as st
-import pandas as pd
-import pandas_datareader as pdr
-import numpy as np
-import yfinance as yf
-import json
-import requests
-from bs4 import BeautifulSoup
-from typing import List
-import xgboost as xgb
-from tqdm import tqdm
-from sklearn import linear_model
-import joblib
-import os
-from sklearn.metrics import roc_auc_score, precision_score, recall_score
-import datetime
-from pandas.tseries.offsets import BDay
-import lightgbm as lgb
-
-def walk_forward_validation(df, target_column, num_training_rows, num_periods):
-
- # Create the regression model (an XGBRegressor alternative is left commented out below)
- # model = xgb.XGBRegressor(n_estimators=100, objective='reg:squarederror', random_state = 42)
- model = linear_model.LinearRegression()
-
- overall_results = []
- # Iterate over the rows in the DataFrame, one step at a time
- for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),desc='LR Model'):
- # Split the data into training and test sets
- X_train = df.drop(target_column, axis=1).iloc[:i]
- y_train = df[target_column].iloc[:i]
- X_test = df.drop(target_column, axis=1).iloc[i:i+num_periods]
- y_test = df[target_column].iloc[i:i+num_periods]
-
- # Fit the model to the training data
- model.fit(X_train, y_train)
-
- # Make a prediction on the test data
- predictions = model.predict(X_test)
-
- # Create a DataFrame to store the true and predicted values
- result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index)
-
- overall_results.append(result_df)
-
- df_results = pd.concat(overall_results)
- # model.save_model('model_lr.bin')
- # Return the true and predicted values, and fitted model
- return df_results, model
-
-model_cols = [
- 'BigNewsDay',
- 'Quarter',
- 'Perf5Day',
- 'Perf5Day_n1',
- 'DaysGreen',
- 'DaysRed',
- 'CurrentGap',
- 'RangePct',
- 'RangePct_n1',
- 'RangePct_n2',
- 'OHLC4_VIX',
- 'OHLC4_VIX_n1',
- 'OHLC4_VIX_n2',
- 'VIXOpen',
- 'VVIXOpen',
- 'OpenL1',
- 'OpenL2',
- 'OpenH1',
- 'OpenH2',
- 'L1TouchPct',
- 'L2TouchPct',
- 'H1TouchPct',
- 'H2TouchPct',
- 'L1BreakPct',
- 'L2BreakPct',
- 'H1BreakPct',
- 'H2BreakPct',
- 'H1BreakTouchPct',
- 'H2BreakTouchPct',
- 'L1BreakTouchPct',
- 'L2BreakTouchPct'
-]
-
-def walk_forward_validation_seq(df, target_column_clf, target_column_regr, num_training_rows, num_periods):
-
- # Create and run the regression model to get its target
- res, model1 = walk_forward_validation(df.drop(columns=[target_column_clf]).dropna(), target_column_regr, num_training_rows, num_periods)
- # joblib.dump(model1, 'model1.bin')
-
- # Merge the result df back on the df for feeding into the classifier
- for_merge = res[['Predicted']]
- for_merge.columns = ['RegrModelOut']
- for_merge['RegrModelOut'] = for_merge['RegrModelOut'] > 0
- df = df.merge(for_merge, left_index=True, right_index=True)
- df = df.drop(columns=[target_column_regr])
- df = df[model_cols + ['RegrModelOut', target_column_clf]]
-
- df[target_column_clf] = df[target_column_clf].astype(bool)
- df['RegrModelOut'] = df['RegrModelOut'].astype(bool)
-
- # Create the classifier (LightGBM; an XGBClassifier alternative is left commented out below)
- # model2 = xgb.XGBClassifier(n_estimators=10, random_state = 42)
- model2 = lgb.LGBMClassifier(n_estimators=10, random_state=42, verbosity=-1)
- # model = linear_model.LogisticRegression(max_iter=1500)
-
- overall_results = []
- # Iterate over the rows in the DataFrame, one step at a time
- for i in tqdm(range(num_training_rows, df.shape[0] - num_periods + 1),'CLF Model'):
- # Split the data into training and test sets
- X_train = df.drop(target_column_clf, axis=1).iloc[:i]
- y_train = df[target_column_clf].iloc[:i]
- X_test = df.drop(target_column_clf, axis=1).iloc[i:i+num_periods]
- y_test = df[target_column_clf].iloc[i:i+num_periods]
-
- # Fit the model to the training data
- model2.fit(X_train, y_train)
-
- # Make a prediction on the test data
- predictions = model2.predict_proba(X_test)[:,-1]
-
- # Create a DataFrame to store the true and predicted values
- result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index)
-
- overall_results.append(result_df)
-
- df_results = pd.concat(overall_results)
-
- # Calibrate Probabilities
- def get_quantiles(df, col_name, q):
- return df.groupby(pd.cut(df[col_name], q))['True'].mean()
-
- greenprobas = []
- meanprobas = []
- for i, pct in tqdm(enumerate(df_results['Predicted']), desc='Calibrating Probas'):
- try:
- df_q = get_quantiles(df_results.iloc[:i], 'Predicted', 7)
- for q in df_q.index:
- if q.left <= pct <= q.right:
- p = df_q[q]
- c = (q.left + q.right) / 2
- except:
- p = None
- c = None
-
- greenprobas.append(p)
- meanprobas.append(c)
-
- df_results['CalibPredicted'] = greenprobas
-
- return df_results, model1, model2
-
-def seq_predict_proba(df, trained_reg_model, trained_clf_model):
- regr_pred = trained_reg_model.predict(df)
- regr_pred = regr_pred > 0
- new_df = df.copy()
- new_df['RegrModelOut'] = regr_pred
- clf_pred_proba = trained_clf_model.predict_proba(new_df[model_cols + ['RegrModelOut']])[:,-1]
- return clf_pred_proba
-
-def get_data():
- # f = open('settings.json')
- # j = json.load(f)
- # API_KEY_FRED = j["API_KEY_FRED"]
-
- API_KEY_FRED = os.getenv('API_KEY_FRED')
-
- def parse_release_dates(release_id: str) -> List[str]:
- release_dates_url = f'https://api.stlouisfed.org/fred/release/dates?release_id={release_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}'
- r = requests.get(release_dates_url)
- text = r.text
- soup = BeautifulSoup(text, 'xml')
- dates = []
- for release_date_tag in soup.find_all('release_date', {'release_id': release_id}):
- dates.append(release_date_tag.text)
- return dates
-
- def parse_release_dates_obs(series_id: str) -> List[str]:
- obs_url = f'https://api.stlouisfed.org/fred/series/observations?series_id={series_id}&realtime_start=2015-01-01&include_release_dates_with_no_data=true&api_key={API_KEY_FRED}'
- r = requests.get(obs_url)
- text = r.text
- soup = BeautifulSoup(text, 'xml')
- observations = []
- for observation_tag in soup.find_all('observation'):
- date = observation_tag.get('date')
- value = observation_tag.get('value')
- observations.append((date, value))
- return observations
-
- econ_dfs = {}
-
- econ_tickers = [
- 'WALCL',
- 'NFCI',
- 'WRESBAL'
- ]
-
- for et in tqdm(econ_tickers, desc='getting econ tickers'):
- # p = parse_release_dates_obs(et)
- # df = pd.DataFrame(columns = ['ds',et], data = p)
- df = pdr.get_data_fred(et)
- df.index = df.index.rename('ds')
- # df.index = pd.to_datetime(df.index.rename('ds')).dt.tz_localize(None)
- # df['ds'] = pd.to_datetime(df['ds']).dt.tz_localize(None)
- econ_dfs[et] = df
-
- # walcl = pd.DataFrame(columns = ['ds','WALCL'], data = p)
- # walcl['ds'] = pd.to_datetime(walcl['ds']).dt.tz_localize(None)
-
- # nfci = pd.DataFrame(columns = ['ds','NFCI'], data = p2)
- # nfci['ds'] = pd.to_datetime(nfci['ds']).dt.tz_localize(None)
-
- release_ids = [
- "10", # "Consumer Price Index"
- "46", # "Producer Price Index"
- "50", # "Employment Situation"
- "53", # "Gross Domestic Product"
- "103", # "Discount Rate Meeting Minutes"
- "180", # "Unemployment Insurance Weekly Claims Report"
- "194", # "ADP National Employment Report"
- "323" # "Trimmed Mean PCE Inflation Rate"
- ]
-
- release_names = [
- "CPI",
- "PPI",
- "NFP",
- "GDP",
- "FOMC",
- "UNEMP",
- "ADP",
- "PCE"
- ]
-
- releases = {}
-
- for rid, n in tqdm(zip(release_ids, release_names), total = len(release_ids), desc='Getting release dates'):
- releases[rid] = {}
- releases[rid]['dates'] = parse_release_dates(rid)
- releases[rid]['name'] = n
-
- # Create a DF that has all dates with the name of the col as 1
- # Once merged on the main dataframe, days with econ events will be 1 or None. Fill NA with 0
- # This column serves as the true/false indicator of whether there was economic data released that day.
- for rid in tqdm(release_ids, desc='Making indicators'):
- releases[rid]['df'] = pd.DataFrame(
- index=releases[rid]['dates'],
- data={
- releases[rid]['name']: 1
- })
- releases[rid]['df'].index = pd.DatetimeIndex(releases[rid]['df'].index)
- # releases[rid]['df']['ds'] = pd.to_datetime(releases[rid]['df']['ds']).dt.tz_localize(None)
- # releases[rid]['df'] = releases[rid]['df'].set_index('ds')
-
- vix = yf.Ticker('^VIX')
- vvix = yf.Ticker('^VVIX')
- spx = yf.Ticker('^GSPC')
-
- prices_vix = vix.history(start='2018-07-01', interval='1d')
- prices_spx = spx.history(start='2018-07-01', interval='1d')
- prices_vvix = vvix.history(start='2018-07-01', interval='1d')
-
-    # Normalize the timezone-aware yfinance timestamps to plain calendar dates.
-    for prices in (prices_spx, prices_vix, prices_vvix):
-        prices['index'] = [str(x).split()[0] for x in prices.index]
-        prices['index'] = pd.to_datetime(prices['index']).dt.date
-        prices.index = prices['index']
-        prices.drop(columns='index', inplace=True)
-
- data = prices_spx.merge(prices_vix[['Open','High','Low','Close']], left_index=True, right_index=True, suffixes=['','_VIX'])
- data = data.merge(prices_vvix[['Open','High','Low','Close']], left_index=True, right_index=True, suffixes=['','_VVIX'])
- data.index = pd.DatetimeIndex(data.index)
-
- # Features
- data['PrevClose'] = data['Close'].shift(1)
- data['Perf5Day'] = data['Close'] > data['Close'].shift(5)
- data['Perf5Day_n1'] = data['Perf5Day'].shift(1).astype(bool)
- data['GreenDay'] = (data['Close'] > data['PrevClose']) * 1
- data['RedDay'] = (data['Close'] <= data['PrevClose']) * 1
- data['VIX5Day'] = data['Close_VIX'] > data['Close_VIX'].shift(5)
- data['VIX5Day_n1'] = data['VIX5Day'].shift(1).astype(bool)
- data['VIXOpen'] = data['Open_VIX'] > data['Close_VIX'].shift(1)
- data['VVIXOpen'] = data['Open_VVIX'] > data['Close_VVIX'].shift(1)
- data['VIXOpen'] = data['VIXOpen'].astype(bool)
- data['VVIXOpen'] = data['VVIXOpen'].astype(bool)
- data['Range'] = data[['Open','High']].max(axis=1) - data[['Low','Open']].min(axis=1)
- data['RangePct'] = data['Range'] / data['Close']
- data['VIXLevel'] = pd.qcut(data['Close_VIX'], 4)
- data['OHLC4_VIX'] = data[['Open_VIX','High_VIX','Low_VIX','Close_VIX']].mean(axis=1)
- data['OHLC4'] = data[['Open','High','Low','Close']].mean(axis=1)
- data['OHLC4_Trend'] = data['OHLC4'] > data['OHLC4'].shift(1)
- data['OHLC4_Trend_n1'] = data['OHLC4_Trend'].shift(1).astype(float)
- data['OHLC4_Trend_n2'] = data['OHLC4_Trend'].shift(2).astype(float)
- data['RangePct_n1'] = data['RangePct'].shift(1)
- data['RangePct_n2'] = data['RangePct'].shift(2)
- data['OHLC4_VIX_n1'] = data['OHLC4_VIX'].shift(1)
- data['OHLC4_VIX_n2'] = data['OHLC4_VIX'].shift(2)
- data['CurrentGap'] = ((data['Open'] - data['PrevClose']) / data['PrevClose']).shift(-1)
-    data['DayOfWeek'] = pd.to_datetime(data.index)
-    data['DayOfWeek'] = data['DayOfWeek'].dt.weekday  # day of week; recomputed below after the targets are built
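-    # Expected-move bands: H1/H2 (L1/L2) project the open up (down) by the 30-day average
-    # open-to-high (open-to-low) move, with the second band widened by one standard deviation.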
- data['up'] = 100 * (data['High'].shift(1) - data['Open'].shift(1)) / data['Close'].shift(1)
- data['upSD'] = data['up'].rolling(30).std(ddof=0)
- data['aveUp'] = data['up'].rolling(30).mean()
- data['H1'] = data['Open'] + (data['aveUp'] / 100) * data['Open']
- data['H2'] = data['Open'] + ((data['aveUp'] + data['upSD']) / 100) * data['Open']
- data['down'] = 100 * (data['Open'].shift(1) - data['Low'].shift(1)) / data['Close'].shift(1)
- data['downSD'] = data['down'].rolling(30).std(ddof=0)
- data['aveDown'] = data['down'].rolling(30).mean()
- data['L1'] = data['Open'] - (data['aveDown'] / 100) * data['Open']
-    data['L2'] = data['Open'] - ((data['aveDown'] + data['downSD']) / 100) * data['Open']  # use downSD (mirrors H2, which uses upSD)
- data['L1Touch'] = data['Low'] < data['L1']
- data['L2Touch'] = data['Low'] < data['L2']
- data['H1Touch'] = data['High'] > data['H1']
- data['H2Touch'] = data['High'] > data['H2']
- data['L1Break'] = data['Close'] < data['L1']
- data['L2Break'] = data['Close'] < data['L2']
- data['H1Break'] = data['Close'] > data['H1']
- data['H2Break'] = data['Close'] > data['H2']
- data['OpenL1'] = data['Open'] / data['L1']
- data['OpenL2'] = data['Open'] / data['L2']
- data['OpenH1'] = data['Open'] / data['H1']
- data['OpenH2'] = data['Open'] / data['H2']
-
- level_cols = [
- 'L1Touch',
- 'L2Touch',
- 'H1Touch',
- 'H2Touch',
- 'L1Break',
- 'L2Break',
- 'H1Break',
- 'H2Break'
- ]
-
- for col in level_cols:
- data[col+'Pct'] = data[col].rolling(100).mean()
-
- data['H1BreakTouchPct'] = data['H1Break'].rolling(100).sum() / data['H1Touch'].rolling(100).sum()
- data['H2BreakTouchPct'] = data['H2Break'].rolling(100).sum() / data['H2Touch'].rolling(100).sum()
- data['L1BreakTouchPct'] = data['L1Break'].rolling(100).sum() / data['L1Touch'].rolling(100).sum()
- data['L2BreakTouchPct'] = data['L2Break'].rolling(100).sum() / data['L2Touch'].rolling(100).sum()
-
-    # Regression target -- the next day's OHLC4 return relative to today's close
- data['Target'] = (data['OHLC4'] / data['PrevClose']) - 1
- data['Target'] = data['Target'].shift(-1)
- # data['Target'] = data['RangePct'].shift(-1)
-
- # Target for clf -- whether tomorrow will close above or below today's close
- data['Target_clf'] = data['Close'] > data['PrevClose']
- data['Target_clf'] = data['Target_clf'].shift(-1)
- data['DayOfWeek'] = pd.to_datetime(data.index)
- data['Quarter'] = data['DayOfWeek'].dt.quarter
- data['DayOfWeek'] = data['DayOfWeek'].dt.weekday
-
- for rid in tqdm(release_ids, desc='Merging econ data'):
- # Get the name of the release
- n = releases[rid]['name']
- # Merge the corresponding DF of the release
- data = data.merge(releases[rid]['df'], how = 'left', left_index=True, right_index=True)
- # Create a column that shifts the value in the merged column up by 1
- data[f'{n}_shift'] = data[n].shift(-1)
- # Fill the rest with zeroes
- data[n] = data[n].fillna(0)
- data[f'{n}_shift'] = data[f'{n}_shift'].fillna(0)
-
- data['BigNewsDay'] = data[[x for x in data.columns if '_shift' in x]].max(axis=1)
-
- def cumul_sum(col):
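-        # Streak counter: counts consecutive 1s and resets to zero whenever a 0 appears.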
- nums = []
- s = 0
- for x in col:
- if x == 1:
- s += 1
- elif x == 0:
- s = 0
- nums.append(s)
- return nums
-
- consec_green = cumul_sum(data['GreenDay'].values)
- consec_red = cumul_sum(data['RedDay'].values)
-
- data['DaysGreen'] = consec_green
- data['DaysRed'] = consec_red
-
- final_row = data.index[-2]
-
- exp_row = data.index[-1]
-
- df_final = data.loc[:final_row,
- [
- 'BigNewsDay',
- 'Quarter',
- 'Perf5Day',
- 'Perf5Day_n1',
- 'DaysGreen',
- 'DaysRed',
- 'CurrentGap',
- 'RangePct',
- 'RangePct_n1',
- 'RangePct_n2',
- 'OHLC4_VIX',
- 'OHLC4_VIX_n1',
- 'OHLC4_VIX_n2',
- 'VIXOpen',
- 'VVIXOpen',
- 'OpenL1',
- 'OpenL2',
- 'OpenH1',
- 'OpenH2',
- 'L1TouchPct',
- 'L2TouchPct',
- 'H1TouchPct',
- 'H2TouchPct',
- 'L1BreakPct',
- 'L2BreakPct',
- 'H1BreakPct',
- 'H2BreakPct',
- 'H1BreakTouchPct',
- 'H2BreakTouchPct',
- 'L1BreakTouchPct',
- 'L2BreakTouchPct',
- 'Target',
- 'Target_clf'
- ]]
- df_final = df_final.dropna(subset=['Target','Target_clf','Perf5Day_n1'])
- return data, df_final, final_row
\ No newline at end of file
diff --git a/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py b/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/brainblow/MusiCreator/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
-        n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
-            "Number of blocks for which to disable norm is invalid. " \
-            "It should be lower than or equal to the actual number of blocks in the network and greater than or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
-        n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
-        true_skip (bool): Whether to use true skip connection or a simple
-            (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
-            "Number of blocks for which to disable norm is invalid. " \
-            "It should be lower than or equal to the actual number of blocks in the network and greater than or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-        # Add optional final activation to the decoder (e.g. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h
deleted file mode 100644
index 03f4211003f42f601f0cfcf4a690f5da4a0a1f67..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h
+++ /dev/null
@@ -1,115 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-#include <ATen/ATen.h>
-
-namespace detectron2 {
-
-at::Tensor ROIAlignRotated_forward_cpu(
- const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio);
-
-at::Tensor ROIAlignRotated_backward_cpu(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio);
-
-#if defined(WITH_CUDA) || defined(WITH_HIP)
-at::Tensor ROIAlignRotated_forward_cuda(
- const at::Tensor& input,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int sampling_ratio);
-
-at::Tensor ROIAlignRotated_backward_cuda(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const float spatial_scale,
- const int pooled_height,
- const int pooled_width,
- const int batch_size,
- const int channels,
- const int height,
- const int width,
- const int sampling_ratio);
-#endif
-
-// Interface for Python
-inline at::Tensor ROIAlignRotated_forward(
- const at::Tensor& input,
- const at::Tensor& rois,
- const double spatial_scale,
- const int64_t pooled_height,
- const int64_t pooled_width,
- const int64_t sampling_ratio) {
- if (input.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return ROIAlignRotated_forward_cuda(
- input,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- sampling_ratio);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- return ROIAlignRotated_forward_cpu(
- input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
-}
-
-inline at::Tensor ROIAlignRotated_backward(
- const at::Tensor& grad,
- const at::Tensor& rois,
- const double spatial_scale,
- const int64_t pooled_height,
- const int64_t pooled_width,
- const int64_t batch_size,
- const int64_t channels,
- const int64_t height,
- const int64_t width,
- const int64_t sampling_ratio) {
- if (grad.is_cuda()) {
-#if defined(WITH_CUDA) || defined(WITH_HIP)
- return ROIAlignRotated_backward_cuda(
- grad,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- batch_size,
- channels,
- height,
- width,
- sampling_ratio);
-#else
- AT_ERROR("Detectron2 is not compiled with GPU support!");
-#endif
- }
- return ROIAlignRotated_backward_cpu(
- grad,
- rois,
- spatial_scale,
- pooled_height,
- pooled_width,
- batch_size,
- channels,
- height,
- width,
- sampling_ratio);
-}
-
-} // namespace detectron2
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py
deleted file mode 100644
index da53a4d25419f5de3252af664a7aca5551950f3a..0000000000000000000000000000000000000000
--- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/__init__.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-utils/initialization
-"""
-
-
-def notebook_init(verbose=True):
- # Check system software and hardware
- print('Checking setup...')
-
- import os
- import shutil
-
- from utils.general import check_requirements, emojis, is_colab
- from utils.torch_utils import select_device # imports
-
- check_requirements(('psutil', 'IPython'))
- import psutil
- from IPython import display # to display images and clear console output
-
- if is_colab():
- shutil.rmtree('/content/sample_data', ignore_errors=True) # remove colab /sample_data directory
-
- # System info
- if verbose:
- gb = 1 << 30 # bytes to GiB (1024 ** 3)
- ram = psutil.virtual_memory().total
- total, used, free = shutil.disk_usage("/")
- display.clear_output()
- s = f'({os.cpu_count()} CPUs, {ram / gb:.1f} GB RAM, {(total - free) / gb:.1f}/{total / gb:.1f} GB disk)'
- else:
- s = ''
-
- select_device(newline=False)
- print(emojis(f'Setup complete ✅ {s}'))
- return display
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py
deleted file mode 100644
index 70120031797c2493c0ce878c13c3fd3d5554c354..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageChops.py
+++ /dev/null
@@ -1,303 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# standard channel operations
-#
-# History:
-# 1996-03-24 fl Created
-# 1996-08-13 fl Added logical operations (for "1" images)
-# 2000-10-12 fl Added offset method (from Image.py)
-#
-# Copyright (c) 1997-2000 by Secret Labs AB
-# Copyright (c) 1996-2000 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-from . import Image
-
-
-def constant(image, value):
- """Fill a channel with a given grey level.
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- return Image.new("L", image.size, value)
-
-
-def duplicate(image):
- """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`.
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- return image.copy()
-
-
-def invert(image):
- """
- Invert an image (channel). ::
-
- out = MAX - image
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image.load()
- return image._new(image.im.chop_invert())
-
-
-def lighter(image1, image2):
- """
- Compares the two images, pixel by pixel, and returns a new image containing
- the lighter values. ::
-
- out = max(image1, image2)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_lighter(image2.im))
-
-
-def darker(image1, image2):
- """
- Compares the two images, pixel by pixel, and returns a new image containing
- the darker values. ::
-
- out = min(image1, image2)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_darker(image2.im))
-
-
-def difference(image1, image2):
- """
- Returns the absolute value of the pixel-by-pixel difference between the two
- images. ::
-
- out = abs(image1 - image2)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_difference(image2.im))
-
-
-def multiply(image1, image2):
- """
- Superimposes two images on top of each other.
-
- If you multiply an image with a solid black image, the result is black. If
- you multiply with a solid white image, the image is unaffected. ::
-
- out = image1 * image2 / MAX
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_multiply(image2.im))
-
-
-def screen(image1, image2):
- """
- Superimposes two inverted images on top of each other. ::
-
- out = MAX - ((MAX - image1) * (MAX - image2) / MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_screen(image2.im))
-
-
-def soft_light(image1, image2):
- """
- Superimposes two images on top of each other using the Soft Light algorithm
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_soft_light(image2.im))
-
-
-def hard_light(image1, image2):
- """
- Superimposes two images on top of each other using the Hard Light algorithm
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_hard_light(image2.im))
-
-
-def overlay(image1, image2):
- """
- Superimposes two images on top of each other using the Overlay algorithm
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_overlay(image2.im))
-
-
-def add(image1, image2, scale=1.0, offset=0):
- """
- Adds two images, dividing the result by scale and adding the
- offset. If omitted, scale defaults to 1.0, and offset to 0.0. ::
-
- out = ((image1 + image2) / scale + offset)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_add(image2.im, scale, offset))
-
-
-def subtract(image1, image2, scale=1.0, offset=0):
- """
- Subtracts two images, dividing the result by scale and adding the offset.
- If omitted, scale defaults to 1.0, and offset to 0.0. ::
-
- out = ((image1 - image2) / scale + offset)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_subtract(image2.im, scale, offset))
-
-
-def add_modulo(image1, image2):
- """Add two images, without clipping the result. ::
-
- out = ((image1 + image2) % MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_add_modulo(image2.im))
-
-
-def subtract_modulo(image1, image2):
- """Subtract two images, without clipping the result. ::
-
- out = ((image1 - image2) % MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_subtract_modulo(image2.im))
-
-
-def logical_and(image1, image2):
- """Logical AND between two images.
-
- Both of the images must have mode "1". If you would like to perform a
- logical AND on an image with a mode other than "1", try
- :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask
- as the second image. ::
-
- out = ((image1 and image2) % MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_and(image2.im))
-
-
-def logical_or(image1, image2):
- """Logical OR between two images.
-
- Both of the images must have mode "1". ::
-
- out = ((image1 or image2) % MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_or(image2.im))
-
-
-def logical_xor(image1, image2):
- """Logical XOR between two images.
-
- Both of the images must have mode "1". ::
-
- out = ((bool(image1) != bool(image2)) % MAX)
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- image1.load()
- image2.load()
- return image1._new(image1.im.chop_xor(image2.im))
-
-
-def blend(image1, image2, alpha):
- """Blend images using constant transparency weight. Alias for
- :py:func:`PIL.Image.blend`.
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- return Image.blend(image1, image2, alpha)
-
-
-def composite(image1, image2, mask):
- """Create composite using transparency mask. Alias for
- :py:func:`PIL.Image.composite`.
-
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- return Image.composite(image1, image2, mask)
-
-
-def offset(image, xoffset, yoffset=None):
- """Returns a copy of the image where data has been offset by the given
- distances. Data wraps around the edges. If ``yoffset`` is omitted, it
- is assumed to be equal to ``xoffset``.
-
- :param image: Input image.
- :param xoffset: The horizontal distance.
- :param yoffset: The vertical distance. If omitted, both
- distances are set to the same value.
- :rtype: :py:class:`~PIL.Image.Image`
- """
-
- if yoffset is None:
- yoffset = xoffset
- image.load()
- return image._new(image.im.offset(xoffset, yoffset))
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py
deleted file mode 100644
index d7bb62bd0d43f0f5f15e09e3cbb5b81f832af168..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/data/common.py
+++ /dev/null
@@ -1,293 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import copy
-import itertools
-import logging
-import numpy as np
-import pickle
-import random
-from typing import Callable, Union
-import torch.utils.data as data
-from torch.utils.data.sampler import Sampler
-
-from detectron2.utils.serialize import PicklableWrapper
-
-__all__ = ["MapDataset", "DatasetFromList", "AspectRatioGroupedDataset", "ToIterableDataset"]
-
-logger = logging.getLogger(__name__)
-
-
-def _shard_iterator_dataloader_worker(iterable):
- # Shard the iterable if we're currently inside pytorch dataloader worker.
- worker_info = data.get_worker_info()
- if worker_info is None or worker_info.num_workers == 1:
- # do nothing
- yield from iterable
- else:
- yield from itertools.islice(iterable, worker_info.id, None, worker_info.num_workers)
-
-
-class _MapIterableDataset(data.IterableDataset):
- """
- Map a function over elements in an IterableDataset.
-
- Similar to pytorch's MapIterDataPipe, but support filtering when map_func
- returns None.
-
- This class is not public-facing. Will be called by `MapDataset`.
- """
-
- def __init__(self, dataset, map_func):
- self._dataset = dataset
- self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work
-
- def __len__(self):
- return len(self._dataset)
-
- def __iter__(self):
- for x in map(self._map_func, self._dataset):
- if x is not None:
- yield x
-
-
-class MapDataset(data.Dataset):
- """
- Map a function over the elements in a dataset.
- """
-
- def __init__(self, dataset, map_func):
- """
- Args:
- dataset: a dataset where map function is applied. Can be either
- map-style or iterable dataset. When given an iterable dataset,
- the returned object will also be an iterable dataset.
- map_func: a callable which maps the element in dataset. map_func can
- return None to skip the data (e.g. in case of errors).
- How None is handled depends on the style of `dataset`.
- If `dataset` is map-style, it randomly tries other elements.
- If `dataset` is iterable, it skips the data and tries the next.
- """
- self._dataset = dataset
- self._map_func = PicklableWrapper(map_func) # wrap so that a lambda will work
-
- self._rng = random.Random(42)
- self._fallback_candidates = set(range(len(dataset)))
-
- def __new__(cls, dataset, map_func):
- is_iterable = isinstance(dataset, data.IterableDataset)
- if is_iterable:
- return _MapIterableDataset(dataset, map_func)
- else:
- return super().__new__(cls)
-
- def __getnewargs__(self):
- return self._dataset, self._map_func
-
- def __len__(self):
- return len(self._dataset)
-
- def __getitem__(self, idx):
- retry_count = 0
- cur_idx = int(idx)
-
- while True:
- data = self._map_func(self._dataset[cur_idx])
- if data is not None:
- self._fallback_candidates.add(cur_idx)
- return data
-
- # _map_func fails for this idx, use a random new index from the pool
- retry_count += 1
- self._fallback_candidates.discard(cur_idx)
- cur_idx = self._rng.sample(self._fallback_candidates, k=1)[0]
-
- if retry_count >= 3:
- logger = logging.getLogger(__name__)
- logger.warning(
- "Failed to apply `_map_func` for idx: {}, retry count: {}".format(
- idx, retry_count
- )
- )
-
-
-class NumpySerializedList(object):
- """
- A list-like object whose items are serialized and stored in a Numpy Array. When
- forking a process that has NumpySerializedList, subprocesses can read the same list
- without triggering copy-on-access, therefore they will share RAM for the list. This
- avoids the issue in https://github.com/pytorch/pytorch/issues/13246
- """
-
- def __init__(self, lst: list):
- self._lst = lst
-
- def _serialize(data):
- buffer = pickle.dumps(data, protocol=-1)
- return np.frombuffer(buffer, dtype=np.uint8)
-
- logger.info(
- "Serializing {} elements to byte tensors and concatenating them all ...".format(
- len(self._lst)
- )
- )
- self._lst = [_serialize(x) for x in self._lst]
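-        # _addr stores cumulative end offsets, so item i occupies _lst[_addr[i-1]:_addr[i]].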
- self._addr = np.asarray([len(x) for x in self._lst], dtype=np.int64)
- self._addr = np.cumsum(self._addr)
- self._lst = np.concatenate(self._lst)
- logger.info("Serialized dataset takes {:.2f} MiB".format(len(self._lst) / 1024**2))
-
- def __len__(self):
- return len(self._addr)
-
- def __getitem__(self, idx):
- start_addr = 0 if idx == 0 else self._addr[idx - 1].item()
- end_addr = self._addr[idx].item()
- bytes = memoryview(self._lst[start_addr:end_addr])
-
- # @lint-ignore PYTHONPICKLEISBAD
- return pickle.loads(bytes)
-
-
-_DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = NumpySerializedList
-
-
-@contextlib.contextmanager
-def set_default_dataset_from_list_serialize_method(new):
- """
- Context manager for using custom serialize function when creating DatasetFromList
- """
-
- global _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD
- orig = _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD
- _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = new
- yield
- _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD = orig
-
-
-class DatasetFromList(data.Dataset):
- """
- Wrap a list to a torch Dataset. It produces elements of the list as data.
- """
-
- def __init__(
- self,
- lst: list,
- copy: bool = True,
- serialize: Union[bool, Callable] = True,
- ):
- """
- Args:
- lst (list): a list which contains elements to produce.
- copy (bool): whether to deepcopy the element when producing it,
- so that the result can be modified in place without affecting the
- source in the list.
-            serialize (bool or callable): whether to serialize the storage to another
-                backend. If `True`, the default serialize method will be used; if given
-                a callable, that callable will be used as the serialize method.
- """
- self._lst = lst
- self._copy = copy
- if not isinstance(serialize, (bool, Callable)):
-            raise TypeError(f"Unsupported type for argument `serialize`: {serialize}")
- self._serialize = serialize is not False
-
- if self._serialize:
- serialize_method = (
- serialize
- if isinstance(serialize, Callable)
- else _DEFAULT_DATASET_FROM_LIST_SERIALIZE_METHOD
- )
- logger.info(f"Serializing the dataset using: {serialize_method}")
- self._lst = serialize_method(self._lst)
-
- def __len__(self):
- return len(self._lst)
-
- def __getitem__(self, idx):
- if self._copy and not self._serialize:
- return copy.deepcopy(self._lst[idx])
- else:
- return self._lst[idx]
-
-
-class ToIterableDataset(data.IterableDataset):
- """
- Convert an old indices-based (also called map-style) dataset
- to an iterable-style dataset.
- """
-
- def __init__(self, dataset: data.Dataset, sampler: Sampler, shard_sampler: bool = True):
- """
- Args:
- dataset: an old-style dataset with ``__getitem__``
- sampler: a cheap iterable that produces indices to be applied on ``dataset``.
- shard_sampler: whether to shard the sampler based on the current pytorch data loader
- worker id. When an IterableDataset is forked by pytorch's DataLoader into multiple
- workers, it is responsible for sharding its data based on worker id so that workers
- don't produce identical data.
-
- Most samplers (like our TrainingSampler) do not shard based on dataloader worker id
- and this argument should be set to True. But certain samplers may be already
- sharded, in that case this argument should be set to False.
- """
- assert not isinstance(dataset, data.IterableDataset), dataset
- assert isinstance(sampler, Sampler), sampler
- self.dataset = dataset
- self.sampler = sampler
- self.shard_sampler = shard_sampler
-
- def __iter__(self):
- if not self.shard_sampler:
- sampler = self.sampler
- else:
- # With map-style dataset, `DataLoader(dataset, sampler)` runs the
- # sampler in main process only. But `DataLoader(ToIterableDataset(dataset, sampler))`
-            # will run the sampler in every one of the N workers. So we should only keep 1/N of the ids on
- # each worker. The assumption is that sampler is cheap to iterate so it's fine to
- # discard ids in workers.
- sampler = _shard_iterator_dataloader_worker(self.sampler)
- for idx in sampler:
- yield self.dataset[idx]
-
- def __len__(self):
- return len(self.sampler)
-
-
-class AspectRatioGroupedDataset(data.IterableDataset):
- """
- Batch data that have similar aspect ratio together.
- In this implementation, images whose aspect ratio < (or >) 1 will
- be batched together.
- This improves training speed because the images then need less padding
- to form a batch.
-
- It assumes the underlying dataset produces dicts with "width" and "height" keys.
- It will then produce a list of original dicts with length = batch_size,
- all with similar aspect ratios.
- """
-
- def __init__(self, dataset, batch_size):
- """
- Args:
- dataset: an iterable. Each element must be a dict with keys
- "width" and "height", which will be used to batch data.
- batch_size (int):
- """
- self.dataset = dataset
- self.batch_size = batch_size
- self._buckets = [[] for _ in range(2)]
- # Hard-coded two aspect ratio groups: w > h and w < h.
- # Can add support for more aspect ratio groups, but doesn't seem useful
-
- def __iter__(self):
- for d in self.dataset:
- w, h = d["width"], d["height"]
- bucket_id = 0 if w > h else 1
- bucket = self._buckets[bucket_id]
- bucket.append(d)
- if len(bucket) == self.batch_size:
- data = bucket[:]
- # Clear bucket first, because code after yield is not
- # guaranteed to execute
- del bucket[:]
- yield data
diff --git a/spaces/cbensimon/stable-diffusion-xl/README.md b/spaces/cbensimon/stable-diffusion-xl/README.md
deleted file mode 100644
index 4bd84f8c6a4d1f72159766b8d00b528a45bef148..0000000000000000000000000000000000000000
--- a/spaces/cbensimon/stable-diffusion-xl/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Stable Diffusion Xl
-emoji: 🏢
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py b/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py
deleted file mode 100644
index ea01f5b352a5b643166395ed442b349aa4deeca9..0000000000000000000000000000000000000000
--- a/spaces/chansung/LLM-As-Chatbot/scripts/hparams_explore.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import time
-import itertools
-import wandb
-from transformers import GenerationConfig
-
-wandb.login(key="")
-
-PROJECT="txt_gen_test_project"
-
-generation_configs = {
- "temperature": [0.5, 0.7, 0.8, 0.9, 1.0],
- "top_p": [0.5, 0.75, 0.85, 0.95, 1.0],
- "num_beams": [1, 2, 3, 4]
-}
-
-num_gens = 1
-
-# token initialization
-# model initialization
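-# NOTE: the two comments above are placeholders -- load the tokenizer and model here before running the sweep.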
-
-for comb in itertools.product(generation_configs['temperature'],
- generation_configs['top_p'],
- generation_configs['num_beams']):
- temperature = comb[0]
- top_p = comb[1]
- num_beams = comb[2]
-
- generation_config = GenerationConfig(
- temperature=temperature,
- top_p=top_p,
- num_beams=num_beams,
- )
-
- first_columns = [f"gen_txt_{num}" for num in range(num_gens)]
- columns = first_columns + ["temperature", "top_p", "num_beams", "time_delta"]
-
- avg_time_delta = 0
- txt_gens = []
- for i in range(num_gens):
- start = time.time()
- # text generation
- text = "dummy text"
- txt_gens.append(text)
-
- # decode outputs
- end = time.time()
- t_delta = end - start
- avg_time_delta = avg_time_delta + t_delta
-
- avg_time_delta = round(avg_time_delta / num_gens, 4)
-
- wandb.init(
- project=PROJECT,
- name=f"t@{temperature}-tp@{top_p}-nb@{num_beams}",
- config=generation_config,
- )
-
- text_table = wandb.Table(columns=columns)
- text_table.add_data(*txt_gens, temperature, top_p, num_beams, avg_time_delta)
-
- wandb.log({
- "avg_t_delta": avg_time_delta,
- "results": text_table
- })
-
- wandb.finish()
diff --git a/spaces/chilleverydaychill/roop/roop/core.py b/spaces/chilleverydaychill/roop/roop/core.py
deleted file mode 100644
index 05f36bc720bfd7a4fd2741054b50204229c68151..0000000000000000000000000000000000000000
--- a/spaces/chilleverydaychill/roop/roop/core.py
+++ /dev/null
@@ -1,211 +0,0 @@
-#!/usr/bin/env python3
-
-import os
-import sys
-# single thread doubles cuda performance - needs to be set before torch import
-if any(arg.startswith('--execution-provider') for arg in sys.argv):
- os.environ['OMP_NUM_THREADS'] = '1'
-# reduce tensorflow log level
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
-import warnings
-from typing import List
-import platform
-import signal
-import shutil
-import argparse
-import torch
-import onnxruntime
-import tensorflow
-
-import roop.globals
-import roop.metadata
-import roop.ui as ui
-from roop.predicter import predict_image, predict_video
-from roop.processors.frame.core import get_frame_processors_modules
-from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path
-
-if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- del torch
-
-warnings.filterwarnings('ignore', category=FutureWarning, module='insightface')
-warnings.filterwarnings('ignore', category=UserWarning, module='torchvision')
-
-
-def parse_args() -> None:
- signal.signal(signal.SIGINT, lambda signal_number, frame: destroy())
- program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100))
-    program.add_argument('-s', '--source', help='select a source image', dest='source_path')
-    program.add_argument('-t', '--target', help='select a target image or video', dest='target_path')
- program.add_argument('-o', '--output', help='select output file or directory', dest='output_path')
- program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+')
- program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False)
- program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True)
- program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False)
- program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False)
- program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9'])
- program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]')
- program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory())
- program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+')
- program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads())
- program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}')
-
- args = program.parse_args()
-
- roop.globals.source_path = args.source_path
- roop.globals.target_path = args.target_path
- roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path)
- roop.globals.frame_processors = args.frame_processor
- roop.globals.headless = args.source_path or args.target_path or args.output_path
- roop.globals.keep_fps = args.keep_fps
- roop.globals.keep_audio = args.keep_audio
- roop.globals.keep_frames = args.keep_frames
- roop.globals.many_faces = args.many_faces
- roop.globals.video_encoder = args.video_encoder
- roop.globals.video_quality = args.video_quality
- roop.globals.max_memory = args.max_memory
- roop.globals.execution_providers = decode_execution_providers(args.execution_provider)
- roop.globals.execution_threads = args.execution_threads
-
-
-def encode_execution_providers(execution_providers: List[str]) -> List[str]:
- return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers]
-
-
-def decode_execution_providers(execution_providers: List[str]) -> List[str]:
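-    # Match the short, lower-cased names given on the CLI (e.g. 'cuda') back to the full onnxruntime provider names.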
- return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers()))
- if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)]
-
-
-def suggest_max_memory() -> int:
- if platform.system().lower() == 'darwin':
- return 4
- return 16
-
-
-def suggest_execution_providers() -> List[str]:
- return encode_execution_providers(onnxruntime.get_available_providers())
-
-
-def suggest_execution_threads() -> int:
- if 'DmlExecutionProvider' in roop.globals.execution_providers:
- return 1
- if 'ROCMExecutionProvider' in roop.globals.execution_providers:
- return 1
- return 8
-
-
-def limit_resources() -> None:
- # prevent tensorflow memory leak
- gpus = tensorflow.config.experimental.list_physical_devices('GPU')
- for gpu in gpus:
- tensorflow.config.experimental.set_virtual_device_configuration(gpu, [
- tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)
- ])
- # limit memory usage
- if roop.globals.max_memory:
- memory = roop.globals.max_memory * 1024 ** 3
- if platform.system().lower() == 'darwin':
- memory = roop.globals.max_memory * 1024 ** 6
- if platform.system().lower() == 'windows':
- import ctypes
- kernel32 = ctypes.windll.kernel32
- kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory))
- else:
- import resource
- resource.setrlimit(resource.RLIMIT_DATA, (memory, memory))
-
-
-def release_resources() -> None:
- if 'CUDAExecutionProvider' in roop.globals.execution_providers:
- torch.cuda.empty_cache()
-
-
-def pre_check() -> bool:
- if sys.version_info < (3, 9):
- update_status('Python version is not supported - please upgrade to 3.9 or higher.')
- return False
- if not shutil.which('ffmpeg'):
- update_status('ffmpeg is not installed.')
- return False
- return True
-
-
-def update_status(message: str, scope: str = 'ROOP.CORE') -> None:
- print(f'[{scope}] {message}')
- if not roop.globals.headless:
- ui.update_status(message)
-
-
-def start() -> None:
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_start():
- return
- # process image to image
- if has_image_extension(roop.globals.target_path):
- shutil.copy2(roop.globals.target_path, roop.globals.output_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
-            update_status('Processing...', frame_processor.NAME)
- frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path)
- frame_processor.post_process()
- release_resources()
- if is_image(roop.globals.target_path):
-            update_status('Processing to image succeeded!')
- else:
- update_status('Processing to image failed!')
- return
- # process image to videos
- update_status('Creating temp resources...')
- create_temp(roop.globals.target_path)
- update_status('Extracting frames...')
- extract_frames(roop.globals.target_path)
- temp_frame_paths = get_temp_frame_paths(roop.globals.target_path)
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
-        update_status('Processing...', frame_processor.NAME)
- frame_processor.process_video(roop.globals.source_path, temp_frame_paths)
- frame_processor.post_process()
- release_resources()
- # handles fps
- if roop.globals.keep_fps:
- update_status('Detecting fps...')
- fps = detect_fps(roop.globals.target_path)
- update_status(f'Creating video with {fps} fps...')
- create_video(roop.globals.target_path, fps)
- else:
- update_status('Creating video with 30.0 fps...')
- create_video(roop.globals.target_path)
- # handle audio
- if roop.globals.keep_audio:
- if roop.globals.keep_fps:
- update_status('Restoring audio...')
- else:
- update_status('Restoring audio might cause issues as fps are not kept...')
- restore_audio(roop.globals.target_path, roop.globals.output_path)
- else:
- move_temp(roop.globals.target_path, roop.globals.output_path)
- # clean and validate
- clean_temp(roop.globals.target_path)
- if is_video(roop.globals.target_path):
-        update_status('Processing to video succeeded!')
- else:
- update_status('Processing to video failed!')
-
-
-def destroy() -> None:
- if roop.globals.target_path:
- clean_temp(roop.globals.target_path)
- quit()
-
-
-def run() -> None:
- parse_args()
- if not pre_check():
- return
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
- if not frame_processor.pre_check():
- return
- limit_resources()
- if roop.globals.headless:
- start()
- else:
- window = ui.init(start, destroy)
- window.mainloop()
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py
deleted file mode 100644
index 24f5e131f0c615dcf86b0494854d9a3a5a1284f2..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_t_a_g.py
+++ /dev/null
@@ -1,64 +0,0 @@
-from fontTools.misc.textTools import bytesjoin, tobytes, safeEval
-from . import DefaultTable
-import struct
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6ltag.html
-
-
-class table__l_t_a_g(DefaultTable.DefaultTable):
- def __init__(self, tag=None):
- DefaultTable.DefaultTable.__init__(self, tag)
- self.version, self.flags = 1, 0
- self.tags = []
-
- def addTag(self, tag):
-        """Add 'tag' to the list of language tags if not already there.
-
- Returns the integer index of 'tag' in the list of all tags.
- """
- try:
- return self.tags.index(tag)
- except ValueError:
- self.tags.append(tag)
- return len(self.tags) - 1
-
- def decompile(self, data, ttFont):
- self.version, self.flags, numTags = struct.unpack(">LLL", data[:12])
- assert self.version == 1
- self.tags = []
- for i in range(numTags):
- pos = 12 + i * 4
- offset, length = struct.unpack(">HH", data[pos : pos + 4])
- tag = data[offset : offset + length].decode("ascii")
- self.tags.append(tag)
-
- def compile(self, ttFont):
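-        # Pack the header, then one (offset, length) record per tag, followed by a shared
-        # string pool in which repeated tags are stored only once.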
- dataList = [struct.pack(">LLL", self.version, self.flags, len(self.tags))]
- stringPool = ""
- for tag in self.tags:
- offset = stringPool.find(tag)
- if offset < 0:
- offset = len(stringPool)
- stringPool = stringPool + tag
- offset = offset + 12 + len(self.tags) * 4
- dataList.append(struct.pack(">HH", offset, len(tag)))
- dataList.append(tobytes(stringPool))
- return bytesjoin(dataList)
-
- def toXML(self, writer, ttFont):
- writer.simpletag("version", value=self.version)
- writer.newline()
- writer.simpletag("flags", value=self.flags)
- writer.newline()
- for tag in self.tags:
- writer.simpletag("LanguageTag", tag=tag)
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if not hasattr(self, "tags"):
- self.tags = []
- if name == "LanguageTag":
- self.tags.append(attrs["tag"])
- elif "value" in attrs:
- value = safeEval(attrs["value"])
- setattr(self, name, value)
diff --git a/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md b/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md
deleted file mode 100644
index 11d56e49a4b4981b760487fb467cb4d963de50d4..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Giorgio.Vanni.2014.Super.Hits..Il.Meglio.del.Megli championsleague zitt Experience the Magic of Giorgio Vannis Super Hits.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md b/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md
deleted file mode 100644
index 961264aac7024b2b24d58ecac4a7959bcb094a91..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Janome Digitizer Pro Software Download Torrent Download Downloadl Explore the Features and Benefits of the Software.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
For customers who do not have a CD-ROM drive in their computer, below are links to download the software. In order to activate the software, an activation code is required. See your local authorized dealer to purchase the software and receive an activation code.
-
Janome Digitizer Pro Software Download Torrent Download Downloadl
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cleanmaster/so-vits-svc-akagi/app.py b/spaces/cleanmaster/so-vits-svc-akagi/app.py
deleted file mode 100644
index 1bcbed71e114bb3da48f841c2d13fc4ca13d5377..0000000000000000000000000000000000000000
--- a/spaces/cleanmaster/so-vits-svc-akagi/app.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from inference.infer_tool_grad import VitsSvc
-import gradio as gr
-import os
-
-class VitsGradio:
- def __init__(self):
- self.so = VitsSvc()
- self.lspk = []
- self.modelPaths = []
- for root,dirs,files in os.walk("checkpoints"):
- for dir in dirs:
- self.modelPaths.append(dir)
- with gr.Blocks() as self.Vits:
- with gr.Tab("转换"):
- with gr.Row(visible=False) as self.VoiceConversion:
- with gr.Column():
- with gr.Row():
- with gr.Column():
- self.srcaudio = gr.Audio(label = "输入音频")
- self.record = gr.Audio(source="microphone", label="或者录制你的声音")
- self.btnVC = gr.Button("说话人转换(上传的音频)")
- self.btnVC2 = gr.Button("说话人转换(录制的音频)")
- with gr.Column():
- self.dsid = gr.Dropdown(label = "目标角色", choices = self.lspk)
- self.tran = gr.Slider(label = "升降调(男声输入需微调,女声输入需降低8~12)", maximum = 60, minimum = -60, step = 1, value = 0)
- self.th = gr.Slider(label = "切片阈值", maximum = 32767, minimum = -32768, step = 0.1, value = -40)
- with gr.Row():
- self.VCOutputs = gr.Audio()
- self.btnVC.click(self.so.inference, inputs=[self.srcaudio,self.dsid,self.tran,self.th], outputs=[self.VCOutputs])
- self.btnVC2.click(self.so.inference, inputs=[self.record,self.dsid,self.tran,self.th], outputs=[self.VCOutputs])
- with gr.Tab("选择模型"):
- with gr.Column():
- modelstrs = gr.Dropdown(label = "模型", choices = self.modelPaths, value = self.modelPaths[0], type = "value")
- devicestrs = gr.Dropdown(label = "设备(只能选择cpu)", choices = ["cpu","cuda"], value = "cpu", type = "value")
- btnMod = gr.Button("载入模型")
- btnMod.click(self.loadModel, inputs=[modelstrs,devicestrs], outputs = [self.dsid,self.VoiceConversion])
-
- def loadModel(self, path, device):
- self.lspk = []
- self.so.set_device(device)
- self.so.loadCheckpoint(path)
- for spk, sid in self.so.hps.spk.items():
- self.lspk.append(spk)
- VChange = gr.update(visible = True)
- SDChange = gr.update(choices = self.lspk, value = self.lspk[0])
- return [SDChange,VChange]
-
- def chooseAudio(self, record, srcaudio, dsid, tran, th):
- if not record is None:
- self.file=record
- elif not srcaudio is None:
- self.file=srcaudio
- return(self.so.inference(self.file,self.dsid,self.tran,self.th))
-
-
-grVits = VitsGradio()
-
-grVits.Vits.launch()
\ No newline at end of file
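The noteworthy pattern in this deleted app is how `loadModel()` returns `gr.update(...)` objects so that a single click both repopulates the speaker dropdown and reveals the hidden conversion row. Below is a minimal, self-contained sketch of that same pattern, for illustration only: the component names and speaker list are made up, and it assumes a Gradio 3.x install like the one this Space appears to have used.

```python
import gradio as gr

def load_model(model_name):
    # In the real app the speaker list comes from the checkpoint's hps.spk mapping;
    # here it is a hard-coded placeholder.
    speakers = ["speaker_a", "speaker_b"]
    return gr.update(choices=speakers, value=speakers[0]), gr.update(visible=True)

with gr.Blocks() as demo:
    with gr.Row(visible=False) as conversion_row:      # hidden until a model is loaded
        speaker = gr.Dropdown(label="Target speaker", choices=[])
    model = gr.Dropdown(label="Model", choices=["model_a", "model_b"], value="model_a")
    load_btn = gr.Button("Load model")
    # One click updates two outputs: the dropdown's choices and the row's visibility,
    # the same technique used by loadModel() in the deleted file above.
    load_btn.click(load_model, inputs=[model], outputs=[speaker, conversion_row])

if __name__ == "__main__":
    demo.launch()
```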
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h
deleted file mode 100644
index f4a5d2830ec00c112d69f184dd11770bbfbe3463..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/*
- * AV1 common parsing code
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_AV1_PARSE_H
-#define AVCODEC_AV1_PARSE_H
-
-#include <limits.h>
-#include <stdint.h>
-
-#include "libavutil/error.h"
-#include "libavutil/intmath.h"
-#include "libavutil/macros.h"
-
-#include "av1.h"
-#include "get_bits.h"
-
-// OBU header fields + max leb128 length
-#define MAX_OBU_HEADER_SIZE (2 + 8)
-
-typedef struct AV1OBU {
- /** Size of payload */
- int size;
- const uint8_t *data;
-
- /**
- * Size, in bits, of just the data, excluding the trailing_one_bit and
- * any trailing padding.
- */
- int size_bits;
-
- /** Size of entire OBU, including header */
- int raw_size;
- const uint8_t *raw_data;
-
- /** GetBitContext initialized to the start of the payload */
- GetBitContext gb;
-
- int type;
-
- int temporal_id;
- int spatial_id;
-} AV1OBU;
-
-/** An input packet split into OBUs */
-typedef struct AV1Packet {
- AV1OBU *obus;
- int nb_obus;
- int obus_allocated;
- unsigned obus_allocated_size;
-} AV1Packet;
-
-/**
- * Extract an OBU from a raw bitstream.
- *
- * @note This function does not copy or store any bitstream data. All
- * the pointers in the AV1OBU structure will be valid as long
- * as the input buffer also is.
- */
-int ff_av1_extract_obu(AV1OBU *obu, const uint8_t *buf, int length,
- void *logctx);
-
-/**
- * Split an input packet into OBUs.
- *
- * @note This function does not copy or store any bitstream data. All
- * the pointers in the AV1Packet structure will be valid as
- * long as the input buffer also is.
- */
-int ff_av1_packet_split(AV1Packet *pkt, const uint8_t *buf, int length,
- void *logctx);
-
-/**
- * Free all the allocated memory in the packet.
- */
-void ff_av1_packet_uninit(AV1Packet *pkt);
-
-static inline int64_t leb128(GetBitContext *gb) {
- int64_t ret = 0;
- int i;
-
- for (i = 0; i < 8; i++) {
- int byte = get_bits(gb, 8);
- ret |= (int64_t)(byte & 0x7f) << (i * 7);
- if (!(byte & 0x80))
- break;
- }
- return ret;
-}
-
-static inline int parse_obu_header(const uint8_t *buf, int buf_size,
- int64_t *obu_size, int *start_pos, int *type,
- int *temporal_id, int *spatial_id)
-{
- GetBitContext gb;
- int ret, extension_flag, has_size_flag;
- int64_t size;
-
- ret = init_get_bits8(&gb, buf, FFMIN(buf_size, MAX_OBU_HEADER_SIZE));
- if (ret < 0)
- return ret;
-
- if (get_bits1(&gb) != 0) // obu_forbidden_bit
- return AVERROR_INVALIDDATA;
-
- *type = get_bits(&gb, 4);
- extension_flag = get_bits1(&gb);
- has_size_flag = get_bits1(&gb);
- skip_bits1(&gb); // obu_reserved_1bit
-
- if (extension_flag) {
- *temporal_id = get_bits(&gb, 3);
- *spatial_id = get_bits(&gb, 2);
- skip_bits(&gb, 3); // extension_header_reserved_3bits
- } else {
- *temporal_id = *spatial_id = 0;
- }
-
- *obu_size = has_size_flag ? leb128(&gb)
- : buf_size - 1 - extension_flag;
-
- if (get_bits_left(&gb) < 0)
- return AVERROR_INVALIDDATA;
-
- *start_pos = get_bits_count(&gb) / 8;
-
- size = *obu_size + *start_pos;
-
- if (size > buf_size)
- return AVERROR_INVALIDDATA;
-
- return size;
-}
-
-static inline int get_obu_bit_length(const uint8_t *buf, int size, int type)
-{
- int v;
-
- /* There are no trailing bits on these */
- if (type == AV1_OBU_TILE_GROUP ||
- type == AV1_OBU_TILE_LIST ||
- type == AV1_OBU_FRAME) {
- if (size > INT_MAX / 8)
- return AVERROR(ERANGE);
- else
- return size * 8;
- }
-
- while (size > 0 && buf[size - 1] == 0)
- size--;
-
- if (!size)
- return 0;
-
- v = buf[size - 1];
-
- if (size > INT_MAX / 8)
- return AVERROR(ERANGE);
- size *= 8;
-
- /* Remove the trailing_one_bit and following trailing zeros */
- if (v)
- size -= ff_ctz(v) + 1;
-
- return size;
-}
-
-#endif /* AVCODEC_AV1_PARSE_H */
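As a quick illustration of the header layout parsed by `leb128()` and `parse_obu_header()` above, here is a rough Python transcription of the same bit manipulation. This is not FFmpeg code; the field order simply follows the inline functions in this deleted header, and the sample bytes are the textbook ULEB128 encoding of 624485.

```python
def parse_uleb128(buf, pos):
    """Decode an unsigned LEB128 value, mirroring the leb128() helper: up to 8 bytes,
    7 payload bits per byte, stop when the continuation (0x80) bit is clear."""
    value = 0
    for i in range(8):
        byte = buf[pos + i]
        value |= (byte & 0x7F) << (i * 7)
        if not (byte & 0x80):
            return value, pos + i + 1
    return value, pos + 8

def parse_obu_header(buf):
    """Rough transcription of parse_obu_header(): forbidden bit, 4-bit type,
    extension flag, size flag, then the optional extension byte and leb128 size."""
    b0 = buf[0]
    if b0 >> 7:                              # obu_forbidden_bit must be zero
        raise ValueError("invalid OBU header")
    obu_type       = (b0 >> 3) & 0x0F
    extension_flag = (b0 >> 2) & 1
    has_size_flag  = (b0 >> 1) & 1
    pos = 1
    temporal_id = spatial_id = 0
    if extension_flag:
        ext = buf[1]
        temporal_id = ext >> 5               # 3 bits
        spatial_id  = (ext >> 3) & 0x03      # 2 bits
        pos = 2
    if has_size_flag:
        obu_size, pos = parse_uleb128(buf, pos)
    else:
        obu_size = len(buf) - 1 - extension_flag
    return obu_type, temporal_id, spatial_id, obu_size, pos

# 0xE5 0x8E 0x26 is the classic ULEB128 encoding of 624485.
assert parse_uleb128(bytes([0xE5, 0x8E, 0x26]), 0) == (624485, 3)
```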
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c
deleted file mode 100644
index 420d0953af158022055d90b78add5bd6e70581c5..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libx265.c
+++ /dev/null
@@ -1,909 +0,0 @@
-/*
- * libx265 encoder
- *
- * Copyright (c) 2013-2014 Derek Buitenhuis
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#if defined(_MSC_VER)
-#define X265_API_IMPORTS 1
-#endif
-
-#include <x265.h>
-#include <float.h>
-
-#include "libavutil/avassert.h"
-#include "libavutil/buffer.h"
-#include "libavutil/internal.h"
-#include "libavutil/common.h"
-#include "libavutil/opt.h"
-#include "libavutil/pixdesc.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "encode.h"
-#include "internal.h"
-#include "packet_internal.h"
-#include "atsc_a53.h"
-#include "sei.h"
-
-typedef struct ReorderedData {
-#if FF_API_REORDERED_OPAQUE
- int64_t reordered_opaque;
-#endif
- int64_t duration;
-
- void *frame_opaque;
- AVBufferRef *frame_opaque_ref;
-
- int in_use;
-} ReorderedData;
-
-typedef struct libx265Context {
- const AVClass *class;
-
- x265_encoder *encoder;
- x265_param *params;
- const x265_api *api;
-
- float crf;
- int cqp;
- int forced_idr;
- char *preset;
- char *tune;
- char *profile;
- AVDictionary *x265_opts;
-
- void *sei_data;
- int sei_data_size;
- int udu_sei;
- int a53_cc;
-
- ReorderedData *rd;
- int nb_rd;
-
- /**
- * If the encoder does not support ROI then warn the first time we
- * encounter a frame with ROI side data.
- */
- int roi_warned;
-} libx265Context;
-
-static int is_keyframe(NalUnitType naltype)
-{
- switch (naltype) {
- case NAL_UNIT_CODED_SLICE_BLA_W_LP:
- case NAL_UNIT_CODED_SLICE_BLA_W_RADL:
- case NAL_UNIT_CODED_SLICE_BLA_N_LP:
- case NAL_UNIT_CODED_SLICE_IDR_W_RADL:
- case NAL_UNIT_CODED_SLICE_IDR_N_LP:
- case NAL_UNIT_CODED_SLICE_CRA:
- return 1;
- default:
- return 0;
- }
-}
-
-static int rd_get(libx265Context *ctx)
-{
- const int add = 16;
-
- ReorderedData *tmp;
- int idx;
-
- for (int i = 0; i < ctx->nb_rd; i++)
- if (!ctx->rd[i].in_use) {
- ctx->rd[i].in_use = 1;
- return i;
- }
-
- tmp = av_realloc_array(ctx->rd, ctx->nb_rd + add, sizeof(*ctx->rd));
- if (!tmp)
- return AVERROR(ENOMEM);
- memset(tmp + ctx->nb_rd, 0, sizeof(*tmp) * add);
-
- ctx->rd = tmp;
- ctx->nb_rd += add;
-
- idx = ctx->nb_rd - add;
- ctx->rd[idx].in_use = 1;
-
- return idx;
-}
-
-static void rd_release(libx265Context *ctx, int idx)
-{
- av_assert0(idx >= 0 && idx < ctx->nb_rd);
- av_buffer_unref(&ctx->rd[idx].frame_opaque_ref);
- memset(&ctx->rd[idx], 0, sizeof(ctx->rd[idx]));
-}
-
-static av_cold int libx265_encode_close(AVCodecContext *avctx)
-{
- libx265Context *ctx = avctx->priv_data;
-
- ctx->api->param_free(ctx->params);
- av_freep(&ctx->sei_data);
-
- for (int i = 0; i < ctx->nb_rd; i++)
- rd_release(ctx, i);
- av_freep(&ctx->rd);
-
- if (ctx->encoder)
- ctx->api->encoder_close(ctx->encoder);
-
- return 0;
-}
-
-static av_cold int libx265_param_parse_float(AVCodecContext *avctx,
- const char *key, float value)
-{
- libx265Context *ctx = avctx->priv_data;
- char buf[256];
-
- snprintf(buf, sizeof(buf), "%2.2f", value);
- if (ctx->api->param_parse(ctx->params, key, buf) == X265_PARAM_BAD_VALUE) {
- av_log(avctx, AV_LOG_ERROR, "Invalid value %2.2f for param \"%s\".\n", value, key);
- return AVERROR(EINVAL);
- }
-
- return 0;
-}
-
-static av_cold int libx265_param_parse_int(AVCodecContext *avctx,
- const char *key, int value)
-{
- libx265Context *ctx = avctx->priv_data;
- char buf[256];
-
- snprintf(buf, sizeof(buf), "%d", value);
- if (ctx->api->param_parse(ctx->params, key, buf) == X265_PARAM_BAD_VALUE) {
- av_log(avctx, AV_LOG_ERROR, "Invalid value %d for param \"%s\".\n", value, key);
- return AVERROR(EINVAL);
- }
-
- return 0;
-}
-
-static av_cold int libx265_encode_init(AVCodecContext *avctx)
-{
- libx265Context *ctx = avctx->priv_data;
- AVCPBProperties *cpb_props = NULL;
- const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt);
- int ret;
-
- ctx->api = x265_api_get(desc->comp[0].depth);
- if (!ctx->api)
- ctx->api = x265_api_get(0);
-
- ctx->params = ctx->api->param_alloc();
- if (!ctx->params) {
- av_log(avctx, AV_LOG_ERROR, "Could not allocate x265 param structure.\n");
- return AVERROR(ENOMEM);
- }
-
- if (ctx->api->param_default_preset(ctx->params, ctx->preset, ctx->tune) < 0) {
- int i;
-
- av_log(avctx, AV_LOG_ERROR, "Error setting preset/tune %s/%s.\n", ctx->preset, ctx->tune);
- av_log(avctx, AV_LOG_INFO, "Possible presets:");
- for (i = 0; x265_preset_names[i]; i++)
- av_log(avctx, AV_LOG_INFO, " %s", x265_preset_names[i]);
-
- av_log(avctx, AV_LOG_INFO, "\n");
- av_log(avctx, AV_LOG_INFO, "Possible tunes:");
- for (i = 0; x265_tune_names[i]; i++)
- av_log(avctx, AV_LOG_INFO, " %s", x265_tune_names[i]);
-
- av_log(avctx, AV_LOG_INFO, "\n");
-
- return AVERROR(EINVAL);
- }
-
- ctx->params->frameNumThreads = avctx->thread_count;
- if (avctx->framerate.num > 0 && avctx->framerate.den > 0) {
- ctx->params->fpsNum = avctx->framerate.num;
- ctx->params->fpsDenom = avctx->framerate.den;
- } else {
- ctx->params->fpsNum = avctx->time_base.den;
- ctx->params->fpsDenom = avctx->time_base.num * avctx->ticks_per_frame;
- }
- ctx->params->sourceWidth = avctx->width;
- ctx->params->sourceHeight = avctx->height;
- ctx->params->bEnablePsnr = !!(avctx->flags & AV_CODEC_FLAG_PSNR);
- ctx->params->bOpenGOP = !(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
-
- /* Tune the CTU size based on input resolution. */
- if (ctx->params->sourceWidth < 64 || ctx->params->sourceHeight < 64)
- ctx->params->maxCUSize = 32;
- if (ctx->params->sourceWidth < 32 || ctx->params->sourceHeight < 32)
- ctx->params->maxCUSize = 16;
- if (ctx->params->sourceWidth < 16 || ctx->params->sourceHeight < 16) {
- av_log(avctx, AV_LOG_ERROR, "Image size is too small (%dx%d).\n",
- ctx->params->sourceWidth, ctx->params->sourceHeight);
- return AVERROR(EINVAL);
- }
-
-
- ctx->params->vui.bEnableVideoSignalTypePresentFlag = 1;
-
- if (avctx->color_range != AVCOL_RANGE_UNSPECIFIED)
- ctx->params->vui.bEnableVideoFullRangeFlag =
- avctx->color_range == AVCOL_RANGE_JPEG;
- else
- ctx->params->vui.bEnableVideoFullRangeFlag =
- (desc->flags & AV_PIX_FMT_FLAG_RGB) ||
- avctx->pix_fmt == AV_PIX_FMT_YUVJ420P ||
- avctx->pix_fmt == AV_PIX_FMT_YUVJ422P ||
- avctx->pix_fmt == AV_PIX_FMT_YUVJ444P;
-
- if ((avctx->color_primaries <= AVCOL_PRI_SMPTE432 &&
- avctx->color_primaries != AVCOL_PRI_UNSPECIFIED) ||
- (avctx->color_trc <= AVCOL_TRC_ARIB_STD_B67 &&
- avctx->color_trc != AVCOL_TRC_UNSPECIFIED) ||
- (avctx->colorspace <= AVCOL_SPC_ICTCP &&
- avctx->colorspace != AVCOL_SPC_UNSPECIFIED)) {
-
- ctx->params->vui.bEnableColorDescriptionPresentFlag = 1;
-
- // x265 validates the parameters internally
- ctx->params->vui.colorPrimaries = avctx->color_primaries;
- ctx->params->vui.transferCharacteristics = avctx->color_trc;
-#if X265_BUILD >= 159
- if (avctx->color_trc == AVCOL_TRC_ARIB_STD_B67)
- ctx->params->preferredTransferCharacteristics = ctx->params->vui.transferCharacteristics;
-#endif
- ctx->params->vui.matrixCoeffs = avctx->colorspace;
- }
-
- // chroma sample location values are to be ignored in case of non-4:2:0
- // according to the specification, so we only write them out in case of
- // 4:2:0 (log2_chroma_{w,h} == 1).
- ctx->params->vui.bEnableChromaLocInfoPresentFlag =
- avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED &&
- desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1;
-
- if (ctx->params->vui.bEnableChromaLocInfoPresentFlag) {
- ctx->params->vui.chromaSampleLocTypeTopField =
- ctx->params->vui.chromaSampleLocTypeBottomField =
- avctx->chroma_sample_location - 1;
- }
-
- if (avctx->sample_aspect_ratio.num > 0 && avctx->sample_aspect_ratio.den > 0) {
- char sar[12];
- int sar_num, sar_den;
-
- av_reduce(&sar_num, &sar_den,
- avctx->sample_aspect_ratio.num,
- avctx->sample_aspect_ratio.den, 65535);
- snprintf(sar, sizeof(sar), "%d:%d", sar_num, sar_den);
- if (ctx->api->param_parse(ctx->params, "sar", sar) == X265_PARAM_BAD_VALUE) {
- av_log(avctx, AV_LOG_ERROR, "Invalid SAR: %d:%d.\n", sar_num, sar_den);
- return AVERROR_INVALIDDATA;
- }
- }
-
- switch (desc->log2_chroma_w) {
- // 4:4:4, RGB. gray
- case 0:
- // gray
- if (desc->nb_components == 1) {
- if (ctx->api->api_build_number < 85) {
- av_log(avctx, AV_LOG_ERROR,
- "libx265 version is %d, must be at least 85 for gray encoding.\n",
- ctx->api->api_build_number);
- return AVERROR_INVALIDDATA;
- }
- ctx->params->internalCsp = X265_CSP_I400;
- break;
- }
-
- // set identity matrix for RGB
- if (desc->flags & AV_PIX_FMT_FLAG_RGB) {
- ctx->params->vui.matrixCoeffs = AVCOL_SPC_RGB;
- ctx->params->vui.bEnableVideoSignalTypePresentFlag = 1;
- ctx->params->vui.bEnableColorDescriptionPresentFlag = 1;
- }
-
- ctx->params->internalCsp = X265_CSP_I444;
- break;
- // 4:2:0, 4:2:2
- case 1:
- ctx->params->internalCsp = desc->log2_chroma_h == 1 ?
- X265_CSP_I420 : X265_CSP_I422;
- break;
- default:
- av_log(avctx, AV_LOG_ERROR,
- "Pixel format '%s' cannot be mapped to a libx265 CSP!\n",
- desc->name);
- return AVERROR_BUG;
- }
-
- if (ctx->crf >= 0) {
- char crf[6];
-
- snprintf(crf, sizeof(crf), "%2.2f", ctx->crf);
- if (ctx->api->param_parse(ctx->params, "crf", crf) == X265_PARAM_BAD_VALUE) {
- av_log(avctx, AV_LOG_ERROR, "Invalid crf: %2.2f.\n", ctx->crf);
- return AVERROR(EINVAL);
- }
- } else if (avctx->bit_rate > 0) {
- ctx->params->rc.bitrate = avctx->bit_rate / 1000;
- ctx->params->rc.rateControlMode = X265_RC_ABR;
- } else if (ctx->cqp >= 0) {
- ret = libx265_param_parse_int(avctx, "qp", ctx->cqp);
- if (ret < 0)
- return ret;
- }
-
- if (avctx->qmin >= 0) {
- ret = libx265_param_parse_int(avctx, "qpmin", avctx->qmin);
- if (ret < 0)
- return ret;
- }
- if (avctx->qmax >= 0) {
- ret = libx265_param_parse_int(avctx, "qpmax", avctx->qmax);
- if (ret < 0)
- return ret;
- }
- if (avctx->max_qdiff >= 0) {
- ret = libx265_param_parse_int(avctx, "qpstep", avctx->max_qdiff);
- if (ret < 0)
- return ret;
- }
- if (avctx->qblur >= 0) {
- ret = libx265_param_parse_float(avctx, "qblur", avctx->qblur);
- if (ret < 0)
- return ret;
- }
- if (avctx->qcompress >= 0) {
- ret = libx265_param_parse_float(avctx, "qcomp", avctx->qcompress);
- if (ret < 0)
- return ret;
- }
- if (avctx->i_quant_factor >= 0) {
- ret = libx265_param_parse_float(avctx, "ipratio", avctx->i_quant_factor);
- if (ret < 0)
- return ret;
- }
- if (avctx->b_quant_factor >= 0) {
- ret = libx265_param_parse_float(avctx, "pbratio", avctx->b_quant_factor);
- if (ret < 0)
- return ret;
- }
-
- ctx->params->rc.vbvBufferSize = avctx->rc_buffer_size / 1000;
- ctx->params->rc.vbvMaxBitrate = avctx->rc_max_rate / 1000;
-
- cpb_props = ff_add_cpb_side_data(avctx);
- if (!cpb_props)
- return AVERROR(ENOMEM);
- cpb_props->buffer_size = ctx->params->rc.vbvBufferSize * 1000;
- cpb_props->max_bitrate = ctx->params->rc.vbvMaxBitrate * 1000LL;
- cpb_props->avg_bitrate = ctx->params->rc.bitrate * 1000LL;
-
- if (!(avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER))
- ctx->params->bRepeatHeaders = 1;
-
- if (avctx->gop_size >= 0) {
- ret = libx265_param_parse_int(avctx, "keyint", avctx->gop_size);
- if (ret < 0)
- return ret;
- }
- if (avctx->keyint_min > 0) {
- ret = libx265_param_parse_int(avctx, "min-keyint", avctx->keyint_min);
- if (ret < 0)
- return ret;
- }
- if (avctx->max_b_frames >= 0) {
- ret = libx265_param_parse_int(avctx, "bframes", avctx->max_b_frames);
- if (ret < 0)
- return ret;
- }
- if (avctx->refs >= 0) {
- ret = libx265_param_parse_int(avctx, "ref", avctx->refs);
- if (ret < 0)
- return ret;
- }
-
- {
- AVDictionaryEntry *en = NULL;
- while ((en = av_dict_get(ctx->x265_opts, "", en, AV_DICT_IGNORE_SUFFIX))) {
- int parse_ret = ctx->api->param_parse(ctx->params, en->key, en->value);
-
- switch (parse_ret) {
- case X265_PARAM_BAD_NAME:
- av_log(avctx, AV_LOG_WARNING,
- "Unknown option: %s.\n", en->key);
- break;
- case X265_PARAM_BAD_VALUE:
- av_log(avctx, AV_LOG_WARNING,
- "Invalid value for %s: %s.\n", en->key, en->value);
- break;
- default:
- break;
- }
- }
- }
-
- if (ctx->params->rc.vbvBufferSize && avctx->rc_initial_buffer_occupancy > 1000 &&
- ctx->params->rc.vbvBufferInit == 0.9) {
- ctx->params->rc.vbvBufferInit = (float)avctx->rc_initial_buffer_occupancy / 1000;
- }
-
- if (ctx->profile) {
- if (ctx->api->param_apply_profile(ctx->params, ctx->profile) < 0) {
- int i;
- av_log(avctx, AV_LOG_ERROR, "Invalid or incompatible profile set: %s.\n", ctx->profile);
- av_log(avctx, AV_LOG_INFO, "Possible profiles:");
- for (i = 0; x265_profile_names[i]; i++)
- av_log(avctx, AV_LOG_INFO, " %s", x265_profile_names[i]);
- av_log(avctx, AV_LOG_INFO, "\n");
- return AVERROR(EINVAL);
- }
- }
-
- ctx->encoder = ctx->api->encoder_open(ctx->params);
- if (!ctx->encoder) {
- av_log(avctx, AV_LOG_ERROR, "Cannot open libx265 encoder.\n");
- libx265_encode_close(avctx);
- return AVERROR_INVALIDDATA;
- }
-
- if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
- x265_nal *nal;
- int nnal;
-
- avctx->extradata_size = ctx->api->encoder_headers(ctx->encoder, &nal, &nnal);
- if (avctx->extradata_size <= 0) {
- av_log(avctx, AV_LOG_ERROR, "Cannot encode headers.\n");
- libx265_encode_close(avctx);
- return AVERROR_INVALIDDATA;
- }
-
- avctx->extradata = av_malloc(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
- if (!avctx->extradata) {
- av_log(avctx, AV_LOG_ERROR,
- "Cannot allocate HEVC header of size %d.\n", avctx->extradata_size);
- libx265_encode_close(avctx);
- return AVERROR(ENOMEM);
- }
-
- memcpy(avctx->extradata, nal[0].payload, avctx->extradata_size);
- memset(avctx->extradata + avctx->extradata_size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
- }
-
- return 0;
-}
-
-static av_cold int libx265_encode_set_roi(libx265Context *ctx, const AVFrame *frame, x265_picture* pic)
-{
- AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST);
- if (sd) {
- if (ctx->params->rc.aqMode == X265_AQ_NONE) {
- if (!ctx->roi_warned) {
- ctx->roi_warned = 1;
- av_log(ctx, AV_LOG_WARNING, "Adaptive quantization must be enabled to use ROI encoding, skipping ROI.\n");
- }
- } else {
- /* 8x8 block when qg-size is 8, 16*16 block otherwise. */
- int mb_size = (ctx->params->rc.qgSize == 8) ? 8 : 16;
- int mbx = (frame->width + mb_size - 1) / mb_size;
- int mby = (frame->height + mb_size - 1) / mb_size;
- int qp_range = 51 + 6 * (pic->bitDepth - 8);
- int nb_rois;
- const AVRegionOfInterest *roi;
- uint32_t roi_size;
- float *qoffsets; /* will be freed after encode is called. */
-
- roi = (const AVRegionOfInterest*)sd->data;
- roi_size = roi->self_size;
- if (!roi_size || sd->size % roi_size != 0) {
- av_log(ctx, AV_LOG_ERROR, "Invalid AVRegionOfInterest.self_size.\n");
- return AVERROR(EINVAL);
- }
- nb_rois = sd->size / roi_size;
-
- qoffsets = av_calloc(mbx * mby, sizeof(*qoffsets));
- if (!qoffsets)
- return AVERROR(ENOMEM);
-
- // This list must be iterated in reverse because the first
- // region in the list applies when regions overlap.
- for (int i = nb_rois - 1; i >= 0; i--) {
- int startx, endx, starty, endy;
- float qoffset;
-
- roi = (const AVRegionOfInterest*)(sd->data + roi_size * i);
-
- starty = FFMIN(mby, roi->top / mb_size);
- endy = FFMIN(mby, (roi->bottom + mb_size - 1)/ mb_size);
- startx = FFMIN(mbx, roi->left / mb_size);
- endx = FFMIN(mbx, (roi->right + mb_size - 1)/ mb_size);
-
- if (roi->qoffset.den == 0) {
- av_free(qoffsets);
- av_log(ctx, AV_LOG_ERROR, "AVRegionOfInterest.qoffset.den must not be zero.\n");
- return AVERROR(EINVAL);
- }
- qoffset = roi->qoffset.num * 1.0f / roi->qoffset.den;
- qoffset = av_clipf(qoffset * qp_range, -qp_range, +qp_range);
-
- for (int y = starty; y < endy; y++)
- for (int x = startx; x < endx; x++)
- qoffsets[x + y*mbx] = qoffset;
- }
-
- pic->quantOffsets = qoffsets;
- }
- }
- return 0;
-}
-
-static void free_picture(libx265Context *ctx, x265_picture *pic)
-{
- x265_sei *sei = &pic->userSEI;
- for (int i = 0; i < sei->numPayloads; i++)
- av_free(sei->payloads[i].payload);
-
- if (pic->userData) {
- int idx = (int)(intptr_t)pic->userData - 1;
- rd_release(ctx, idx);
- pic->userData = NULL;
- }
-
- av_freep(&pic->quantOffsets);
- sei->numPayloads = 0;
-}
-
-static int libx265_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
- const AVFrame *pic, int *got_packet)
-{
- libx265Context *ctx = avctx->priv_data;
- x265_picture x265pic;
- x265_picture x265pic_out = { 0 };
- x265_nal *nal;
- x265_sei *sei;
- uint8_t *dst;
- int pict_type;
- int payload = 0;
- int nnal;
- int ret;
- int i;
-
- ctx->api->picture_init(ctx->params, &x265pic);
-
- sei = &x265pic.userSEI;
- sei->numPayloads = 0;
-
- if (pic) {
- ReorderedData *rd;
- int rd_idx;
-
- for (i = 0; i < 3; i++) {
- x265pic.planes[i] = pic->data[i];
- x265pic.stride[i] = pic->linesize[i];
- }
-
- x265pic.pts = pic->pts;
- x265pic.bitDepth = av_pix_fmt_desc_get(avctx->pix_fmt)->comp[0].depth;
-
- x265pic.sliceType = pic->pict_type == AV_PICTURE_TYPE_I ?
- (ctx->forced_idr ? X265_TYPE_IDR : X265_TYPE_I) :
- pic->pict_type == AV_PICTURE_TYPE_P ? X265_TYPE_P :
- pic->pict_type == AV_PICTURE_TYPE_B ? X265_TYPE_B :
- X265_TYPE_AUTO;
-
- ret = libx265_encode_set_roi(ctx, pic, &x265pic);
- if (ret < 0)
- return ret;
-
- rd_idx = rd_get(ctx);
- if (rd_idx < 0) {
- free_picture(ctx, &x265pic);
- return rd_idx;
- }
- rd = &ctx->rd[rd_idx];
-
- rd->duration = pic->duration;
-#if FF_API_REORDERED_OPAQUE
-FF_DISABLE_DEPRECATION_WARNINGS
- rd->reordered_opaque = pic->reordered_opaque;
-FF_ENABLE_DEPRECATION_WARNINGS
-#endif
- if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
- rd->frame_opaque = pic->opaque;
- ret = av_buffer_replace(&rd->frame_opaque_ref, pic->opaque_ref);
- if (ret < 0) {
- rd_release(ctx, rd_idx);
- free_picture(ctx, &x265pic);
- return ret;
- }
- }
-
- x265pic.userData = (void*)(intptr_t)(rd_idx + 1);
-
- if (ctx->a53_cc) {
- void *sei_data;
- size_t sei_size;
-
- ret = ff_alloc_a53_sei(pic, 0, &sei_data, &sei_size);
- if (ret < 0) {
- av_log(ctx, AV_LOG_ERROR, "Not enough memory for closed captions, skipping\n");
- } else if (sei_data) {
- void *tmp;
- x265_sei_payload *sei_payload;
-
- tmp = av_fast_realloc(ctx->sei_data,
- &ctx->sei_data_size,
- (sei->numPayloads + 1) * sizeof(*sei_payload));
- if (!tmp) {
- av_free(sei_data);
- free_picture(ctx, &x265pic);
- return AVERROR(ENOMEM);
- }
- ctx->sei_data = tmp;
- sei->payloads = ctx->sei_data;
- sei_payload = &sei->payloads[sei->numPayloads];
- sei_payload->payload = sei_data;
- sei_payload->payloadSize = sei_size;
- sei_payload->payloadType = SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35;
- sei->numPayloads++;
- }
- }
-
- if (ctx->udu_sei) {
- for (i = 0; i < pic->nb_side_data; i++) {
- AVFrameSideData *side_data = pic->side_data[i];
- void *tmp;
- x265_sei_payload *sei_payload;
-
- if (side_data->type != AV_FRAME_DATA_SEI_UNREGISTERED)
- continue;
-
- tmp = av_fast_realloc(ctx->sei_data,
- &ctx->sei_data_size,
- (sei->numPayloads + 1) * sizeof(*sei_payload));
- if (!tmp) {
- free_picture(ctx, &x265pic);
- return AVERROR(ENOMEM);
- }
- ctx->sei_data = tmp;
- sei->payloads = ctx->sei_data;
- sei_payload = &sei->payloads[sei->numPayloads];
- sei_payload->payload = av_memdup(side_data->data, side_data->size);
- if (!sei_payload->payload) {
- free_picture(ctx, &x265pic);
- return AVERROR(ENOMEM);
- }
- sei_payload->payloadSize = side_data->size;
- /* Equal to libx265 USER_DATA_UNREGISTERED */
- sei_payload->payloadType = SEI_TYPE_USER_DATA_UNREGISTERED;
- sei->numPayloads++;
- }
- }
- }
-
- ret = ctx->api->encoder_encode(ctx->encoder, &nal, &nnal,
- pic ? &x265pic : NULL, &x265pic_out);
-
- for (i = 0; i < sei->numPayloads; i++)
- av_free(sei->payloads[i].payload);
- av_freep(&x265pic.quantOffsets);
-
- if (ret < 0)
- return AVERROR_EXTERNAL;
-
- if (!nnal)
- return 0;
-
- for (i = 0; i < nnal; i++)
- payload += nal[i].sizeBytes;
-
- ret = ff_get_encode_buffer(avctx, pkt, payload, 0);
- if (ret < 0) {
- av_log(avctx, AV_LOG_ERROR, "Error getting output packet.\n");
- return ret;
- }
- dst = pkt->data;
-
- for (i = 0; i < nnal; i++) {
- memcpy(dst, nal[i].payload, nal[i].sizeBytes);
- dst += nal[i].sizeBytes;
-
- if (is_keyframe(nal[i].type))
- pkt->flags |= AV_PKT_FLAG_KEY;
- }
-
- pkt->pts = x265pic_out.pts;
- pkt->dts = x265pic_out.dts;
-
- switch (x265pic_out.sliceType) {
- case X265_TYPE_IDR:
- case X265_TYPE_I:
- pict_type = AV_PICTURE_TYPE_I;
- break;
- case X265_TYPE_P:
- pict_type = AV_PICTURE_TYPE_P;
- break;
- case X265_TYPE_B:
- case X265_TYPE_BREF:
- pict_type = AV_PICTURE_TYPE_B;
- break;
- default:
- av_log(avctx, AV_LOG_ERROR, "Unknown picture type encountered.\n");
- return AVERROR_EXTERNAL;
- }
-
-#if X265_BUILD >= 130
- if (x265pic_out.sliceType == X265_TYPE_B)
-#else
- if (x265pic_out.frameData.sliceType == 'b')
-#endif
- pkt->flags |= AV_PKT_FLAG_DISPOSABLE;
-
- ff_side_data_set_encoder_stats(pkt, x265pic_out.frameData.qp * FF_QP2LAMBDA, NULL, 0, pict_type);
-
- if (x265pic_out.userData) {
- int idx = (int)(intptr_t)x265pic_out.userData - 1;
- ReorderedData *rd = &ctx->rd[idx];
-
-#if FF_API_REORDERED_OPAQUE
-FF_DISABLE_DEPRECATION_WARNINGS
- avctx->reordered_opaque = rd->reordered_opaque;
-FF_ENABLE_DEPRECATION_WARNINGS
-#endif
- pkt->duration = rd->duration;
-
- if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
- pkt->opaque = rd->frame_opaque;
- pkt->opaque_ref = rd->frame_opaque_ref;
- rd->frame_opaque_ref = NULL;
- }
-
- rd_release(ctx, idx);
- }
-#if FF_API_REORDERED_OPAQUE
- else {
-FF_DISABLE_DEPRECATION_WARNINGS
- avctx->reordered_opaque = 0;
-FF_ENABLE_DEPRECATION_WARNINGS
- }
-#endif
-
- *got_packet = 1;
- return 0;
-}
-
-static const enum AVPixelFormat x265_csp_eight[] = {
- AV_PIX_FMT_YUV420P,
- AV_PIX_FMT_YUVJ420P,
- AV_PIX_FMT_YUV422P,
- AV_PIX_FMT_YUVJ422P,
- AV_PIX_FMT_YUV444P,
- AV_PIX_FMT_YUVJ444P,
- AV_PIX_FMT_GBRP,
- AV_PIX_FMT_GRAY8,
- AV_PIX_FMT_NONE
-};
-
-static const enum AVPixelFormat x265_csp_ten[] = {
- AV_PIX_FMT_YUV420P,
- AV_PIX_FMT_YUVJ420P,
- AV_PIX_FMT_YUV422P,
- AV_PIX_FMT_YUVJ422P,
- AV_PIX_FMT_YUV444P,
- AV_PIX_FMT_YUVJ444P,
- AV_PIX_FMT_GBRP,
- AV_PIX_FMT_YUV420P10,
- AV_PIX_FMT_YUV422P10,
- AV_PIX_FMT_YUV444P10,
- AV_PIX_FMT_GBRP10,
- AV_PIX_FMT_GRAY8,
- AV_PIX_FMT_GRAY10,
- AV_PIX_FMT_NONE
-};
-
-static const enum AVPixelFormat x265_csp_twelve[] = {
- AV_PIX_FMT_YUV420P,
- AV_PIX_FMT_YUVJ420P,
- AV_PIX_FMT_YUV422P,
- AV_PIX_FMT_YUVJ422P,
- AV_PIX_FMT_YUV444P,
- AV_PIX_FMT_YUVJ444P,
- AV_PIX_FMT_GBRP,
- AV_PIX_FMT_YUV420P10,
- AV_PIX_FMT_YUV422P10,
- AV_PIX_FMT_YUV444P10,
- AV_PIX_FMT_GBRP10,
- AV_PIX_FMT_YUV420P12,
- AV_PIX_FMT_YUV422P12,
- AV_PIX_FMT_YUV444P12,
- AV_PIX_FMT_GBRP12,
- AV_PIX_FMT_GRAY8,
- AV_PIX_FMT_GRAY10,
- AV_PIX_FMT_GRAY12,
- AV_PIX_FMT_NONE
-};
-
-static av_cold void libx265_encode_init_csp(FFCodec *codec)
-{
- if (x265_api_get(12))
- codec->p.pix_fmts = x265_csp_twelve;
- else if (x265_api_get(10))
- codec->p.pix_fmts = x265_csp_ten;
- else if (x265_api_get(8))
- codec->p.pix_fmts = x265_csp_eight;
-}
-
-#define OFFSET(x) offsetof(libx265Context, x)
-#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
-static const AVOption options[] = {
- { "crf", "set the x265 crf", OFFSET(crf), AV_OPT_TYPE_FLOAT, { .dbl = -1 }, -1, FLT_MAX, VE },
- { "qp", "set the x265 qp", OFFSET(cqp), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, VE },
- { "forced-idr", "if forcing keyframes, force them as IDR frames", OFFSET(forced_idr),AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
- { "preset", "set the x265 preset", OFFSET(preset), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE },
- { "tune", "set the x265 tune parameter", OFFSET(tune), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE },
- { "profile", "set the x265 profile", OFFSET(profile), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE },
- { "udu_sei", "Use user data unregistered SEI if available", OFFSET(udu_sei), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
- { "a53cc", "Use A53 Closed Captions (if available)", OFFSET(a53_cc), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE },
- { "x265-params", "set the x265 configuration using a :-separated list of key=value parameters", OFFSET(x265_opts), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE },
- { NULL }
-};
-
-static const AVClass class = {
- .class_name = "libx265",
- .item_name = av_default_item_name,
- .option = options,
- .version = LIBAVUTIL_VERSION_INT,
-};
-
-static const FFCodecDefault x265_defaults[] = {
- { "b", "0" },
- { "bf", "-1" },
- { "g", "-1" },
- { "keyint_min", "-1" },
- { "refs", "-1" },
- { "qmin", "-1" },
- { "qmax", "-1" },
- { "qdiff", "-1" },
- { "qblur", "-1" },
- { "qcomp", "-1" },
- { "i_qfactor", "-1" },
- { "b_qfactor", "-1" },
- { NULL },
-};
-
-FFCodec ff_libx265_encoder = {
- .p.name = "libx265",
- CODEC_LONG_NAME("libx265 H.265 / HEVC"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_HEVC,
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY |
- AV_CODEC_CAP_OTHER_THREADS |
- AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
- .p.priv_class = &class,
- .p.wrapper_name = "libx265",
- .init = libx265_encode_init,
- .init_static_data = libx265_encode_init_csp,
- FF_CODEC_ENCODE_CB(libx265_encode_frame),
- .close = libx265_encode_close,
- .priv_data_size = sizeof(libx265Context),
- .defaults = x265_defaults,
- .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
- FF_CODEC_CAP_AUTO_THREADS,
-};
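For reference, the private options registered in the table above (crf, preset, tune, profile, x265-params, and so on) surface on the ffmpeg command line through the usual per-encoder option mechanism. A typical invocation might look like the following sketch; it assumes an FFmpeg build configured with --enable-libx265, and the file names are placeholders.

```sh
# -preset, -crf and -x265-params map onto the AVOption entries defined above;
# x265-params forwards raw key=value pairs to x265's param_parse(), and keyint /
# min-keyint are the same x265 parameter names the encoder wrapper sets internally.
ffmpeg -i input.mp4 -c:v libx265 -preset medium -crf 26 \
       -x265-params "keyint=240:min-keyint=24" -an output.mp4
```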
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md b/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md
deleted file mode 100644
index 215f75a1218e89d8290ebe47d33c065f9d5e8f4d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Assoluto Racing MOD APK Everything You Need to Know about the Game with Dinheiro Infinito.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Assoluto Racing Mod Apk Dinheiro Infinito: A Complete Guide
-
If you are a fan of racing games, you might have heard of Assoluto Racing, one of the most realistic and immersive racing simulators on mobile devices. But did you know that you can enjoy this game even more with the mod apk dinheiro infinito version? In this article, we will tell you everything you need to know about Assoluto Racing Mod Apk Dinheiro Infinito, including what it is, how to download and install it, why you should play it, and some tips and tricks to help you win every race. Let's get started!
-
What is Assoluto Racing?
-
Assoluto Racing is a racing game developed by Infinity Vector Ltd, a studio based in Hong Kong. The game was released in 2016 and has since gained millions of downloads and positive reviews from players and critics alike. The game aims to provide a realistic and authentic racing experience, with accurate physics, stunning graphics, and licensed cars from famous brands like Ferrari, Lamborghini, BMW, Nissan, and more. You can choose from various modes, such as career, arcade, drift, time attack, or online multiplayer, and race on different tracks around the world. You can also customize your car with different parts, colors, decals, and tuning options.
Its features include a realistic physics engine that simulates the behavior of real cars on different surfaces and conditions.
-
High-quality graphics that showcase the details of the cars, tracks, environments, and effects.
-
Licensed cars from over 20 manufacturers, each with their own specifications and performance.
-
Customizable cars with hundreds of parts, colors, decals, and tuning options.
-
Different modes to suit your preference and skill level, such as career, arcade, drift, time attack, or online multiplayer.
-
Different tracks from around the world, each with their own layout, scenery, and challenges.
-
Online leaderboards and events where you can compete with other players and win rewards.
-
-
How to download and install Assoluto Racing Mod Apk Dinheiro Infinito
-
If you want to enjoy Assoluto Racing with unlimited money and coins, you will need to download and install the mod apk dinheiro infinito version. Here are the steps to do so:
-
-
Go to [this link] and download the mod apk file. You may need to enable unknown sources in your device settings to allow the installation of third-party apps.
-
Once the download is complete, locate the file in your device storage and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game and enjoy!
-
-
Why you should play Assoluto Racing Mod Apk Dinheiro Infinito
-
There are many reasons why you should play Assoluto Racing Mod Apk Dinheiro Infinito. Here are some of them:
-
assoluto racing hack apk unlimited money
-assoluto racing mod apk download latest version
-assoluto racing mod apk android 1
-assoluto racing mod apk revdl
-assoluto racing mod apk obb
-assoluto racing mod apk free shopping
-assoluto racing mod apk all cars unlocked
-assoluto racing mod apk offline
-assoluto racing mod apk rexdl
-assoluto racing mod apk no root
-assoluto racing mod apk data
-assoluto racing mod apk pure
-assoluto racing mod apk unlimited coins
-assoluto racing mod apk unlimited gold
-assoluto racing mod apk unlimited gems
-assoluto racing mod apk unlimited nitro
-assoluto racing mod apk unlimited fuel
-assoluto racing mod apk unlimited cash
-assoluto racing mod apk unlimited tokens
-assoluto racing mod apk unlimited everything
-assoluto racing mod apk full version
-assoluto racing mod apk premium
-assoluto racing mod apk pro
-assoluto racing mod apk vip
-assoluto racing mod apk mega
-assoluto racing mod apk super
-assoluto racing mod apk ultra
-assoluto racing mod apk extreme
-assoluto racing mod apk real physics engine
-assoluto racing mod apk realistic graphics
-assoluto racing mod apk realistic driving physics
-assoluto racing mod apk realistic car sounds
-assoluto racing mod apk realistic damage system
-assoluto racing mod apk realistic weather effects
-assoluto racing mod apk realistic tracks and cars
-assoluto racing mod apk best simulation game
-assoluto racing mod apk best car game
-assoluto racing mod apk best graphics game
-assoluto racing mod apk best physics game
-assoluto racing mod apk best android game
-download assoluto racing mod apk dinheiro infinito gratis
-download assoluto racing mod apk dinheiro infinito 2023
-download assoluto racing mod apk dinheiro infinito atualizado
-download assoluto racing mod apk dinheiro infinito mediafıre
-download assoluto racing mod apk dinheiro infinito mega
-download assoluto racing mod apk dinheiro infinito google drive
-download assoluto racing mod apk dinheiro infinito zippyshare
-download assoluto racing mod apk dinheiro infinito dropbox
-download assoluto racing mod apk dinheiro infinito uptodown
-
Unlimited money and coins
-
With the mod apk dinheiro infinito version, you will have access to unlimited money and coins in the game. This means that you can buy any car you want, upgrade it to the max, and customize any car and track you want. You can download and install the mod apk dinheiro infinito version easily and enjoy the game without any limitations or restrictions. You can also follow some tips and tricks to improve your skills and win more races. Assoluto Racing Mod Apk Dinheiro Infinito is a must-try game for any racing enthusiast. Download it now and start your racing adventure!
-
FAQs
-
Here are some frequently asked questions about Assoluto Racing Mod Apk Dinheiro Infinito:
-
-
Q: Is Assoluto Racing Mod Apk Dinheiro Infinito safe to download and install?
-
A: Yes, Assoluto Racing Mod Apk Dinheiro Infinito is safe to download and install, as long as you use a trusted source like [this link]. However, you should always be careful when downloading and installing any third-party apps, as they may contain malware or viruses that can harm your device or data.
-
Q: Is Assoluto Racing Mod Apk Dinheiro Infinito compatible with my device?
-
A: Assoluto Racing Mod Apk Dinheiro Infinito is compatible with most Android devices that have Android 4.2 or higher. However, some devices may not support the game or the mod apk due to hardware or software limitations. You can check the compatibility of your device before downloading and installing the game.
-
Q: How can I update Assoluto Racing Mod Apk Dinheiro Infinito?
-
A: Assoluto Racing Mod Apk Dinheiro Infinito is updated regularly with new features, cars, tracks, events, and bug fixes. You can update the game by downloading and installing the latest mod apk file from [this link]. You may need to uninstall the previous version of the game before installing the new one.
-
Q: How can I contact the developers of Assoluto Racing?
-
A: You can contact the developers of Assoluto Racing by visiting their official website, Facebook page, Twitter account, or Instagram account. You can also send them an email at support@assolutogames.com. You can give them your feedback, suggestions, questions, or report any issues or problems you encounter in the game.
-
Q: How can I support the developers of Assoluto Racing?
-
A: You can support the developers of Assoluto Racing by rating and reviewing the game on Google Play Store, sharing it with your friends and family, and following them on their social media accounts. You can also buy some in-game items or coins with real money to support their development costs.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md b/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md
deleted file mode 100644
index 77bd595fbf7c95ba9e6d9a95fecd646b8b0d39eb..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Old Telugu Songs Free Download 2020 - Naa Songs Presents the Classic Hits of Telugu Cinema.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Old Songs Telugu Naa Songs Free Download 2020
-
If you are a fan of old Telugu songs, you might be looking for ways to download them for free. Old Telugu songs have a charm and nostalgia that is hard to resist. They are melodious, meaningful, and memorable. Whether you want to relive your childhood memories, enjoy some evergreen classics, or discover new gems, old Telugu songs are a treasure trove of music.
In this article, we will tell you why you should listen to old Telugu songs, how to download them for free, and what are the best sources for old Telugu songs free download 2020. We will also answer some frequently asked questions about old Telugu songs. So, let's get started!
-
Why listen to old Telugu songs?
-
There are many reasons why you should listen to old Telugu songs. Here are some of them:
-
-
Old Telugu songs are timeless and evergreen. They have a universal appeal that transcends generations and cultures.
-
Old Telugu songs are rich in lyrics and emotions. They convey deep and meaningful messages that touch your heart and soul.
-
Old Telugu songs are soothing and relaxing. They can help you cope with stress, anxiety, and depression. They can also uplift your mood and spirit.
-
Old Telugu songs are diverse and versatile. They cover various genres, themes, and styles. You can find old Telugu songs for every occasion, mood, and taste.
-
Old Telugu songs are nostalgic and sentimental. They can remind you of your past, your loved ones, and your happy moments.
-
-
How to download old Telugu songs for free?
-
Downloading old Telugu songs for free is not difficult if you know where to look. There are many websites and apps that offer old Telugu songs free download 2020. However, not all of them are safe, legal, and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some of them may also violate the copyrights of the original artists and producers.
-
Therefore, you should be careful and selective when choosing a source for old Telugu songs free download 2020. You should look for sources that are trustworthy, reputable, and user-friendly. You should also check the quality, quantity, and variety of the old Telugu songs available on the source. You should also read the reviews, ratings, and feedback of other users before downloading any song.
-
Best sources for old Telugu songs free download 2020
-
To help you find the best sources for old Telugu songs free download 2020, we have compiled a list of some of the most popular and recommended ones. These sources have been tested and verified by us and many other users. They offer high-quality, legal, and safe downloads of old Telugu songs. They also have a large collection of old Telugu songs from different eras, genres, artists, and movies. Here are the best sources for old Telugu songs free download 2020:
-
Gaana.com
-
Gaana.com is one of the leading music streaming platforms in India. It offers millions of songs in various languages, including Telugu. It also has a dedicated section for old Telugu songs that features hundreds of playlists and albums curated by experts and users.
-
old telugu songs mp3 download 90s & 2000s
-telugu top 100 songs 2020 free download
-golden 70s telugu songs playlist download
-old telugu hit songs free download naa songs
-2020 telugu songs mp3 download old movies
-old telugu melody songs free download naa
-new telugu songs 2020 download old singers
-old telugu devotional songs free download naa
-latest telugu songs 2020 free download old style
-old telugu folk songs free download naa
-best telugu songs 2020 download old classics
-old telugu love songs free download naa
-new telugu movie songs 2020 download old remixes
-old telugu sad songs free download naa
-top telugu songs 2020 free download old versions
-old telugu duet songs free download naa
-latest telugu movie songs 2020 download old hits
-old telugu wedding songs free download naa
-best telugu movie songs 2020 download old melodies
-old telugu patriotic songs free download naa
-new telugu video songs 2020 download old quality
-old telugu comedy songs free download naa
-latest telugu video songs 2020 free download old format
-old telugu dance songs free download naa
-best telugu video songs 2020 download old scenes
-old telugu birthday songs free download naa
-new telugu audio songs 2020 free download old albums
-old telugu lullaby songs free download naa
-latest telugu audio songs 2020 download old lyrics
-old telugu rap songs free download naa
-best telugu audio songs 2020 free download old tunes
-old telugu ghazal songs free download naa
-new telugu mp3 songs 2020 free download old collection
-old telugu qawwali songs free download naa
-latest telugu mp3 songs 2020 download old singers
-old telugu rock songs free download naa
-best telugu mp3 songs 2020 free download old music
-old telugu pop songs free download naa
-new telugu hd video songs 2020 download old movies
-old telugu jazz songs free download naa
-latest telugu hd video songs 2020 free download old quality
-old telugu reggae songs free download naa
-best telugu hd video songs 2020 download old clips
-old telugu metal songs free download naa
-new telugu hd audio songs 2020 free download old soundtracks
-old telugu country songs free download naa
-latest telugu hd audio songs 2020 download old beats
-old telugu disco songs free download naa
-
Features of Gaana.com
-
-
Gaana.com offers unlimited online streaming and offline downloads of old Telugu songs for free.
-
Gaana.com has a user-friendly interface and a powerful search function that helps you find your favorite old Telugu songs easily.
How to download old Telugu songs from Gaana.com?
-
To download old Telugu songs from Gaana.com, you need to follow these simple steps:
-
-
Download and install the Gaana app on your device from the Google Play Store or the App Store.
-
Sign up or log in with your email, phone number, or social media account.
-
Go to the old Telugu songs section and browse through the playlists and albums.
-
Select the song you want to download and tap on the download icon.
-
Choose the quality of the download and wait for it to finish.
-
Enjoy listening to your downloaded old Telugu song offline.
-
-
Wynk Music
-
Wynk Music is another popular music streaming platform in India. It offers over 6 million songs in various languages, including Telugu. It also has a special category for old Telugu songs that showcases the best of retro music from Tollywood.
-
Features of Wynk Music
-
-
Wynk Music offers unlimited online streaming and offline downloads of old Telugu songs for free for Airtel users. For non-Airtel users, it offers a subscription plan that starts from Rs. 49 per month.
-
Wynk Music has a user-friendly interface and a smart search function that helps you find your favorite old Telugu songs easily.
-
Wynk Music has a personalized recommendation system that suggests you old Telugu songs based on your listening history and preferences.
-
Wynk Music has a social feature that allows you to share your old Telugu songs with your friends and family via WhatsApp, Facebook, Twitter, and other platforms.
-
-
How to download old Telugu songs from Wynk Music?
-
To download old Telugu songs from Wynk Music, you need to follow these simple steps:
-
-
Download and install the Wynk Music app on your device from the Google Play Store or the App Store.
-
Sign up or log in with your email, phone number, or social media account.
-
Go to the old Telugu songs category and browse through the songs and albums.
-
Select the song you want to download and tap on the download icon.
-
Choose the quality of the download and wait for it to finish.
-
Enjoy listening to your downloaded old Telugu song offline.
-
-
Other options for old Telugu songs free download 2020
-
Besides Gaana.com and Wynk Music, there are some other options for old Telugu songs free download 2020. Some of them are:
-
-
Naa Songs: Naa Songs is a website that offers a huge collection of old and new Telugu songs for free download. You can find old Telugu songs from various movies, singers, composers, and genres. You can also request for any song that is not available on the website. To download old Telugu songs from Naa Songs, you just need to visit the website, search for the song, and click on the download link.
-
Saavn: Saavn is another music streaming platform that offers over 50 million songs in various languages, including Telugu. It also has a section for old Telugu songs that features some of the most popular and classic hits from Tollywood. To download old Telugu songs from Saavn, you need to subscribe to its premium plan that costs Rs. 99 per month. You can then access unlimited downloads of old Telugu songs in high quality.
-
Hungama: Hungama is a digital entertainment platform that offers music, movies, videos, and games in various languages, including Telugu. It also has a library of old Telugu songs that spans across different eras, genres, artists, and movies. To download old Telugu songs from Hungama, you need to buy coins or subscribe to its pro plan that costs Rs. 99 per month. You can then redeem your coins or use your pro plan to download old Telugu songs in high quality.
-
-
Conclusion
-
In conclusion, old Telugu songs are a great way to enjoy some of the best music from Tollywood. They have a charm and nostalgia that is hard to resist. They are also easy to download for free from various sources such as Gaana.com, Wynk Music, Naa Songs, Saavn, and Hungama. However, you should be careful and selective when choosing a source for old Telugu songs free download 2020. You should look for sources that are trustworthy, reputable, and user-friendly. You should also check the quality, quantity, and variety of the old Telugu songs available on the source. You should also read the reviews, ratings, and feedback of other users before downloading any song.
-
We hope this article has helped you find the best sources for old Telugu songs free download 2020. If you have any questions or suggestions, please feel free to leave a comment below. Happy listening!
-
FAQs
-
Here are some of the most frequently asked questions about old Telugu songs free download 2020:
-
-
What are some of the best old Telugu songs?
-
Some of the best old Telugu songs are:
-
-
Neele Gagan Ke Tale from Hamraaz (1967)
-
Prema Nagarilo from Prema Nagar (1971)
-
Chukkalle Thochave from Nireekshana (1982)
-
Abbanee Tiyyani from Jagadeka Veerudu Athiloka Sundari (1990)
-
Priyathama Neevachata Kusalama from Guna (1991)
-
-
How can I listen to old Telugu songs online?
-
You can listen to old Telugu songs online by using music streaming platforms such as Gaana.com, Wynk Music, Saavn, Hungama, Spotify, YouTube Music, JioSaavn, and Amazon Music. You can also listen to old Telugu songs online by using radio stations such as Radio Mirchi, Radio City, Red FM, Big FM, and All India Radio.
-
How can I convert old Telugu songs to MP3 format?
-
You can convert old Telugu songs to MP3 format by using online converters such as Online Audio Converter, Online Video Converter, Convertio, Zamzar, and CloudConvert. You can also convert old Telugu songs to MP3 format by using software such as VLC Media Player, iTunes, Windows Media Player, and Audacity.
-
How can I transfer old Telugu songs to my phone or computer?
-
You can transfer old Telugu songs to your phone or computer by using USB cables, Bluetooth, Wi-Fi, cloud storage services such as Google Drive, Dropbox, OneDrive, and iCloud, or file sharing apps such as SHAREit, Xender, Zapya, and AirDroid.
-
How can I make a playlist of old Telugu songs?
-
You can make a playlist of old Telugu songs by using music streaming platforms such as Gaana.com, Wynk Music, Saavn, Hungama, Spotify, YouTube Music, JioSaavn, and Amazon Music. You can also make a playlist of old Telugu songs by using music players such as VLC Media Player, iTunes, Windows Media Player, and Audacity.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md b/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md
deleted file mode 100644
index 9095c0f93345b072ea992b3bb9b237c03e3b44e4..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Shadow Fight 2 Mod APK 2.12 0 Everything You Need to Know About Max Level and Titan Mode.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Shadow Fight 2 Mod APK 2.12 0 Max Level: Everything You Need to Know
-
If you are a fan of fighting games, you might have heard of Shadow Fight 2, a popular mobile game that has millions of downloads worldwide. But did you know that there is a modified version of the game that gives you unlimited money, weapons, and access to the max level? In this article, we will tell you everything you need to know about Shadow Fight 2 Mod APK, including what it is, how to download and install it, and how to play it. Let's get started!
-
What is Shadow Fight 2?
-
Shadow Fight 2 is a mobile fighting game developed by NEKKI, a Russian game studio. The game is set in a world where shadows are the only form of existence, and you play as a nameless warrior who must fight his way through various enemies and bosses to restore his human form. The game combines elements of RPG, action, and martial arts, and has a unique art style that uses silhouettes and realistic physics.
The gameplay of Shadow Fight 2 is simple but addictive. You control your character using a virtual joystick and buttons for punching, kicking, jumping, and blocking. You can also use weapons and magic to enhance your combat skills. You can customize your character with different outfits, helmets, armor, and accessories. You can also upgrade your weapons and learn new moves as you progress through the game.
-
The game has six different modes: story, tournament, survival, duel, ascension, and underworld. In story mode, you follow the main plot and face various enemies and bosses. In tournament mode, you compete against other fighters in a series of matches. In survival mode, you fight against waves of enemies until you lose. In duel mode, you fight against random opponents with random rules. In ascension mode, you fight against special enemies with special rewards. In underworld mode, you team up with other players online and fight against powerful bosses.
-
The features of Shadow Fight 2
-
Shadow Fight 2 has many features that make it an enjoyable and challenging game. Some of the features are:
-
-
A captivating storyline that immerses you in the world of shadows.
-
A variety of weapons and equipment to choose from, such as swords, axes, nunchaku, daggers, shuriken, and more.
-
A diverse range of enemies and bosses with different fighting styles and abilities.
-
A realistic combat system that uses physics and animation.
-
Stunning graphics and sound design that create a dark and atmospheric mood.
-
A social aspect that allows you to chat with other players, join clans, and participate in raids.
-
-
What is Shadow Fight 2 Mod APK?
-
Shadow Fight 2 Mod APK is a modified version of the original game that provides some extra benefits for the players. It is not an official version of the game, but rather a fan-made one that is created by modifying the original game files. It is also not available on the Google Play Store or the App Store, but rather on third-party websites.
-
The benefits of Shadow Fight 2 Mod APK
-
Shadow Fight 2 Mod APK has some advantages over the original game that make it more appealing for some players. Some of the benefits are:
-
-
You get unlimited money and gems, which you can use to buy and upgrade weapons, equipment, and skills.
-
You get access to the max level, which is 52, and unlock all the features and modes of the game.
-
You get unlimited energy, which means you can play as long as you want without waiting for the energy bar to refill.
-
You get all the premium items and bonuses for free, such as the special edition, the raid tickets, the enchantments, and the booster packs.
-
You get to enjoy the game without any ads or interruptions.
-
-
The drawbacks of Shadow Fight 2 Mod APK
-
However, Shadow Fight 2 Mod APK also has some disadvantages that you should be aware of before downloading and installing it. Some of the drawbacks are:
-
-
You may face some compatibility issues with your device or operating system, as the mod apk may not be updated regularly or optimized for all devices.
-
You may encounter some bugs or glitches in the game, such as crashes, freezes, or errors.
-
You may risk losing your progress or data if you uninstall the mod apk or switch to the original game.
-
You may violate the terms and conditions of the original game and get banned from playing online or accessing some features.
-
You may expose your device to malware or viruses that may harm your device or steal your personal information.
-
-
How to download and install Shadow Fight 2 Mod APK?
-
If you are interested in trying out Shadow Fight 2 Mod APK, you will need to follow a few steps to download and install it on your device.
The steps to download and install Shadow Fight 2 Mod APK
-
To download and install Shadow Fight 2 Mod APK on your Android device, you can follow these steps:
-
-
-
Go to a reputable website that offers the Shadow Fight 2 Mod APK file, such as apkcombo.com or apkpure.com. You can use the web tool to generate the download link by pasting the Google Play Store URL of the original game, or you can search for the mod apk file on the website.
-
Tap the download button to start downloading the Shadow Fight 2 Mod APK file to your device. You may need to allow your browser to download unknown apps from the settings.
-
Once the download is complete, locate the Shadow Fight 2 Mod APK file on your device using a file manager app. You can use the default file manager app on your device, or you can download one from the Google Play Store, such as Cx File Explorer or File Manager.
-
Tap the Shadow Fight 2 Mod APK file to open it. You may need to enable the installation of unknown apps from the settings if you haven't done so already.
-
Follow the instructions on the screen to install the Shadow Fight 2 Mod APK on your device. You may need to grant some permissions to the app during the installation process.
-
After the installation is finished, you can launch the Shadow Fight 2 Mod APK from your app drawer or home screen. Enjoy playing the game with unlimited money, weapons, and max level!
-
-
The precautions to take before downloading and installing Shadow Fight 2 Mod APK
-
Before you download and install Shadow Fight 2 Mod APK on your device, you should take some precautions to avoid any problems or risks. Here are some of them:
-
-
Make sure you have enough storage space on your device for the Shadow Fight 2 Mod APK file and the game data. The mod apk file is about 150 MB in size, and the game data may vary depending on your device and version.
-
Make sure you have a stable internet connection for downloading and installing the Shadow Fight 2 Mod APK file. You may also need an internet connection for playing some modes of the game, such as underworld mode.
-
Make sure you have a backup of your original game data and progress before installing the Shadow Fight 2 Mod APK. You can use a cloud service or a local backup app to save your game data. You may lose your progress or data if you uninstall the mod apk or switch to the original game.
-
Make sure you download the Shadow Fight 2 Mod APK file from a reliable and trustworthy website. Avoid downloading from unknown or suspicious sources that may contain malware or viruses. You can check the reviews and ratings of the website before downloading.
-
Make sure you scan the Shadow Fight 2 Mod APK file with an antivirus app before installing it on your device. You can use a reputable antivirus app from the Google Play Store, such as Avast Mobile Security or AVG Antivirus.
-
How to play Shadow Fight 2 Mod APK?
-
Now that you have downloaded and installed Shadow Fight 2 Mod APK on your device, you may wonder how to play it and enjoy its features. Here are some tips and tricks to help you play Shadow Fight 2 Mod APK:
-
The tips and tricks to play Shadow Fight 2 Mod APK
-
Here are some tips and tricks to play Shadow Fight 2 Mod APK:
-
-
Use the unlimited money and gems wisely. You can buy and upgrade any weapon, equipment, or skill you want, but don't forget to balance your attack, defense, and speed. You can also use the gems to buy booster packs, enchantments, and raid tickets.
-
Use the max level to your advantage. You can unlock all the features and modes of the game, such as underworld mode, eclipse mode, and special edition. You can also challenge any enemy or boss without fear of losing.
-
Use the unlimited energy to practice and improve your skills. You can play as long as you want without waiting for the energy bar to refill. You can also replay any level or mode you want to earn more coins and experience.
-
Use the premium items and bonuses to enhance your gameplay. You can use the special edition to access exclusive weapons, outfits, and storylines. You can also use the raid tickets to join raids with other players online and fight against powerful bosses.
-
Use the ad-free feature to enjoy the game without interruptions. You can play the game without any ads or pop-ups that may distract you or slow down your device.
-
-
The best weapons and characters to use in Shadow Fight 2 Mod APK
-
Here are some of the best weapons and characters to use in Shadow Fight 2 Mod APK:
-
-
| Weapon | Description |
| --- | --- |
| Kusarigama | A sickle attached to a chain. It has a long range and high damage, but low speed. It is good for keeping enemies at bay and dealing critical hits. |
| Sai | A pair of daggers with forked blades. It has a medium range and high speed, but low damage. It is good for blocking attacks and stunning enemies. |
| Daisho | A katana paired with a wakizashi. It has a short range and high speed, but medium damage. It is good for slashing enemies and performing combos. |
| Composite Sword | A sword that can split into two blades. It has a medium range and high damage, but low speed. It is good for surprising enemies and dealing massive damage. |
| Magic | Various spells cast by tapping the magic button. It has a long range and high damage, but low speed. It is good for attacking enemies from afar and causing different effects. |
-
-
-
| Character | Description |
| --- | --- |
| Lynx | The first boss of the game. He is a member of the Shadow Order who uses claws as his weapon. He has high speed and stealth skills, but low defense. He can also summon his bodyguards to help him fight. |
| Hermit | The second boss of the game. He is a master of magic who uses swords as his weapon. He has high damage and magic skills, but low speed. He can also cast various spells to attack or defend himself. |
| Butcher | The third boss of the game. He is a ruthless leader of a gang who uses axes as his weapon. He has high damage and defense skills, but low speed. He can also throw his axes at enemies or smash them with his fists. |
| Wasp | The fourth boss of the game. She is a pirate queen who uses daggers as her weapon. She has high speed and agility skills, but low damage. She can also fly with her wings or summon her crew to help her fight. |
| Widow | The fifth boss of the game. She is a seductive assassin who uses fans as her weapon. She has high speed and charm skills, but low defense. She can also hypnotize enemies or poison them with her fans. |
| Shogun | The sixth boss of the game. He is a tyrant who uses katanas as his weapon. He has high damage and defense skills, but low speed. He can also summon his soldiers or use his cannon to help him fight. |
| Titan | The final boss of the game. He is a godlike being who uses a huge sword as his weapon. He has high damage, defense, and magic skills, but low speed. He can also use his power to manipulate the environment or create illusions. |
-
-
Conclusion
-
Shadow Fight 2 is a great game that offers a lot of fun and challenge for fighting game fans. However, if you want to experience the game with more features and benefits, you can try Shadow Fight 2 Mod APK, a modified version of the game that gives you unlimited money, weapons, and max level. However, you should also be careful of the drawbacks and risks of using the mod apk, such as compatibility issues, bugs, data loss, bans, and malware. Therefore, you should follow the steps and precautions we provided in this article to download and install Shadow Fight 2 Mod APK safely and enjoyably.
-
We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy fighting!
-
FAQs
-
Here are some frequently asked questions about Shadow Fight 2 Mod APK:
-
-
Is Shadow Fight 2 Mod APK safe to use?
-
Shadow Fight 2 Mod APK is not an official version of the game, but rather a fan-made one that is created by modifying the original game files. Therefore, it may not be safe to use, as it may contain malware or viruses that may harm your device or steal your personal information. You should also be careful of violating the terms and conditions of the original game and getting banned from playing online or accessing some features. Therefore, you should only download and install Shadow Fight 2 Mod APK from reputable and trustworthy websites, and scan it with an antivirus app before installing it on your device.
-
Can I play Shadow Fight 2 Mod APK online?
-
Shadow Fight 2 Mod APK allows you to play some modes of the game online, such as underworld mode and raids. However, you may not be able to play other modes online, such as tournament mode and duel mode. You may also face some problems or errors when playing online, such as connection issues, lagging, or crashing. You may also risk getting banned from playing online or accessing some features if the game detects that you are using a mod apk.
-
Can I switch between Shadow Fight 2 Mod APK and the original game?
-
You can switch between Shadow Fight 2 Mod APK and the original game by uninstalling one and installing the other. However, you may lose your progress or data if you do so, as the mod apk and the original game have different game files and save data. Therefore, you should make a backup of your original game data and progress before installing the mod apk or switching to the original game.
-
Can I update Shadow Fight 2 Mod APK?
-
You can update Shadow Fight 2 Mod APK by downloading and installing the latest version of the mod apk from the same website where you downloaded it before. However, you may not be able to update it as frequently or easily as the original game, as the mod apk may not be updated regularly or optimized for all devices. You may also lose your progress or data if you update the mod apk without making a backup.
-
Can I use Shadow Fight 2 Mod APK on iOS devices?
-
No, Shadow Fight 2 Mod APK is only compatible with Android devices. The only way to run it on an iOS device would be to jailbreak the device and use a third-party app installer, which is not recommended, as it may damage your device or void your warranty.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md b/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md
deleted file mode 100644
index 6d6e299fa02a24682d493fb05b72791126d413a0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/XAPK Downloader How to Download XAPK Files from Any Website.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
What Is an XAPK File and How Do You Install One on Android?
-
If you are an Android user, you are probably familiar with APK files, which are the standard format for installing apps on your device. But have you ever encountered an XAPK file and wondered what it is and how to install it? In this article, we will explain everything you need to know about XAPK files, how they differ from APK files, and how you can install them on your Android device.
An XAPK file is an app package in standard ZIP format that bundles all data related to an app into a single file for quick installation. Unlike an APK, an XAPK contains both the APK file and the OBB (Opaque Binary Blob) data, as well as caches, the app icon, and other miscellaneous information that the app requires to function.
-
OBB files are additional data files that contain graphics, media, or other large resources that are not included in the APK file. Some apps or games require OBB files to run properly, especially those that have high-quality graphics or large content. For example, PUBG Mobile, Call of Duty Mobile, Asphalt 9, and Genshin Impact are some popular games that use OBB files.
-
Some XAPK files are also bundles containing more than one APK file, better known as Split APKs. Split APKs are a way of distributing apps that have multiple components or modules, such as a base APK, configuration APK, language APK, etc. This allows developers to optimize their apps for different devices, screen sizes, architectures, and languages. For example, Netflix, Spotify, Facebook, and Google Play Services are some apps that use Split APKs.
-
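Because an XAPK is just a ZIP archive, you can inspect one before installing it. The short sketch below lists the APK, OBB, and manifest entries inside an XAPK using only the Python standard library; the file name is a hypothetical placeholder.

```python
# Minimal sketch: list what an XAPK bundles. An XAPK is a plain ZIP archive,
# so Python's standard zipfile module can read it directly.
import zipfile

xapk_path = "example_game.xapk"  # hypothetical file name

with zipfile.ZipFile(xapk_path) as archive:
    for name in archive.namelist():
        if name.endswith(".apk"):
            print("APK (base or split):", name)
        elif name.endswith(".obb"):
            print("OBB data file:", name)
        elif name.endswith("manifest.json"):
            print("XAPK manifest:", name)  # commonly present, describes the package
```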
-
How to Install an XAPK File on Android?
-
Using a Third-Party App Installer
-
One of the easiest ways to install an XAPK file on your Android device is to use a third-party app installer such as XAPK Installer, APKPure, or SAI. These apps can automatically detect and extract the XAPK file and install the app on your device. Here are the steps to follow:
-
-
Download and install one of the app installers from their official websites or trusted sources.
-
Download the XAPK file of the app or game you want to install from a reliable source.
-
Open the app installer and grant it the necessary permissions to access your storage.
-
Locate the XAPK file in your device's storage and tap on it.
-
Follow the instructions on the screen to install the app or game.
-
Launch the app or game and enjoy.
-
Using a File Manager and ADB
-
Another way to install an XAPK file on your Android device is to use a file manager and ADB (Android Debug Bridge). This method requires some technical skills and a computer with ADB installed. Here are the steps to follow (a scripted version of the same ADB commands is sketched after the list):
-
-
Download the XAPK file of the app or game you want to install from a reliable source.
-
Extract the XAPK file using a ZIP extractor such as WinRAR or 7-Zip. You should see an APK file and an OBB file or multiple APK files inside the extracted folder.
-
Keep the extracted APK file and the OBB file (if any) on your computer. ADB will copy them to your device in the steps below, so you do not need to transfer them manually with a USB cable or a wireless transfer app such as AirDroid or ShareIt.
-
Enable USB debugging on your device by going to Settings > Developer options. If you don't see Developer options, go to Settings > About phone and tap on Build number seven times.
-
Connect your device to your computer using a USB cable.
-
Open a command prompt or terminal window on your computer and navigate to the folder where you have ADB installed.
-
Type the following command to install the APK file: adb install -r path/to/apk/file. Replace path/to/apk/file with the actual path of the APK file on your computer.
-
If you have an OBB file, type the following command to copy it to your device: adb push path/to/obb/file /sdcard/Android/obb/package.name. Replace path/to/obb/file with the actual path of the OBB file on your computer, and package.name with the package name of the app or game. You can find the package name by looking at the APK file name or by using an app such as App Inspector.
-
If you have multiple APK files, type the following command to install them together: adb install-multiple -r path/to/apk/files. Replace path/to/apk/files with the actual paths of all the APK files on your computer, separated by spaces.
-
Disconnect your device from your computer and launch the app or game.
-
-
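Below is a minimal Python sketch of the same ADB commands, for anyone who wants to script steps 7 to 9. It assumes adb is on the PATH, USB debugging is enabled, and the XAPK has already been extracted on the computer; the file names and package name are hypothetical placeholders.

```python
# Minimal sketch: install the pieces of an extracted XAPK over ADB.
import subprocess

def adb(*args: str) -> None:
    subprocess.run(["adb", *args], check=True)

apk_files = ["base.apk", "config.arm64_v8a.apk"]  # APKs from the XAPK (hypothetical names)
obb_file = "main.1.com.example.game.obb"          # OBB file from the XAPK, or None (hypothetical)
package_name = "com.example.game"                 # hypothetical package name

if len(apk_files) == 1:
    adb("install", "-r", apk_files[0])            # single APK
else:
    adb("install-multiple", "-r", *apk_files)     # split APKs must be installed together

if obb_file:
    obb_dir = f"/sdcard/Android/obb/{package_name}"
    adb("shell", "mkdir", "-p", obb_dir)          # make sure the OBB folder exists
    adb("push", obb_file, f"{obb_dir}/")
```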
What Are the Advantages and Disadvantages of XAPK Files?
-
XAPK files have some advantages and disadvantages compared to APK files. Here are some of them:
-
-
| Advantages | Disadvantages |
| --- | --- |
| They can reduce the file size of apps or games by compressing them into a single file. | They are not supported by default by Android devices and require additional steps or tools to install them. |
| They can speed up the download process of apps or games by avoiding multiple downloads or waiting for additional data. | They may not be compatible with some devices, especially older ones, that do not support Split APKs or OBB files. |
| They can ensure that all the necessary data for apps or games are available and up-to-date, preventing errors or crashes. | They may pose security risks if downloaded from untrusted sources, as they may contain malware or viruses. |
| They can offer more flexibility and customization for developers and users, as they can choose which modules or languages to include or exclude. | They may not receive regular updates from developers or app stores, as they may not be recognized by them. |
-
-
Conclusion
-
XAPK files are a new format for installing apps or games on Android devices that contain both the APK file and the OBB file or multiple APK files. They offer some benefits such as smaller file size, faster download speed, and more options for developers and users. However, they also have some drawbacks such as lack of support, compatibility issues, security risks, and update problems. Therefore, before you download and install an XAPK file, make sure you know what it is, how it works, and how to do it safely and correctly. We hope this article has helped you understand more about XAPK files and how to install them on your Android device.
-
Frequently Asked Questions
-
Here are some common questions and answers about XAPK files:
-
-
What is the difference between XAPK and APK? An XAPK file is a ZIP archive that contains an APK file and an OBB file or multiple APK files. An APK file is a single file that is the standard format for installing apps on Android devices. An OBB file is an additional data file that contains graphics, media, or other large resources that are not included in the APK file.
-
How do I open an XAPK file on my PC? An XAPK file is a ZIP archive, so you can open it with any ZIP extractor such as WinRAR or 7-Zip. You can then view or extract the contents of the XAPK file, such as the APK file and the OBB file or multiple APK files.
-
Can I convert an XAPK file to an APK file? Yes, you can convert an XAPK file to an APK file by extracting the APK file from the XAPK file using a ZIP extractor. However, this may not work for all apps or games, especially those that require OBB files or Split APKs to run properly. You may also lose some features or functionality of the app or game by doing so.
-
How do I update an XAPK file? Updating an XAPK file depends on where you downloaded it from. If you downloaded it from a third-party app installer such as APKPure or SAI, you can check for updates within the app installer and download the latest version of the XAPK file. If you downloaded it from another source, you may have to manually check for updates and download the new XAPK file from the same source.
-
Are XAPK files safe? XAPK files are not inherently unsafe, but they may pose security risks if downloaded from untrusted sources. Some XAPK files may contain malware or viruses that can harm your device or steal your data. Therefore, you should always download XAPK files from reputable sources and scan them with a reliable antivirus software before installing them.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congxin95/BMTools-demo/tool_server.py b/spaces/congxin95/BMTools-demo/tool_server.py
deleted file mode 100644
index 19c81fa5947b53683f5ab2d93a601be0d42cfc3b..0000000000000000000000000000000000000000
--- a/spaces/congxin95/BMTools-demo/tool_server.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import sys
-sys.path.append("BMTools/")
-
-import bmtools
-import os
-
-def run_tool_server():
- def load_weather_tool():
- WEATHER_API_KEYS = os.environ.get('WEATHER_API_KEYS', None)
- if not WEATHER_API_KEYS:
- return "WEATHER_API_KEYS not provided, please register one from https://www.weatherapi.com/ and add it to environment variables."
- server.load_tool("weather", {"subscription_key": WEATHER_API_KEYS})
-
- # def load_database_tool():
- # server.load_tool("database")
-
- # def load_db_diag_tool():
- # server.load_tool("db_diag")
-
- def load_chemical_prop_tool():
- server.load_tool("chemical-prop")
-
- def load_douban_tool():
- server.load_tool("douban-film")
-
- def load_wikipedia_tool():
- server.load_tool("wikipedia")
-
- # def load_wikidata_tool():
- # server.load_tool("wikidata")
-
- def load_wolframalpha_tool():
- WOLFRAMALPH_APP_ID = os.environ.get("WOLFRAMALPH_APP_ID", None)
- if not WOLFRAMALPH_APP_ID:
- return "WOLFRAMALPH_APP_ID not provided, please register one from https://products.wolframalpha.com/api/ and add it to environment variables."
- server.load_tool("wolframalpha", {"subscription_key": WOLFRAMALPH_APP_ID})
-
- def load_bing_search_tool():
- BING_SUBSCRIPT_KEY = os.environ.get('BING_SUBSCRIPT_KEY', None)
- if not BING_SUBSCRIPT_KEY:
- return "Bing search key not provided, please register one from https://www.microsoft.com/en-us/bing/apis/bing-web-search-api and add it to environment variables."
- server.load_tool("bing_search", {"subscription_key": BING_SUBSCRIPT_KEY})
-
- def load_office_ppt_tool():
- server.load_tool("office-ppt")
-
- def load_alpha_vantage_tool():
- ALPHA_VANTAGE_KEY = os.environ.get('ALPHA_VANTAGE_KEY', None)
- if not ALPHA_VANTAGE_KEY:
- return "Stock key not provided, please register one from https://www.alphavantage.co/support/#api-key and add it to environment variables."
- server.load_tool("stock", {"subscription_key": ALPHA_VANTAGE_KEY})
-
- def load_map_tool():
- BING_MAP_KEY = os.environ.get('BING_MAP_KEY', None)
- if not BING_MAP_KEY:
- return "Bing map key not provided, please register one from https://www.bingmapsportal.com/ and add it to environment variables."
- server.load_tool("bing_map", {"subscription_key": BING_MAP_KEY})
-
- # baidu map tool
- # BAIDU_SECRET_KEY = os.environ.get('BAIDU_SECRET_KEY', None)
- # BAIDU_MAP_KEY = os.environ.get('BAIDU_MAP_KEY', None)
- # if not BAIDU_SECRET_KEY or not BAIDU_MAP_KEY:
- # raise RuntimeError("Baidu map key not provided, please register one from https://lbsyun.baidu.com/apiconsole/key and add it to environment variables.")
- # server.load_tool("baidu_map", {"subscription_key": BAIDU_MAP_KEY, "baidu_secret_key": BAIDU_SECRET_KEY})
-
- def load_rapidapi_tool():
- RAPIDAPI_KEY = os.environ.get('RAPIDAPI_KEY', None)
- if not RAPIDAPI_KEY:
- return "RAPIDAPI_KEY not provided, please register one from https://rapidapi.com/ and add it to environment variables."
- server.load_tool("zillow", {"subscription_key": RAPIDAPI_KEY})
- server.load_tool("airbnb", {"subscription_key": RAPIDAPI_KEY})
- server.load_tool("job_search", {"subscription_key": RAPIDAPI_KEY})
-
- # def load_nllb_translation_tool():
- # server.load_tool("nllb-translation")
-
- # def load_baidu_translation_tool():
- # server.load_tool("baidu-translation")
-
- def load_tutorial_tool():
- server.load_tool("tutorial")
-
- def load_file_operation_tool():
- server.load_tool("file_operation")
-
- def load_meta_analysis_tool():
- server.load_tool("meta_analysis")
-
- def load_code_interpreter_tool():
- server.load_tool("code_interpreter")
-
- def load_arxiv_tool():
- server.load_tool("arxiv")
-
- def load_google_places_tool():
- GPLACES_API_KEY = os.environ.get('GPLACES_API_KEY', '')
- if not GPLACES_API_KEY:
- return "GPLACES_API_KEY not provided, please register one from https://developers.google.com/maps/documentation/elevation/get-api-key and add it to environment variables."
- server.load_tool("google_places", {"subscription_key": GPLACES_API_KEY})
-
- def load_google_serper_tool():
- SERPER_API_KEY = os.environ.get('SERPER_API_KEY', None)
- if not SERPER_API_KEY:
- return "SERPER_API_KEY not provided, please register one from https://serper.dev and add it to environment variables."
- server.load_tool("google_serper", {"subscription_key": SERPER_API_KEY})
- server.load_tool("google_scholar", {"subscription_key": SERPER_API_KEY})
-
- def load_python_tool():
- server.load_tool("python")
-
- def load_sceneXplain_tool():
- SCENEX_API_KEY = os.environ.get('SCENEX_API_KEY', None)
- if not SCENEX_API_KEY:
- return "SCENEX_API_KEY is not provided. Please sign up for a free account at https://scenex.jina.ai/, create a new API key, and add it to environment variables."
- server.load_tool("sceneXplain", {"subscription_key": SCENEX_API_KEY})
-
- def load_shell_tool():
- server.load_tool("shell")
-
- def load_image_generation_tool():
- STEAMSHIP_API_KEY = os.environ.get('STEAMSHIP_API_KEY', None)
- if not STEAMSHIP_API_KEY:
- return "STEAMSHIP_API_KEY is not provided. Please sign up for a free account at https://steamship.com/account/api, create a new API key, and add it to environment variables."
- server.load_tool("image_generation")
-
- def load_hugging_tools():
- HUGGINGFACE_API_KEY = os.environ.get('HUGGINGFACE_API_KEY', None)
- if not HUGGINGFACE_API_KEY:
- return "Huggingface api key not provided, please register one from https://huggingface.co/ and add it to environment variables."
- server.load_tool("hugging_tools")
-
- def load_gradio_tools():
- server.load_tool("gradio_tools")
-
- server = bmtools.ToolServer()
- print(server.list_tools())
-
- # tool_choice = input("Enter 'ALL' to load all tools, or enter the specific tools you want to load (comma-separated): ")
-
- load_weather_tool()
- # load_database_tool()
- # load_db_diag_tool()
- load_chemical_prop_tool()
- load_douban_tool()
- load_wikipedia_tool()
- # load_wikidata_tool()
- load_wolframalpha_tool()
- load_bing_search_tool()
- load_office_ppt_tool()
- load_alpha_vantage_tool()
- load_map_tool()
- load_rapidapi_tool()
- # load_nllb_translation_tool()
- # load_baidu_translation_tool()
- load_tutorial_tool()
- load_file_operation_tool()
- load_meta_analysis_tool()
- load_code_interpreter_tool()
- load_arxiv_tool()
- load_google_places_tool()
- load_google_serper_tool()
- load_python_tool()
- load_sceneXplain_tool()
- load_shell_tool()
- load_image_generation_tool()
- # load_hugging_tools()
- # load_gradio_tools()
-
- server.serve()
-
-if __name__ == "__main__":
- run_tool_server()
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py
deleted file mode 100644
index 051984d9ea6e04e834f6fae3daf7d8317c2f0819..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/transformer_decoder/position_encoding.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/position_encoding.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-"""
-Various positional encodings for the transformer.
-"""
-import math
-
-import torch
-from torch import nn
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, x, mask=None):
- if mask is None:
- mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
- if self.normalize:
- eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
-
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
- def __repr__(self, _repr_indent=4):
- head = "Positional encoding " + self.__class__.__name__
- body = [
- "num_pos_feats: {}".format(self.num_pos_feats),
- "temperature: {}".format(self.temperature),
- "normalize: {}".format(self.normalize),
- "scale: {}".format(self.scale),
- ]
- # _repr_indent = 4
- lines = [head] + [" " * _repr_indent + line for line in body]
- return "\n".join(lines)
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py
deleted file mode 100644
index 413f9693bd4548342280e329c9128c1a52cea920..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/vit.py
+++ /dev/null
@@ -1,221 +0,0 @@
-import torch
-import torch.nn as nn
-import timm
-import types
-import math
-import torch.nn.functional as F
-
-from .utils import (activations, forward_adapted_unflatten, get_activation, get_readout_oper,
- make_backbone_default, Transpose)
-
-
-def forward_vit(pretrained, x):
- return forward_adapted_unflatten(pretrained, x, "forward_flex")
-
-
-def _resize_pos_embed(self, posemb, gs_h, gs_w):
- posemb_tok, posemb_grid = (
- posemb[:, : self.start_index],
- posemb[0, self.start_index:],
- )
-
- gs_old = int(math.sqrt(len(posemb_grid)))
-
- posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
- posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear")
- posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
-
- posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
-
- return posemb
-
-
-def forward_flex(self, x):
- b, c, h, w = x.shape
-
- pos_embed = self._resize_pos_embed(
- self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]
- )
-
- B = x.shape[0]
-
- if hasattr(self.patch_embed, "backbone"):
- x = self.patch_embed.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
-
- x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
-
- if getattr(self, "dist_token", None) is not None:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- dist_token = self.dist_token.expand(B, -1, -1)
- x = torch.cat((cls_tokens, dist_token, x), dim=1)
- else:
- if self.no_embed_class:
- x = x + pos_embed
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- if not self.no_embed_class:
- x = x + pos_embed
- x = self.pos_drop(x)
-
- for blk in self.blocks:
- x = blk(x)
-
- x = self.norm(x)
-
- return x
-
-
-def _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
- start_index_readout=1,
-):
- pretrained = make_backbone_default(model, features, size, hooks, vit_features, use_readout, start_index,
- start_index_readout)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_large_patch16_384", pretrained=pretrained)
-
-    hooks = [5, 11, 17, 23] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_base_patch16_384", pretrained=pretrained)
-
-    hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=[0, 1, 8, 11],
- vit_features=768,
- patch_size=[16, 16],
- number_stages=2,
- use_vit_only=False,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
-
- used_number_stages = 0 if use_vit_only else number_stages
- for s in range(used_number_stages):
- pretrained.model.patch_embed.backbone.stages[s].register_forward_hook(
- get_activation(str(s + 1))
- )
- for s in range(used_number_stages, 4):
- pretrained.model.blocks[hooks[s]].register_forward_hook(get_activation(str(s + 1)))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- for s in range(used_number_stages):
- value = nn.Sequential(nn.Identity(), nn.Identity(), nn.Identity())
- exec(f"pretrained.act_postprocess{s + 1}=value")
- for s in range(used_number_stages, 4):
- if s < number_stages:
- final_layer = nn.ConvTranspose2d(
- in_channels=features[s],
- out_channels=features[s],
- kernel_size=4 // (2 ** s),
- stride=4 // (2 ** s),
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- )
- elif s > number_stages:
- final_layer = nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- )
- else:
- final_layer = None
-
- layers = [
- readout_oper[s],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[s],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- ]
- if final_layer is not None:
- layers.append(final_layer)
-
- value = nn.Sequential(*layers)
- exec(f"pretrained.act_postprocess{s + 1}=value")
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = patch_size
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitb_rn50_384(
- pretrained, use_readout="ignore", hooks=None, use_vit_only=False
-):
- model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained)
-
-    hooks = [0, 1, 8, 11] if hooks is None else hooks
- return _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py
deleted file mode 100644
index 37e7922f1f63284e356dcc45a5f979f9c105f25e..0000000000000000000000000000000000000000
--- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/glint360k_r50.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md b/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md
deleted file mode 100644
index b707360889900de393c3e498614bb5eb8ed1b415..0000000000000000000000000000000000000000
--- a/spaces/dafqi/indo_twitter_sentiment_app/sentence_bert/README.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-pipeline_tag: sentence-similarity
-tags:
-- sentence-transformers
-- feature-extraction
-- sentence-similarity
-- transformers
-
----
-
-# indo-sentence-bert-base
-
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-
-
-## Usage (Sentence-Transformers)
-
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["Ibukota Perancis adalah Paris",
- "Menara Eifel terletak di Paris, Perancis",
- "Pizza adalah makanan khas Italia",
-             "Saya kuliah di Carnegie Mellon University"]
-
-model = SentenceTransformer('firqaaa/indo-sentence-bert-base')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
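As a small follow-on sketch (not part of the original model card), the resulting embeddings can be compared with the cosine-similarity helper bundled with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('firqaaa/indo-sentence-bert-base')
embeddings = model.encode(["Ibukota Perancis adalah Paris",
                           "Menara Eifel terletak di Paris, Perancis"],
                          convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```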
-
-
-
-## Usage (HuggingFace Transformers)
-Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
-
-```python
-from transformers import AutoTokenizer, AutoModel
-import torch
-
-
-#Mean Pooling - Take attention mask into account for correct averaging
-def mean_pooling(model_output, attention_mask):
- token_embeddings = model_output[0] #First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
-# Sentences we want sentence embeddings for
-sentences = ["Ibukota Perancis adalah Paris",
- "Menara Eifel terletak di Paris, Perancis",
- "Pizza adalah makanan khas Italia",
-             "Saya kuliah di Carnegie Mellon University"]
-
-
-# Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('firqaaa/indo-sentence-bert-base')
-model = AutoModel.from_pretrained('firqaaa/indo-sentence-bert-base')
-
-# Tokenize sentences
-encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
-# Compute token embeddings
-with torch.no_grad():
- model_output = model(**encoded_input)
-
-# Perform pooling. In this case, mean pooling.
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
-print("Sentence embeddings:")
-print(sentence_embeddings)
-```
-
-
-
-## Evaluation Results
-
-
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
-## Training
-The model was trained with the parameters:
-
-**DataLoader**:
-
-`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 19644 with parameters:
-```
-{'batch_size': 16}
-```
-
-**Loss**:
-
-`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
- ```
- {'scale': 20.0, 'similarity_fct': 'cos_sim'}
- ```
-
-Parameters of the fit()-Method:
-```
-{
- "epochs": 5,
- "evaluation_steps": 0,
- "evaluator": "NoneType",
- "max_grad_norm": 1,
- "optimizer_class": "",
- "optimizer_params": {
- "lr": 2e-05
- },
- "scheduler": "WarmupLinear",
- "steps_per_epoch": null,
- "warmup_steps": 9930,
- "weight_decay": 0.01
-}
-```
-
-
-## Full Model Architecture
-```
-SentenceTransformer(
- (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
- (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
-)
-```
-
-## Citing & Authors
-
-
\ No newline at end of file
diff --git a/spaces/davidanthony-ai/DIGITALIXSA/README.md b/spaces/davidanthony-ai/DIGITALIXSA/README.md
deleted file mode 100644
index b62f34f6a4a9d45d5e0aabb490b0b4bf4b4ba987..0000000000000000000000000000000000000000
--- a/spaces/davidanthony-ai/DIGITALIXSA/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Transcription Whisper Ditalixsa
-emoji: 🌖
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.32.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/davidpiscasio/unpaired-img2img/options/__init__.py b/spaces/davidpiscasio/unpaired-img2img/options/__init__.py
deleted file mode 100644
index e7eedebe54aa70169fd25951b3034d819e396c90..0000000000000000000000000000000000000000
--- a/spaces/davidpiscasio/unpaired-img2img/options/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""This package options includes option modules: training options, test options, and basic options (used in both training and test)."""
diff --git a/spaces/davidpiscasio/unpaired-img2img/util/util.py b/spaces/davidpiscasio/unpaired-img2img/util/util.py
deleted file mode 100644
index b050c13e1d6d0f197af356b099b9c11c0714522c..0000000000000000000000000000000000000000
--- a/spaces/davidpiscasio/unpaired-img2img/util/util.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""This module contains simple helper functions """
-from __future__ import print_function
-import torch
-import numpy as np
-from PIL import Image
-import os
-
-
-def tensor2im(input_image, imtype=np.uint8):
-    """Converts a Tensor array into a numpy image array.
-
- Parameters:
- input_image (tensor) -- the input image tensor array
- imtype (type) -- the desired type of the converted numpy array
- """
- if not isinstance(input_image, np.ndarray):
- if isinstance(input_image, torch.Tensor): # get the data from a variable
- image_tensor = input_image.data
- else:
- return input_image
- image_numpy = image_tensor[0].cpu().float().numpy() # convert it into a numpy array
- if image_numpy.shape[0] == 1: # grayscale to RGB
- image_numpy = np.tile(image_numpy, (3, 1, 1))
-        image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0  # post-processing: transpose and scaling
- else: # if it is a numpy array, do nothing
- image_numpy = input_image
- return image_numpy.astype(imtype)
-
-
-def diagnose_network(net, name='network'):
- """Calculate and print the mean of average absolute(gradients)
-
- Parameters:
- net (torch network) -- Torch network
- name (str) -- the name of the network
- """
- mean = 0.0
- count = 0
- for param in net.parameters():
- if param.grad is not None:
- mean += torch.mean(torch.abs(param.grad.data))
- count += 1
- if count > 0:
- mean = mean / count
- print(name)
- print(mean)
-
-
-def save_image(image_numpy, image_path, aspect_ratio=1.0):
- """Save a numpy image to the disk
-
- Parameters:
- image_numpy (numpy array) -- input numpy array
- image_path (str) -- the path of the image
- """
-
- image_pil = Image.fromarray(image_numpy)
- h, w, _ = image_numpy.shape
-
- if aspect_ratio > 1.0:
- image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)
- if aspect_ratio < 1.0:
- image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)
- image_pil.save(image_path)
-
-
-def print_numpy(x, val=True, shp=False):
- """Print the mean, min, max, median, std, and size of a numpy array
-
- Parameters:
- val (bool) -- if print the values of the numpy array
- shp (bool) -- if print the shape of the numpy array
- """
- x = x.astype(np.float64)
- if shp:
- print('shape,', x.shape)
- if val:
- x = x.flatten()
- print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (
- np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))
-
-
-def mkdirs(paths):
- """create empty directories if they don't exist
-
- Parameters:
- paths (str list) -- a list of directory paths
- """
- if isinstance(paths, list) and not isinstance(paths, str):
- for path in paths:
- mkdir(path)
- else:
- mkdir(paths)
-
-
-def mkdir(path):
- """create a single empty directory if it didn't exist
-
- Parameters:
- path (str) -- a single directory path
- """
- if not os.path.exists(path):
- os.makedirs(path)
diff --git a/spaces/dawood/Kanye-AI/inference_main.py b/spaces/dawood/Kanye-AI/inference_main.py
deleted file mode 100644
index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000
--- a/spaces/dawood/Kanye-AI/inference_main.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import io
-import logging
-import time
-from pathlib import Path
-
-import librosa
-import matplotlib.pyplot as plt
-import numpy as np
-import soundfile
-
-from inference import infer_tool
-from inference import slicer
-from inference.infer_tool import Svc
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-chunks_dict = infer_tool.read_temp("inference/chunks_temp.json")
-
-
-
-def main():
- import argparse
-
- parser = argparse.ArgumentParser(description='sovits4 inference')
-
-    # Required settings
-    parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='model path')
-    parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='config file path')
-    parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='list of wav file names, placed under the raw folder')
-    parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='pitch shift, positive or negative (in semitones)')
-    parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='target speaker name(s) for synthesis')
-
-    # Optional settings
-    parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,
-                        help='automatically predict pitch during voice conversion; do not enable this when converting singing, or the result will be badly off-key')
-    parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='cluster model path; can be left as-is if no cluster model was trained')
-    parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='proportion of the clustering scheme, range 0-1; set to 0 if no cluster model was trained')
-
-    # Settings that usually do not need to be changed
-    parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='default -40; use -30 for noisy audio, or -50 to keep breaths in dry vocals')
-    parser.add_argument('-d', '--device', type=str, default=None, help='inference device; None selects CPU or GPU automatically')
-    parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='noise scale; affects articulation and audio quality, somewhat unpredictable')
-    parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='seconds of padding around the inference audio; artifacts at the start and end disappear after padding with a short silence')
-    parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='audio output format')
-
- args = parser.parse_args()
-
- svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path)
- infer_tool.mkdir(["raw", "results"])
- clean_names = args.clean_names
- trans = args.trans
- spk_list = args.spk_list
- slice_db = args.slice_db
- wav_format = args.wav_format
- auto_predict_f0 = args.auto_predict_f0
- cluster_infer_ratio = args.cluster_infer_ratio
- noice_scale = args.noice_scale
- pad_seconds = args.pad_seconds
-
- infer_tool.fill_a_to_b(trans, clean_names)
- for clean_name, tran in zip(clean_names, trans):
- raw_audio_path = f"raw/{clean_name}"
- if "." not in raw_audio_path:
- raw_audio_path += ".wav"
- infer_tool.format_wav(raw_audio_path)
- wav_path = Path(raw_audio_path).with_suffix('.wav')
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
-
- for spk in spk_list:
- audio = []
- for (slice_tag, data) in audio_data:
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
-                # pad both ends of the segment with silence
- pad_len = int(audio_sr * pad_seconds)
- data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])])
- length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- print('jump empty segment')
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
- cluster_infer_ratio=cluster_infer_ratio,
- auto_predict_f0=auto_predict_f0,
- noice_scale=noice_scale
- )
- _audio = out_audio.cpu().numpy()
-
- pad_len = int(svc_model.target_sample * pad_seconds)
- _audio = _audio[pad_len:-pad_len]
- audio.extend(list(_audio))
- key = "auto" if auto_predict_f0 else f"{tran}key"
- cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}"
- res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}'
- soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format)
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py
deleted file mode 100644
index 7614af4c7b325c363c0b30edfc85a478aa15f01b..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/sbixStrike.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import safeEval
-from .sbixGlyph import Glyph
-import struct
-
-sbixStrikeHeaderFormat = """
- >
- ppem: H # The PPEM for which this strike was designed (e.g., 9,
- # 12, 24)
- resolution: H # The screen resolution (in dpi) for which this strike
- # was designed (e.g., 72)
-"""
-
-sbixGlyphDataOffsetFormat = """
- >
- glyphDataOffset: L # Offset from the beginning of the strike data record
- # to data for the individual glyph
-"""
-
-sbixStrikeHeaderFormatSize = sstruct.calcsize(sbixStrikeHeaderFormat)
-sbixGlyphDataOffsetFormatSize = sstruct.calcsize(sbixGlyphDataOffsetFormat)
-
-
-class Strike(object):
- def __init__(self, rawdata=None, ppem=0, resolution=72):
- self.data = rawdata
- self.ppem = ppem
- self.resolution = resolution
- self.glyphs = {}
-
- def decompile(self, ttFont):
- if self.data is None:
- from fontTools import ttLib
-
- raise ttLib.TTLibError
- if len(self.data) < sbixStrikeHeaderFormatSize:
- from fontTools import ttLib
-
-            raise ttLib.TTLibError(
-                "Strike header too short: Expected %x, got %x."
-                % (sbixStrikeHeaderFormatSize, len(self.data))
-            )
-
- # read Strike header from raw data
- sstruct.unpack(
- sbixStrikeHeaderFormat, self.data[:sbixStrikeHeaderFormatSize], self
- )
-
- # calculate number of glyphs
- (firstGlyphDataOffset,) = struct.unpack(
- ">L",
- self.data[
- sbixStrikeHeaderFormatSize : sbixStrikeHeaderFormatSize
- + sbixGlyphDataOffsetFormatSize
- ],
- )
- self.numGlyphs = (
- firstGlyphDataOffset - sbixStrikeHeaderFormatSize
- ) // sbixGlyphDataOffsetFormatSize - 1
- # ^ -1 because there's one more offset than glyphs
-
- # build offset list for single glyph data offsets
- self.glyphDataOffsets = []
- for i in range(
- self.numGlyphs + 1
- ): # + 1 because there's one more offset than glyphs
- start = i * sbixGlyphDataOffsetFormatSize + sbixStrikeHeaderFormatSize
- (current_offset,) = struct.unpack(
- ">L", self.data[start : start + sbixGlyphDataOffsetFormatSize]
- )
- self.glyphDataOffsets.append(current_offset)
-
- # iterate through offset list and slice raw data into glyph data records
- for i in range(self.numGlyphs):
- current_glyph = Glyph(
- rawdata=self.data[
- self.glyphDataOffsets[i] : self.glyphDataOffsets[i + 1]
- ],
- gid=i,
- )
- current_glyph.decompile(ttFont)
- self.glyphs[current_glyph.glyphName] = current_glyph
- del self.glyphDataOffsets
- del self.numGlyphs
- del self.data
-
- def compile(self, ttFont):
- self.glyphDataOffsets = b""
- self.bitmapData = b""
-
- glyphOrder = ttFont.getGlyphOrder()
-
- # first glyph starts right after the header
- currentGlyphDataOffset = (
- sbixStrikeHeaderFormatSize
- + sbixGlyphDataOffsetFormatSize * (len(glyphOrder) + 1)
- )
- for glyphName in glyphOrder:
- if glyphName in self.glyphs:
- # we have glyph data for this glyph
- current_glyph = self.glyphs[glyphName]
- else:
- # must add empty glyph data record for this glyph
- current_glyph = Glyph(glyphName=glyphName)
- current_glyph.compile(ttFont)
- current_glyph.glyphDataOffset = currentGlyphDataOffset
- self.bitmapData += current_glyph.rawdata
- currentGlyphDataOffset += len(current_glyph.rawdata)
- self.glyphDataOffsets += sstruct.pack(
- sbixGlyphDataOffsetFormat, current_glyph
- )
-
- # add last "offset", really the end address of the last glyph data record
- dummy = Glyph()
- dummy.glyphDataOffset = currentGlyphDataOffset
- self.glyphDataOffsets += sstruct.pack(sbixGlyphDataOffsetFormat, dummy)
-
- # pack header
- self.data = sstruct.pack(sbixStrikeHeaderFormat, self)
- # add offsets and image data after header
- self.data += self.glyphDataOffsets + self.bitmapData
-
- def toXML(self, xmlWriter, ttFont):
- xmlWriter.begintag("strike")
- xmlWriter.newline()
- xmlWriter.simpletag("ppem", value=self.ppem)
- xmlWriter.newline()
- xmlWriter.simpletag("resolution", value=self.resolution)
- xmlWriter.newline()
- glyphOrder = ttFont.getGlyphOrder()
- for i in range(len(glyphOrder)):
- if glyphOrder[i] in self.glyphs:
- self.glyphs[glyphOrder[i]].toXML(xmlWriter, ttFont)
- # TODO: what if there are more glyph data records than (glyf table) glyphs?
- xmlWriter.endtag("strike")
- xmlWriter.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name in ["ppem", "resolution"]:
- setattr(self, name, safeEval(attrs["value"]))
- elif name == "glyph":
- if "graphicType" in attrs:
- myFormat = safeEval("'''" + attrs["graphicType"] + "'''")
- else:
- myFormat = None
- if "glyphname" in attrs:
- myGlyphName = safeEval("'''" + attrs["glyphname"] + "'''")
- elif "name" in attrs:
- myGlyphName = safeEval("'''" + attrs["name"] + "'''")
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("Glyph must have a glyph name.")
- if "originOffsetX" in attrs:
- myOffsetX = safeEval(attrs["originOffsetX"])
- else:
- myOffsetX = 0
- if "originOffsetY" in attrs:
- myOffsetY = safeEval(attrs["originOffsetY"])
- else:
- myOffsetY = 0
- current_glyph = Glyph(
- glyphName=myGlyphName,
- graphicType=myFormat,
- originOffsetX=myOffsetX,
- originOffsetY=myOffsetY,
- )
- for element in content:
- if isinstance(element, tuple):
- name, attrs, content = element
- current_glyph.fromXML(name, attrs, content, ttFont)
- current_glyph.compile(ttFont)
- self.glyphs[current_glyph.glyphName] = current_glyph
- else:
- from fontTools import ttLib
-
- raise ttLib.TTLibError("can't handle '%s' element" % name)
diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py
deleted file mode 100644
index 622c51d2e52e37d91e9551138efaac54f76fcd0d..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py
+++ /dev/null
@@ -1,927 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import logging
-import math
-import os
-import random
-from pathlib import Path
-
-import numpy as np
-import PIL
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint
-import transformers
-from accelerate import Accelerator
-from accelerate.logging import get_logger
-from accelerate.utils import ProjectConfiguration, set_seed
-from huggingface_hub import create_repo, upload_folder
-from multi_token_clip import MultiTokenCLIPTokenizer
-
-# TODO: remove and import from diffusers.utils when the new version of diffusers is released
-from packaging import version
-from PIL import Image
-from torch.utils.data import Dataset
-from torchvision import transforms
-from tqdm.auto import tqdm
-from transformers import CLIPTextModel
-
-import diffusers
-from diffusers import (
- AutoencoderKL,
- DDPMScheduler,
- DiffusionPipeline,
- DPMSolverMultistepScheduler,
- StableDiffusionPipeline,
- UNet2DConditionModel,
-)
-from diffusers.optimization import get_scheduler
-from diffusers.utils import check_min_version, is_wandb_available
-from diffusers.utils.import_utils import is_xformers_available
-
-
-if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
- PIL_INTERPOLATION = {
- "linear": PIL.Image.Resampling.BILINEAR,
- "bilinear": PIL.Image.Resampling.BILINEAR,
- "bicubic": PIL.Image.Resampling.BICUBIC,
- "lanczos": PIL.Image.Resampling.LANCZOS,
- "nearest": PIL.Image.Resampling.NEAREST,
- }
-else:
- PIL_INTERPOLATION = {
- "linear": PIL.Image.LINEAR,
- "bilinear": PIL.Image.BILINEAR,
- "bicubic": PIL.Image.BICUBIC,
- "lanczos": PIL.Image.LANCZOS,
- "nearest": PIL.Image.NEAREST,
- }
-# ------------------------------------------------------------------------------
-
-
-# Will error if the minimal version of diffusers is not installed. Remove at your own risk.
-check_min_version("0.14.0.dev0")
-
-logger = get_logger(__name__)
-
-
-def add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=1, initializer_token=None):
- """
- Add tokens to the tokenizer and set the initial value of token embeddings
- """
- tokenizer.add_placeholder_tokens(placeholder_token, num_vec_per_token=num_vec_per_token)
- text_encoder.resize_token_embeddings(len(tokenizer))
- token_embeds = text_encoder.get_input_embeddings().weight.data
- placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False)
- if initializer_token:
- token_ids = tokenizer.encode(initializer_token, add_special_tokens=False)
- for i, placeholder_token_id in enumerate(placeholder_token_ids):
- token_embeds[placeholder_token_id] = token_embeds[token_ids[i * len(token_ids) // num_vec_per_token]]
- else:
- for i, placeholder_token_id in enumerate(placeholder_token_ids):
- token_embeds[placeholder_token_id] = torch.randn_like(token_embeds[placeholder_token_id])
- return placeholder_token
-
-
-def save_progress(tokenizer, text_encoder, accelerator, save_path):
- for placeholder_token in tokenizer.token_map:
- placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False)
- learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_ids]
- if len(placeholder_token_ids) == 1:
- learned_embeds = learned_embeds[None]
- learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()}
- torch.save(learned_embeds_dict, save_path)
-
-
-def load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict):
- for placeholder_token in learned_embeds_dict:
- placeholder_embeds = learned_embeds_dict[placeholder_token]
- num_vec_per_token = placeholder_embeds.shape[0]
- placeholder_embeds = placeholder_embeds.to(dtype=text_encoder.dtype)
- add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=num_vec_per_token)
- placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False)
- token_embeds = text_encoder.get_input_embeddings().weight.data
- for i, placeholder_token_id in enumerate(placeholder_token_ids):
- token_embeds[placeholder_token_id] = placeholder_embeds[i]
-
-
-def load_multitoken_tokenizer_from_automatic(tokenizer, text_encoder, automatic_dict, placeholder_token):
- """
- Automatic1111's tokens have format
- {'string_to_token': {'*': 265}, 'string_to_param': {'*': tensor([[ 0.0833, 0.0030, 0.0057, ..., -0.0264, -0.0616, -0.0529],
- [ 0.0058, -0.0190, -0.0584, ..., -0.0025, -0.0945, -0.0490],
- [ 0.0916, 0.0025, 0.0365, ..., -0.0685, -0.0124, 0.0728],
- [ 0.0812, -0.0199, -0.0100, ..., -0.0581, -0.0780, 0.0254]],
- requires_grad=True)}, 'name': 'FloralMarble-400', 'step': 399, 'sd_checkpoint': '4bdfc29c', 'sd_checkpoint_name': 'SD2.1-768'}
- """
- learned_embeds_dict = {}
- learned_embeds_dict[placeholder_token] = automatic_dict["string_to_param"]["*"]
- load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict)
-
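
As the docstring above describes, Automatic1111 embeddings keep their vectors under `string_to_param['*']`. A hedged usage sketch — the file name and placeholder string are invented, and `tokenizer`/`text_encoder` are assumed to be the already-loaded `MultiTokenCLIPTokenizer` and `CLIPTextModel`:

```python
import torch

# Hypothetical Automatic1111 embedding file with the dict layout shown in the docstring.
automatic_dict = torch.load("FloralMarble-400.pt", map_location="cpu")
load_multitoken_tokenizer_from_automatic(
    tokenizer, text_encoder, automatic_dict, placeholder_token="<floral-marble>"
)
```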
-
-def get_mask(tokenizer, accelerator):
- # Get the mask of the weights that won't change
- mask = torch.ones(len(tokenizer)).to(accelerator.device, dtype=torch.bool)
- for placeholder_token in tokenizer.token_map:
- placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False)
- for i in range(len(placeholder_token_ids)):
- mask = mask & (torch.arange(len(tokenizer)) != placeholder_token_ids[i]).to(accelerator.device)
- return mask
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
- parser.add_argument(
- "--progressive_tokens_max_steps",
- type=int,
- default=2000,
- help="The number of steps until all tokens will be used.",
- )
- parser.add_argument(
- "--progressive_tokens",
- action="store_true",
- help="Progressively train the tokens. For example, first train for 1 token, then 2 tokens and so on.",
- )
- parser.add_argument("--vector_shuffle", action="store_true", help="Shuffling tokens durint training")
- parser.add_argument(
- "--num_vec_per_token",
- type=int,
- default=1,
- help=(
- "The number of vectors used to represent the placeholder token. The higher the number, the better the"
- " result at the cost of editability. This can be fixed by prompt editing."
- ),
- )
- parser.add_argument(
- "--save_steps",
- type=int,
- default=500,
- help="Save learned_embeds.bin every X updates steps.",
- )
- parser.add_argument(
- "--only_save_embeds",
- action="store_true",
- default=False,
- help="Save only the embeddings for the new concept.",
- )
- parser.add_argument(
- "--pretrained_model_name_or_path",
- type=str,
- default=None,
- required=True,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--revision",
- type=str,
- default=None,
- required=False,
- help="Revision of pretrained model identifier from huggingface.co/models.",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data."
- )
- parser.add_argument(
- "--placeholder_token",
- type=str,
- default=None,
- required=True,
- help="A token to use as a placeholder for the concept.",
- )
- parser.add_argument(
- "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word."
- )
- parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'")
- parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.")
- parser.add_argument(
- "--output_dir",
- type=str,
- default="text-inversion-model",
- help="The output directory where the model predictions and checkpoints will be written.",
- )
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--resolution",
- type=int,
- default=512,
- help=(
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
- " resolution"
- ),
- )
- parser.add_argument(
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution."
- )
- parser.add_argument(
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
- )
- parser.add_argument("--num_train_epochs", type=int, default=100)
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=5000,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--gradient_checkpointing",
- action="store_true",
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=1e-4,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument(
- "--scale_lr",
- action="store_true",
- default=False,
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
- )
- parser.add_argument(
- "--lr_scheduler",
- type=str,
- default="constant",
- help=(
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
- ' "constant", "constant_with_warmup"]'
- ),
- )
- parser.add_argument(
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument(
- "--dataloader_num_workers",
- type=int,
- default=0,
- help=(
- "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
- ),
- )
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
- parser.add_argument(
- "--hub_model_id",
- type=str,
- default=None,
- help="The name of the repository to keep in sync with the local `output_dir`.",
- )
- parser.add_argument(
- "--logging_dir",
- type=str,
- default="logs",
- help=(
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
- ),
- )
- parser.add_argument(
- "--mixed_precision",
- type=str,
- default="no",
- choices=["no", "fp16", "bf16"],
- help=(
- "Whether to use mixed precision. Choose"
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
- "and an Nvidia Ampere GPU."
- ),
- )
- parser.add_argument(
- "--allow_tf32",
- action="store_true",
- help=(
- "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
- " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
- ),
- )
- parser.add_argument(
- "--report_to",
- type=str,
- default="tensorboard",
- help=(
- 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
- ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
- ),
- )
- parser.add_argument(
- "--validation_prompt",
- type=str,
- default=None,
- help="A prompt that is used during validation to verify that the model is learning.",
- )
- parser.add_argument(
- "--num_validation_images",
- type=int,
- default=4,
- help="Number of images that should be generated during validation with `validation_prompt`.",
- )
- parser.add_argument(
- "--validation_epochs",
- type=int,
- default=50,
- help=(
- "Run validation every X epochs. Validation consists of running the prompt"
- " `args.validation_prompt` multiple times: `args.num_validation_images`"
- " and logging the images."
- ),
- )
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- parser.add_argument(
- "--checkpointing_steps",
- type=int,
- default=500,
- help=(
- "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
- " training using `--resume_from_checkpoint`."
- ),
- )
- parser.add_argument(
- "--checkpoints_total_limit",
- type=int,
- default=None,
- help=(
- "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`."
- " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state"
- " for more docs"
- ),
- )
- parser.add_argument(
- "--resume_from_checkpoint",
- type=str,
- default=None,
- help=(
- "Whether training should be resumed from a previous checkpoint. Use a path saved by"
- ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
- ),
- )
- parser.add_argument(
- "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
- )
-
- args = parser.parse_args()
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
- if env_local_rank != -1 and env_local_rank != args.local_rank:
- args.local_rank = env_local_rank
-
- if args.train_data_dir is None:
- raise ValueError("You must specify a train data directory.")
-
- return args
-
-
-imagenet_templates_small = [
- "a photo of a {}",
- "a rendering of a {}",
- "a cropped photo of the {}",
- "the photo of a {}",
- "a photo of a clean {}",
- "a photo of a dirty {}",
- "a dark photo of the {}",
- "a photo of my {}",
- "a photo of the cool {}",
- "a close-up photo of a {}",
- "a bright photo of the {}",
- "a cropped photo of a {}",
- "a photo of the {}",
- "a good photo of the {}",
- "a photo of one {}",
- "a close-up photo of the {}",
- "a rendition of the {}",
- "a photo of the clean {}",
- "a rendition of a {}",
- "a photo of a nice {}",
- "a good photo of a {}",
- "a photo of the nice {}",
- "a photo of the small {}",
- "a photo of the weird {}",
- "a photo of the large {}",
- "a photo of a cool {}",
- "a photo of a small {}",
-]
-
-imagenet_style_templates_small = [
- "a painting in the style of {}",
- "a rendering in the style of {}",
- "a cropped painting in the style of {}",
- "the painting in the style of {}",
- "a clean painting in the style of {}",
- "a dirty painting in the style of {}",
- "a dark painting in the style of {}",
- "a picture in the style of {}",
- "a cool painting in the style of {}",
- "a close-up painting in the style of {}",
- "a bright painting in the style of {}",
- "a cropped painting in the style of {}",
- "a good painting in the style of {}",
- "a close-up painting in the style of {}",
- "a rendition in the style of {}",
- "a nice painting in the style of {}",
- "a small painting in the style of {}",
- "a weird painting in the style of {}",
- "a large painting in the style of {}",
-]
-
-
-class TextualInversionDataset(Dataset):
- def __init__(
- self,
- data_root,
- tokenizer,
- learnable_property="object", # [object, style]
- size=512,
- repeats=100,
- interpolation="bicubic",
- flip_p=0.5,
- set="train",
- placeholder_token="*",
- center_crop=False,
- vector_shuffle=False,
- progressive_tokens=False,
- ):
- self.data_root = data_root
- self.tokenizer = tokenizer
- self.learnable_property = learnable_property
- self.size = size
- self.placeholder_token = placeholder_token
- self.center_crop = center_crop
- self.flip_p = flip_p
- self.vector_shuffle = vector_shuffle
- self.progressive_tokens = progressive_tokens
- self.prop_tokens_to_load = 0
-
- self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)]
-
- self.num_images = len(self.image_paths)
- self._length = self.num_images
-
- if set == "train":
- self._length = self.num_images * repeats
-
- self.interpolation = {
- "linear": PIL_INTERPOLATION["linear"],
- "bilinear": PIL_INTERPOLATION["bilinear"],
- "bicubic": PIL_INTERPOLATION["bicubic"],
- "lanczos": PIL_INTERPOLATION["lanczos"],
- }[interpolation]
-
- self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small
- self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p)
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, i):
- example = {}
- image = Image.open(self.image_paths[i % self.num_images])
-
- if not image.mode == "RGB":
- image = image.convert("RGB")
-
- placeholder_string = self.placeholder_token
- text = random.choice(self.templates).format(placeholder_string)
-
- example["input_ids"] = self.tokenizer.encode(
- text,
- padding="max_length",
- truncation=True,
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- vector_shuffle=self.vector_shuffle,
- prop_tokens_to_load=self.prop_tokens_to_load if self.progressive_tokens else 1.0,
- )[0]
-
- # default to score-sde preprocessing
- img = np.array(image).astype(np.uint8)
-
- if self.center_crop:
- crop = min(img.shape[0], img.shape[1])
- (
- h,
- w,
- ) = (
- img.shape[0],
- img.shape[1],
- )
- img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2]
-
- image = Image.fromarray(img)
- image = image.resize((self.size, self.size), resample=self.interpolation)
-
- image = self.flip_transform(image)
- image = np.array(image).astype(np.uint8)
- image = (image / 127.5 - 1.0).astype(np.float32)
-
- example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1)
- return example
-
-
-def main():
- args = parse_args()
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
-
- accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit)
-
- accelerator = Accelerator(
- gradient_accumulation_steps=args.gradient_accumulation_steps,
- mixed_precision=args.mixed_precision,
- log_with=args.report_to,
- logging_dir=logging_dir,
- project_config=accelerator_project_config,
- )
-
- if args.report_to == "wandb":
- if not is_wandb_available():
- raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
- import wandb
-
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state, main_process_only=False)
- if accelerator.is_local_main_process:
- transformers.utils.logging.set_verbosity_warning()
- diffusers.utils.logging.set_verbosity_info()
- else:
- transformers.utils.logging.set_verbosity_error()
- diffusers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
-
- if args.push_to_hub:
- repo_id = create_repo(
- repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
- ).repo_id
-
- # Load tokenizer
- if args.tokenizer_name:
- tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.tokenizer_name)
- elif args.pretrained_model_name_or_path:
- tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
- # Load scheduler and models
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
- text_encoder = CLIPTextModel.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
- )
- vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
- unet = UNet2DConditionModel.from_pretrained(
- args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
- )
- if is_xformers_available():
- try:
- unet.enable_xformers_memory_efficient_attention()
- except Exception as e:
- logger.warning(
- "Could not enable memory efficient attention. Make sure xformers is installed"
- f" correctly and a GPU is available: {e}"
- )
- add_tokens(tokenizer, text_encoder, args.placeholder_token, args.num_vec_per_token, args.initializer_token)
-
- # Freeze vae and unet
- vae.requires_grad_(False)
- unet.requires_grad_(False)
- # Freeze all parameters except for the token embeddings in text encoder
- text_encoder.text_model.encoder.requires_grad_(False)
- text_encoder.text_model.final_layer_norm.requires_grad_(False)
- text_encoder.text_model.embeddings.position_embedding.requires_grad_(False)
-
- if args.gradient_checkpointing:
- # Keep unet in train mode if we are using gradient checkpointing to save memory.
-        # Dropout is 0 in these models, so it does not matter whether they are in eval or train mode.
- unet.train()
- text_encoder.gradient_checkpointing_enable()
- unet.enable_gradient_checkpointing()
-
- if args.enable_xformers_memory_efficient_attention:
- if is_xformers_available():
- import xformers
-
- xformers_version = version.parse(xformers.__version__)
- if xformers_version == version.parse("0.0.16"):
- logger.warn(
- "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
- )
- unet.enable_xformers_memory_efficient_attention()
- else:
- raise ValueError("xformers is not available. Make sure it is installed correctly")
-
- # Enable TF32 for faster training on Ampere GPUs,
- # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
- if args.allow_tf32:
- torch.backends.cuda.matmul.allow_tf32 = True
-
- if args.scale_lr:
- args.learning_rate = (
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
- )
-
- # Initialize the optimizer
- optimizer = torch.optim.AdamW(
- text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings
- lr=args.learning_rate,
- betas=(args.adam_beta1, args.adam_beta2),
- weight_decay=args.adam_weight_decay,
- eps=args.adam_epsilon,
- )
-
- # Dataset and DataLoaders creation:
- train_dataset = TextualInversionDataset(
- data_root=args.train_data_dir,
- tokenizer=tokenizer,
- size=args.resolution,
- placeholder_token=args.placeholder_token,
- repeats=args.repeats,
- learnable_property=args.learnable_property,
- center_crop=args.center_crop,
- set="train",
- )
- train_dataloader = torch.utils.data.DataLoader(
- train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
- )
-
- # Scheduler and math around the number of training steps.
- overrode_max_train_steps = False
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- overrode_max_train_steps = True
-
- lr_scheduler = get_scheduler(
- args.lr_scheduler,
- optimizer=optimizer,
- num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
- num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
- )
-
- # Prepare everything with our `accelerator`.
- text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
- text_encoder, optimizer, train_dataloader, lr_scheduler
- )
-
- # For mixed precision training we cast the unet and vae weights to half-precision
- # as these models are only used for inference, keeping weights in full precision is not required.
- weight_dtype = torch.float32
- if accelerator.mixed_precision == "fp16":
- weight_dtype = torch.float16
- elif accelerator.mixed_precision == "bf16":
- weight_dtype = torch.bfloat16
-
- # Move vae and unet to device and cast to weight_dtype
- unet.to(accelerator.device, dtype=weight_dtype)
- vae.to(accelerator.device, dtype=weight_dtype)
-
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if overrode_max_train_steps:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- # Afterwards we recalculate our number of training epochs
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- # We need to initialize the trackers we use, and also store our configuration.
-    # The trackers initialize automatically on the main process.
- if accelerator.is_main_process:
- accelerator.init_trackers("textual_inversion", config=vars(args))
-
- # Train!
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- global_step = 0
- first_epoch = 0
-
- # Potentially load in the weights and states from a previous save
- if args.resume_from_checkpoint:
- if args.resume_from_checkpoint != "latest":
- path = os.path.basename(args.resume_from_checkpoint)
- else:
- # Get the most recent checkpoint
- dirs = os.listdir(args.output_dir)
- dirs = [d for d in dirs if d.startswith("checkpoint")]
- dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
- path = dirs[-1] if len(dirs) > 0 else None
-
- if path is None:
- accelerator.print(
- f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
- )
- args.resume_from_checkpoint = None
- else:
- accelerator.print(f"Resuming from checkpoint {path}")
- accelerator.load_state(os.path.join(args.output_dir, path))
- global_step = int(path.split("-")[1])
-
- resume_global_step = global_step * args.gradient_accumulation_steps
- first_epoch = global_step // num_update_steps_per_epoch
- resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
-
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
- progress_bar.set_description("Steps")
-
- # keep original embeddings as reference
- orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone()
-
- for epoch in range(first_epoch, args.num_train_epochs):
- text_encoder.train()
- for step, batch in enumerate(train_dataloader):
- # Skip steps until we reach the resumed step
- if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
- if step % args.gradient_accumulation_steps == 0:
- progress_bar.update(1)
- continue
- if args.progressive_tokens:
- train_dataset.prop_tokens_to_load = float(global_step) / args.progressive_tokens_max_steps
-
- with accelerator.accumulate(text_encoder):
- # Convert images to latent space
- latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach()
- latents = latents * vae.config.scaling_factor
-
- # Sample noise that we'll add to the latents
- noise = torch.randn_like(latents)
- bsz = latents.shape[0]
- # Sample a random timestep for each image
- timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
- timesteps = timesteps.long()
-
- # Add noise to the latents according to the noise magnitude at each timestep
- # (this is the forward diffusion process)
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
- # Get the text embedding for conditioning
- encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype)
-
- # Predict the noise residual
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
- # Get the target for loss depending on the prediction type
- if noise_scheduler.config.prediction_type == "epsilon":
- target = noise
- elif noise_scheduler.config.prediction_type == "v_prediction":
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
- else:
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
- loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
- accelerator.backward(loss)
-
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
-
- # Let's make sure we don't update any embedding weights besides the newly added token
- index_no_updates = get_mask(tokenizer, accelerator)
- with torch.no_grad():
- accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[
- index_no_updates
- ] = orig_embeds_params[index_no_updates]
-
- # Checks if the accelerator has performed an optimization step behind the scenes
- if accelerator.sync_gradients:
- progress_bar.update(1)
- global_step += 1
- if global_step % args.save_steps == 0:
- save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin")
- save_progress(tokenizer, text_encoder, accelerator, save_path)
-
- if global_step % args.checkpointing_steps == 0:
- if accelerator.is_main_process:
- save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
- accelerator.save_state(save_path)
- logger.info(f"Saved state to {save_path}")
-
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
- progress_bar.set_postfix(**logs)
- accelerator.log(logs, step=global_step)
-
- if global_step >= args.max_train_steps:
- break
-
- if accelerator.is_main_process and args.validation_prompt is not None and epoch % args.validation_epochs == 0:
- logger.info(
- f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
- f" {args.validation_prompt}."
- )
- # create pipeline (note: unet and vae are loaded again in float32)
- pipeline = DiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- text_encoder=accelerator.unwrap_model(text_encoder),
- tokenizer=tokenizer,
- unet=unet,
- vae=vae,
- revision=args.revision,
- torch_dtype=weight_dtype,
- )
- pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
- pipeline = pipeline.to(accelerator.device)
- pipeline.set_progress_bar_config(disable=True)
-
- # run inference
- generator = (
- None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed)
- )
- images = []
- for _ in range(args.num_validation_images):
- with torch.autocast("cuda"):
- image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0]
- images.append(image)
-
- for tracker in accelerator.trackers:
- if tracker.name == "tensorboard":
- np_images = np.stack([np.asarray(img) for img in images])
- tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
- if tracker.name == "wandb":
- tracker.log(
- {
- "validation": [
- wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
- for i, image in enumerate(images)
- ]
- }
- )
-
- del pipeline
- torch.cuda.empty_cache()
-
- # Create the pipeline using using the trained modules and save it.
- accelerator.wait_for_everyone()
- if accelerator.is_main_process:
- if args.push_to_hub and args.only_save_embeds:
- logger.warn("Enabling full model saving because --push_to_hub=True was specified.")
- save_full_model = True
- else:
- save_full_model = not args.only_save_embeds
- if save_full_model:
- pipeline = StableDiffusionPipeline.from_pretrained(
- args.pretrained_model_name_or_path,
- text_encoder=accelerator.unwrap_model(text_encoder),
- vae=vae,
- unet=unet,
- tokenizer=tokenizer,
- )
- pipeline.save_pretrained(args.output_dir)
- # Save the newly trained embeddings
- save_path = os.path.join(args.output_dir, "learned_embeds.bin")
- save_progress(tokenizer, text_encoder, accelerator, save_path)
-
- if args.push_to_hub:
- upload_folder(
- repo_id=repo_id,
- folder_path=args.output_dir,
- commit_message="End of training",
- ignore_patterns=["step_*", "epoch_*"],
- )
-
- accelerator.end_training()
-
-
-if __name__ == "__main__":
- main()
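
The restoration step near the end of the training loop above is the heart of this script: the full embedding matrix sits in the optimizer, but after every step all rows except the newly added placeholder rows are copied back from a snapshot, so only the new token vectors actually move. A minimal, model-free sketch of that masking pattern (toy sizes and a stand-in loss, not the real training objective):

```python
import torch

vocab, dim, new_ids = 10, 4, [8, 9]            # toy vocab; rows 8 and 9 are the new tokens
emb = torch.nn.Embedding(vocab, dim)
orig = emb.weight.data.clone()                 # snapshot taken before training starts

frozen = torch.ones(vocab, dtype=torch.bool)   # True = "must not change"
frozen[new_ids] = False

opt = torch.optim.AdamW(emb.parameters(), lr=1e-2)
for _ in range(3):
    loss = emb(torch.tensor(new_ids)).pow(2).sum()   # stand-in loss touching the new rows
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                      # restore every frozen row after the step
        emb.weight[frozen] = orig[frozen]

assert torch.equal(emb.weight[:8], orig[:8])   # frozen rows are unchanged
```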
diff --git a/spaces/deep-learning-analytics/Title_Generation/app.py b/spaces/deep-learning-analytics/Title_Generation/app.py
deleted file mode 100644
index 999738d046758d39a4d8b0796545c56cf23d2fb1..0000000000000000000000000000000000000000
--- a/spaces/deep-learning-analytics/Title_Generation/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import torch
-
-import streamlit as st
-
-st.title("Title Generation with Transformers")
-st.write("")
-st.write("Input your text here!")
-
-
-default_value = "Ukrainian counterattacks: Kharkiv's regional administrator said a number of villages around Malaya Rogan were retaken by Ukrainian forces. Video verified by CNN shows Ukrainian troops in control of Vilkhivka, one of the settlements roughly 20 miles from the Russian border. The success of Ukrainian forces around Kharkiv has been mirrored further north, near the city of Sumy, where Ukrainian troops have liberated a number of settlements, according to videos geolocated and verified by CNN. A separate counterattack in the south also led to the liberation of two villages from Russian forces northwest of Mariupol, according to the Zaporizhzhia regional military administration."
-
-sent = st.text_area("Text", default_value, height = 50)
-
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/automatic-title-generation")
-
-model = AutoModelForSeq2SeqLM.from_pretrained("deep-learning-analytics/automatic-title-generation")
-
-
-def tokenize_data(text):
- # Tokenize the review body
- input_ = str(text) + ' '
- max_len = 120
- # tokenize inputs
- tokenized_inputs = tokenizer(input_, padding='max_length', truncation=True, max_length=max_len, return_attention_mask=True, return_tensors='pt')
-
- inputs={"input_ids": tokenized_inputs['input_ids'],
- "attention_mask": tokenized_inputs['attention_mask']}
- return inputs
-
-def generate_answers(text):
- inputs = tokenize_data(text)
- results= model.generate(input_ids= inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=True,
- max_length=120,
- top_k=120,
- top_p=0.98,
- early_stopping=True,
- num_return_sequences=1)
- answer = tokenizer.decode(results[0], skip_special_tokens=True)
- return answer
-
-answer = generate_answers(sent)
-
-st.write(answer)
-
-#iface = gr.Interface(fn=generate_answers,inputs=[gr.inputs.Textbox(lines=20)], outputs=["text"])
-#iface.launch(inline=False, share=True)
\ No newline at end of file
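
The Streamlit app above tokenizes the article and calls `model.generate` by hand; roughly the same headline generation can be sketched with the `transformers` pipeline API. The checkpoint name is the one the app loads, while the sampling settings below are illustrative:

```python
from transformers import pipeline

titler = pipeline(
    "text2text-generation",
    model="deep-learning-analytics/automatic-title-generation",
)

article = "Ukrainian forces retook several villages around Kharkiv, officials said on Monday."
out = titler(article, max_length=120, do_sample=True, top_k=120, top_p=0.98)
print(out[0]["generated_text"])
```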
diff --git a/spaces/deepghs/auto_image_censor/detect.py b/spaces/deepghs/auto_image_censor/detect.py
deleted file mode 100644
index 35cb76cbf7ff2feacbef5102e46f60644d2942d0..0000000000000000000000000000000000000000
--- a/spaces/deepghs/auto_image_censor/detect.py
+++ /dev/null
@@ -1,81 +0,0 @@
-from typing import List, Union, Dict
-
-import numpy as np
-from PIL import Image
-
-from nudenet import preprocess_image, open_model_session
-
-DEFAULT_DETECT_CLASSES = [
- 'EXPOSED_BREAST_F',
- 'EXPOSED_GENITALIA_F',
- # 'EXPOSED_GENITALIA_M',
-]
-
-
-def detect(image: Image.Image, threshold: float = 0.7, clss: List[str] = None, model: str = 'default'):
- # if mode == "fast":
- # image, scale = preprocess_image(image, min_side=480, max_side=800)
- # if not min_prob:
- # min_prob = 0.5
- # else:
- # image, scale = preprocess_image(image)
- # if not min_prob:
- # min_prob = 0.6
- image, scale = preprocess_image(image)
- clss = clss if clss is not None else DEFAULT_DETECT_CLASSES
-
- onnx_model, classes = open_model_session(model)
- outputs = onnx_model.run(
- [s_i.name for s_i in onnx_model.get_outputs()],
- {onnx_model.get_inputs()[0].name: np.expand_dims(image, axis=0)},
- )
-
- labels = [op for op in outputs if op.dtype == "int32"][0]
- scores = [op for op in outputs if isinstance(op[0][0], np.float32)][0]
- boxes = [op for op in outputs if isinstance(op[0][0], np.ndarray)][0]
-
- boxes /= scale
- processed_boxes = []
- for box, score, label in zip(boxes[0], scores[0], labels[0]):
- box = box.astype(int).tolist()
- label = classes[label]
- if score >= threshold and label in clss:
- processed_boxes.append(
- {"box": [int(c) for c in box], "score": float(score), "label": label}
- )
-
- return processed_boxes
-
-
-_DEFAULT_ZOOMS = {
- 'EXPOSED_BREAST_F': 0.7,
- 'EXPOSED_GENITALIA_F': 0.75,
- 'EXPOSED_GENITALIA_M': 0.85,
-}
-
-
-def detect_areas(image: Image.Image, threshold: float = 0.7,
- classes: List[str] = None, model: str = 'default',
- zoom: Union[Dict[str, float], float] = None):
- zoom = zoom or _DEFAULT_ZOOMS
- detection = detect(image, threshold, classes, model)
- result = []
- for item in detection:
- box = item['box']
- score = item['score']
- label = item['label']
-
- if isinstance(zoom, (int, float)):
- current_zoom = zoom
- elif isinstance(zoom, dict):
- current_zoom = zoom.get(label, 1.0)
- else:
- raise TypeError(f'Invalid zoom type - {zoom!r}.')
-
- positions = np.asarray(box).reshape(2, 2).astype(np.float32)
- center = positions.mean(axis=0)
- new_box = ((positions - center) * current_zoom + center).reshape(-1).astype(np.int32).tolist()
-
- result.append({'box': new_box, 'score': score, 'label': label})
-
- return result
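
For reference, the `detect_areas` helper above shrinks each detection box towards its own center by a per-label zoom factor. A small standalone example of that center-scaling arithmetic, with made-up coordinates:

```python
import numpy as np

def zoom_box(box, zoom):
    """Scale an [x0, y0, x1, y1] box about its center; zoom < 1 shrinks it."""
    pts = np.asarray(box, dtype=np.float32).reshape(2, 2)
    center = pts.mean(axis=0)
    return ((pts - center) * zoom + center).reshape(-1).astype(int).tolist()

print(zoom_box([100, 100, 200, 300], 0.75))   # -> [112, 125, 187, 275]
```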
diff --git a/spaces/deepghs/gchar_online/README.md b/spaces/deepghs/gchar_online/README.md
deleted file mode 100644
index a55d852011767f60c7c016658a1b82c6b494fbc2..0000000000000000000000000000000000000000
--- a/spaces/deepghs/gchar_online/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gchar Online
-emoji: 💻
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/deprem-ml/intent-leaderboard-v13/app.py b/spaces/deprem-ml/intent-leaderboard-v13/app.py
deleted file mode 100644
index 4932fc2f6f1e4dde5a139bcdf3ba33a681afcbf1..0000000000000000000000000000000000000000
--- a/spaces/deprem-ml/intent-leaderboard-v13/app.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import requests
-import json
-import pandas as pd
-from tqdm.auto import tqdm
-
-import streamlit as st
-from huggingface_hub import HfApi, hf_hub_download
-from huggingface_hub.repocard import metadata_load
-import streamlit.components.v1 as components
-
-
-def make_clickable_model(model_name):
- link = "https://huggingface.co/" + model_name
-    return f'<a target="_blank" href="{link}">{model_name}</a>'
-
-# Make user clickable link
-def make_clickable_user(user_id):
- link = "https://huggingface.co/" + user_id
-    return f'<a target="_blank" href="{link}">{user_id}</a>'
-
-def get_model_ids():
- api = HfApi()
- models = api.list_models(filter="deprem-clf-v13")
- model_ids = [x.modelId for x in models]
- return model_ids
-
-def get_metadata(model_id):
- try:
- readme_path = hf_hub_download(model_id, filename="README.md")
- return metadata_load(readme_path)
- except requests.exceptions.HTTPError:
- # 404 README.md not found
- return None
-
-def parse_metrics_accuracy(meta):
- if "model-index" not in meta:
- return None
- result = meta["model-index"][0]["results"]
- metrics = result[0]["metrics"]
- accuracy = metrics[2]["value"]
- print("Accuracy", accuracy)
- return accuracy
-
-def parse_metrics_recall(meta):
- if "model-index" not in meta:
- return None
- result = meta["model-index"][0]["results"]
- metrics = result[0]["metrics"]
- recall = metrics[0]["value"]
- print("Recall", recall)
- return recall
-
-def parse_metrics_f1(meta):
- if "model-index" not in meta:
- return None
- result = meta["model-index"][0]["results"]
- metrics = result[0]["metrics"]
- f1 = metrics[1]["value"]
- print("F1-score", f1)
- return f1
-
-#@st.cache(ttl=600)
-def get_data():
- data = []
- model_ids = get_model_ids()
- for model_id in tqdm(model_ids):
- meta = get_metadata(model_id)
- if meta is None:
- continue
- user_id = model_id.split('/')[0]
- row = {}
- row["User"] = user_id
- row["Model"] = model_id
- recall = parse_metrics_recall(meta)
- row["Recall"] = recall
- f1 = parse_metrics_f1(meta)
- row["F1-Score"] = f1
- data.append(row)
- return pd.DataFrame.from_records(data)
-
-dataframe = get_data()
-dataframe = dataframe.fillna("")
-
-st.markdown("# Deprem Niyet Analizi için Lider Tablosu (Dataset v13)")
-
-st.markdown("Bu lider tablosu modellerimizi versiyonladıktan sonra hangi modeli üretime çıkarmamız gerektiğinin takibini yapmak için kullanılır.")
-st.markdown(
- "Model card'da metadata'da tags kısmına deprem-clf-v13 yazarsanız modeliniz buraya otomatik eklenir."
-)
-st.markdown(
- "Burada recall, f1-score ve accuracy'nin macro average'ına bakıyoruz. Model card'ın metadata kısmında bu üç veriyi log'lamanız yeterli. Burada classification report çıkarırken **probability'lerin** confidence threshold'u baz alınır."
-)
-st.markdown("Örnek metadata için [bu model card'ın metadata kısmını](https://huggingface.co/deprem-ml/deprem-roberta-intent/blob/main/README.md) kopyalayıp yapıştırarak kendi metriklerinize göre ayarlayabilirsiniz.")
-st.markdown(
- "Modelin üstüne tıklayıp model card'a gidebilirsiniz."
-)
-
-
-
-# turn the model ids into clickable links
-dataframe["User"] = dataframe["User"].apply(make_clickable_user)
-dataframe["Model"] = dataframe["Model"].apply(make_clickable_model)
-dataframe = dataframe.sort_values(by=['F1-Score'], ascending=False)
-table_html = dataframe.to_html(escape=False, index=False)
-table_html = table_html.replace("
", '
') # left-align the headers
-st.write(table_html, unsafe_allow_html=True)
\ No newline at end of file
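
The metric parsers in the leaderboard app above assume a `model-index` block whose first result lists recall, F1 and accuracy at indices 0, 1 and 2. A hypothetical Python view of the metadata shape they expect (the repo name and values are placeholders, and a real model card also needs task and dataset fields):

```python
meta = {
    "model-index": [{
        "name": "your-org/deprem-intent-model",          # placeholder repo name
        "results": [{
            "metrics": [
                {"type": "recall",   "value": 0.91},      # index 0 -> Recall column
                {"type": "f1",       "value": 0.89},      # index 1 -> F1-Score column
                {"type": "accuracy", "value": 0.90},      # index 2 -> parsed but not shown
            ],
        }],
    }],
}

assert meta["model-index"][0]["results"][0]["metrics"][1]["value"] == 0.89
```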
diff --git a/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md b/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md
deleted file mode 100644
index 1a33c0597ff4f51db55e4be7d1f7093555cb7587..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Autodata340free __EXCLUSIVE__onlinedownload.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
http://vjverigty.com/thread/autodata340freeonlinedownload http://vjverigty.com/thread/veratilejetsound-l1n-extended-keys-by-vj-vault-mp3-download-files-10007. https://coub.com/stories/4480017-autodata340freeonlinedownload-keygen-program-4. Credited for their assistance with this amazing tool was Greywulfd in the Autodata340freeonlinedownload 4107622e5 fxprice alicewoolf SergioNeoxup. thank you for this amazing tool as it is everything all of you said it was. I can't wait to get the keygen 64 weeks after it was released and it's only a couple of days ago. KEF. Crayfishmusic (Crayfishmusic) Tabourez 23.12.2017, 00:48. https://www.thehollowsong.com/201/autodata340freeonlinedownload.html.
-
autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb Results 1 - 17 of 17. Autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.
Autodata340freeonlinedownload-cershan. fatmyll. 3 Apr 22 at 10:44 PM. gianschm 63b95dad73 https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb Results 1 - 17 of 17. Autodata340freeonlinedownload-cershan https://marketplace.visualstudio.com/itemsitemName=SeRaFiM1.REPACK-Download-Crack-Pes-2013-Pc-Tpb
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md b/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md
deleted file mode 100644
index 36c27c1e885938ca365b20102ccdc4b037cb4043..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Heroes Of Might And Magic 5 Collectors Edition TOP Crack.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
the basic strategy here is to collect your heroes, use them to fight your enemies and eventually assemble a deck of monsters and heroes that can be used in a single battle. the two forces of good and evil fight, and the battle resets every time. it's a pretty neat idea. sadly, the right touch of heroism is in short supply. if you read the developers' diaries, it appears that the team were trying to address the entire mythology from multiple stories and different eras, and were trying to convey this in the game mechanics. unfortunately the result is a series of unconscious references that don't really establish the various settings of the different worlds in the game, instead they slowly level up as the game progresses.
-
the graphics in heroes of might and magic remained hugely ahead of its time and the game has aged magnificently. although ultimately there were just 7 games in the series, the games are so complex that it feels like there were more than just 7 games.
-
Heroes Of Might And Magic 5 Collectors Edition Crack
my childhood heroes are proof that if you really put your mind to something, you could be a hero. with this game on my hard drive, i can continue to add more episodes from my hero's life story. it reminds me of when i was a kid and was all about roleplaying.
-
the heroes have a set of stats, and each hero has two stats which they can improve. they are not the broadest stats you'll ever see and in fact, they are pretty lame stats. the weapons you use include swords, magic items, battleaxes, javelins and axes. however you have to be a high level hero to use these items. once you enter a battle, you select a hero from your deck. at the top of the screen are your attacks. you can heal, bind status conditions, cast spells, and your class abilities.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md b/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md
deleted file mode 100644
index 20a5006cda44a288426bee680adaa5edae8f8c05..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mass Downloader 3.9.854 Setup And Key.rar BETTER.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
-Full Crack Multimanual 3.5.1 keygen, i have the key to. Full Crack ..
-
-RAR Password Recovery 1.1 RC14 crack
-
-Are you looking for a reliable solution for recovering and verifying the password of rar file archives? If so, you are in the right place! RAR Password Recovery will help you to solve this problem in no time!
-
-RAR Password Recovery is a powerful tool designed to recover and verify passwords of rar files. This RAR Password Recovery is reliable, easy to use, and free to download and use. The application is compatible with Windows XP, Vista, 7, 8, and 10.
-
-This software recovers and verifies passwords for RAR files, ZIP archives, 7-Zip archives, and other formats of data.
-
-The program has a very clean interface and requires no installations or additional downloads.
-
-Key Features
-
-The main advantage of this application is the ability to recover passwords for all supported formats.
-
-As a result, you can open archives with passwords protected by RAR, ZIP, 7-Zip, and other formats.
-
-RAR Password Recovery allows you to open archives with weak passwords that are often used by various malicious programs. It is compatible with Windows XP, Vista, 7, 8, and 10.
-
-With the help of this program you can open archives protected by RAR, ZIP, and other formats.
-
-RAR Password Recovery has a very simple interface.
-
-This application is free to download and use.
-
-The application is compatible with all the main languages of Windows.
-
-The program is compatible with 64-bit versions of Windows.
-
-This program has a minimal installation time and requires no additional downloads.
-
-This program allows you to recover passwords for all supported formats.
-
-This program allows you to open archives with weak passwords that are often used by various malicious programs.
-
-How to Crack?
-
-Open RAR Password Recovery directory. Double-click on the RAR Password Recovery executable file. Wait until the license agreement window is opened. Click on I Agree. Wait for a moment. Then click on Start. Wait until the process is completed. You can now close the program.
-
-FAQ
-
-Is RAR Password Recovery safe?
-
-RAR Password Recovery is a 100% safe and reliable solution that helps to open rar archives with weak passwords. You don’t have to worry about your data because 4fefd39f24
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md b/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md
deleted file mode 100644
index c52dd6e5c4f71502c69019887f4a39ecf060c389..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Prtg Network Monitor Crack Serial Sites TOP.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
PRTG Network Monitor Crack Serial Sites: What You Need to Know
-
-
If you are looking for a powerful and reliable tool to monitor your network and its activities, you might have come across prtg network monitor crack serial sites. These are websites that offer you a cracked version of PRTG Network Monitor, a popular network monitoring application developed by Paessler AG. But before you download and install any of these cracks, you should be aware of the risks and consequences involved.
-
-
What is PRTG Network Monitor?
-
-
PRTG Network Monitor is a comprehensive network monitoring solution that allows you to keep track of various aspects of your network, such as bandwidth usage, availability, performance, traffic, devices, applications, servers, and more. It supports multiple protocols and technologies, such as SNMP, WMI, Ping, NetFlow, sFlow, jFlow, Packet Sniffing, HTTP, SSH, SOAP, REST, SQL, and more. It also provides you with flexible alerting options, customizable dashboards and reports, and remote access via web browser or mobile app.
Why do people use prtg network monitor crack serial sites?
-
-
One reason some people turn to prtg network monitor crack serial sites is to save money. PRTG Network Monitor is not free software. It offers a 30-day trial version that allows you to monitor up to 100 sensors for free. After that, you need to purchase a license that suits your needs. The price depends on the number of sensors you want to monitor and the features you want to use. For example, a license for 500 sensors costs $1,600, while a license for unlimited sensors costs $14,500.
-
-
Another reason why some people use prtg network monitor crack serial sites is because they want to bypass the limitations of the trial version or the license they have. For instance, they might want to monitor more sensors than their license allows or use features that are not included in their license.
-
-
What are the risks and consequences of using prtg network monitor crack serial sites?
-
-
Using prtg network monitor crack serial sites is not only illegal but also risky and harmful. Here are some of the possible risks and consequences of using these cracks:
-
-
-
You might download malware or viruses that can infect your computer and compromise your network security. These malware or viruses can steal your data, damage your files, slow down your system, or even take control of your network.
-
You might expose your network to hackers or cybercriminals who can exploit the vulnerabilities of the cracked software. These hackers or cybercriminals can access your network devices, intercept your network traffic, modify your network settings, or launch attacks on your network.
-
You might violate the terms and conditions of PRTG Network Monitor and face legal actions from Paessler AG. These legal actions can include fines, lawsuits, or criminal charges.
-
You might lose the support and updates from Paessler AG that are essential for keeping your network monitoring software up to date and functional. Such support and updates include bug fixes, security patches, feature enhancements, and compatibility improvements.
-
You might miss out on the benefits and advantages of using a legitimate version of PRTG Network Monitor. These benefits and advantages can include high-quality performance, reliability, stability, scalability, usability, customization, integration, documentation, training, or customer service.
-
-
-
What are the alternatives to using prtg network monitor crack serial sites?
-
-
If you want to use PRTG Network Monitor without using prtg network monitor crack serial sites, you have two alternatives:
-
-
-
You can purchase a license that suits your needs from the official website of Paessler AG. This way, you can enjoy all the features and benefits of PRTG Network Monitor without any risks or consequences.
-
You can look for other free or open-source network monitoring tools that can meet your requirements. There are many options available online that you can compare and choose from.
-
-
-
Conclusion
-
-
PRTG Network Monitor is a powerful and reliable tool to monitor your network and its activities. However, using prtg network monitor crack serial sites to get a cracked version of this software is not a wise decision. It can expose you to various risks and consequences that can harm your computer and compromise your network security. Using a crack also violates the terms and conditions of PRTG Network Monitor and can expose you to legal action from Paessler AG. Therefore, it is better to purchase a license that suits your needs from the official website of Paessler AG or look for other free or open-source network monitoring tools that can meet your requirements.
-
How to download and install PRTG Network Monitor?
-
-
If you want to download and install PRTG Network Monitor, you should follow these steps:
-
-
-
-
Go to the official website of Paessler AG and click on the "Free Trial" button.
-
Fill out the form with your name, email address, and company name.
-
Choose the edition of PRTG Network Monitor that suits your needs. You can choose between Freeware Edition (up to 100 sensors for free), Trial Edition (unlimited sensors for 30 days), or Commercial Edition (paid license).
-
Download the setup file and run it on your computer.
-
Follow the instructions on the screen to complete the installation.
-
Launch PRTG Network Monitor and start monitoring your network.
-
-
-
What are the benefits of using PRTG Network Monitor?
-
-
Using PRTG Network Monitor has many benefits for your network and your business. Here are some of them:
-
-
-
You can monitor your network performance and availability 24/7 from anywhere.
-
You can detect and resolve network issues before they affect your users or customers.
-
You can optimize your network resources and reduce costs.
-
You can generate detailed reports and graphs to analyze your network data.
-
You can customize your network monitoring according to your preferences and needs.
-
You can integrate PRTG Network Monitor with other tools and services.
-
-
-
Why should you avoid prtg network monitor crack serial sites?
-
-
As you can see, PRTG Network Monitor is a valuable tool for your network and your business. However, you should avoid using prtg network monitor crack serial sites to get a cracked version of this software. These sites are not only illegal but also risky and harmful. They can expose you to various threats and consequences that can damage your computer and compromise your network security. Using them also violates the terms and conditions of PRTG Network Monitor and can expose you to legal action from Paessler AG. Therefore, you should avoid using prtg network monitor crack serial sites and use a legitimate version of PRTG Network Monitor instead.
-
How to use PRTG Network Monitor?
-
-
Using PRTG Network Monitor is easy and intuitive. You can use the web-based interface or the mobile app to access your network data from anywhere. You can also use the desktop client or the enterprise console to manage multiple PRTG servers. Here are some of the basic steps to use PRTG Network Monitor; a small scripted example follows the list:
-
-
-
Add devices to your network. You can use the auto-discovery feature or manually add devices by IP address or hostname.
-
Add sensors to your devices. Sensors are the basic monitoring elements that collect data from your devices. You can choose from over 250 sensor types that cover various aspects of your network.
-
Configure your sensors. You can adjust the scanning intervals, thresholds, channels, dependencies, notifications, and more for each sensor.
-
View your network data. You can use the dashboard, maps, graphs, tables, reports, and more to visualize your network data.
-
Analyze and optimize your network. You can use the alerts, logs, tickets, and more to identify and resolve network issues. You can also use the recommendations, trends, and forecasts to optimize your network resources and performance.
-
-
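If you prefer to script these checks, PRTG also exposes an HTTP API alongside the web interface. The snippet below is only a minimal sketch of listing sensors and their current status with Python: the host name, user name, and passhash are placeholders, and the available columns and authentication options depend on your PRTG version, so treat it as an illustration rather than the official procedure.
-
```python
# Minimal sketch: list sensors and their status through PRTG's HTTP API.
# The host, username, and passhash below are placeholders for illustration.
import requests

PRTG_HOST = "https://prtg.example.com"  # your PRTG server (placeholder)

params = {
    "content": "sensors",
    "columns": "objid,device,sensor,status,lastvalue",
    "count": "50",
    "username": "monitoring_user",  # placeholder account
    "passhash": "0000000000",       # placeholder passhash
}

response = requests.get(f"{PRTG_HOST}/api/table.json", params=params, timeout=30)
response.raise_for_status()

# Print one line per sensor: device, sensor name, and current status.
for sensor in response.json().get("sensors", []):
    print(f'{sensor["device"]:<25} {sensor["sensor"]:<30} {sensor["status"]}')
```
-
Scripting like this only complements the steps above; devices and sensors are still added and configured through the PRTG interface itself.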
-
What are the features of PRTG Network Monitor?
-
-
PRTG Network Monitor has many features that make it a powerful and reliable network monitoring tool. Here are some of them:
-
-
-
It supports multiple protocols and technologies, such as SNMP, WMI, Ping, NetFlow, sFlow, jFlow, Packet Sniffing, HTTP, SSH, SOAP, REST, SQL, and more.
-
It offers over 250 sensor types that cover various aspects of your network, such as bandwidth usage, availability, performance, traffic, devices, applications, servers, and more.
-
It provides flexible alerting options that notify you via email, SMS, push notification, or sound, or execute a program when a sensor reaches a defined status.
-
It allows you to customize your network monitoring according to your preferences and needs. You can create your own sensors, dashboards, maps, reports, and more.
-
It integrates with other tools and services, such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Slack, PagerDuty, ServiceNow, and more.
-
-
-
Conclusion
-
-
PRTG Network Monitor is a comprehensive network monitoring solution that allows you to keep track of various aspects of your network. However, you should avoid using prtg network monitor crack serial sites to get a cracked version of this software. These sites are not only illegal but also risky and harmful. They can expose you to various threats and consequences that can damage your computer and compromise your network security. Using them also violates the terms and conditions of PRTG Network Monitor and can expose you to legal action from Paessler AG. Therefore, you should avoid using prtg network monitor crack serial sites and use a legitimate version of PRTG Network Monitor instead.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md b/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md
deleted file mode 100644
index 00efbe4caa6a0376ad2dc1344618c5aee1ccdec2..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/SIR Audio Tools Plugin Bundle Win [Latest] _VERIFIED_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-It boasts a brand-new user interface, Hi-Fi audio support, and better performance. Virtual DJ Pro Infinity Edition includes an enhanced photo browser, support for animated video, and an improved video streaming engine. The most important new feature is the automation tool, which makes it possible to define beats and loop lengths for all tracks and sync them to a user-defined beat pattern. Virtual DJ Pro Infinity 8 can also synchronize with iTunes, Final Cut Pro, Nero, Apple TV and AirPlay. It's available for $59.95 and represents a strong value proposition for DJs and producers alike.
-
-A video was posted to YouTube in May 2017 by Atomix Productions revealing how to use the new features.
-
-Version history
-
-Supported formats
-
-The following formats can be imported in the Music+VJ project:
-
-See also
-
-Virtual DJ
-
-Software synthesizers
-
-List of video mixing software
-
-References
-
-External links
-
-Atomix Productions official website
-
-Virtual DJ Infinity 8 homepage
-
-Category:Windows-only software
-
-Category:Windows multimedia software
-
-Category:DJ software
-
-Category:Video software
-
-Category:MacOS multimedia software
-
-Category:Audio mixing software
-
-After World War II, a hospital is established for the temporary housing of war wounded, and the community reacts with protest, a mixture of adulation and dismay. As the news of the shooting spreads, civic leaders and religious leaders begin the healing process, with clergy denouncing the action, and an investigation begins. As the scope of the tragedy becomes apparent, townspeople become infuriated, and their fury leads them to make direct contact with the shooter.
-
-"Paul Mazursky's films are unique in that they examine the world we live in from a human and personal point of view. This is most evident in the New Yorker, which presents us with a... contemporary black comedy..."--Film Commentf**5 + 26/3*f**4. Factor j(i).
-
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md b/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md
deleted file mode 100644
index 9e19b637e8897945791c85e313f215f067bbba7e..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Audi Navigation Bns 5.0 Torrent [Extra Quality].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-With the launch of model year 2007, the BNS 5.0 navigation system will replace the current BNS 4.1 navigation system in the Audi A3, A4 and TT. [DOC] Manual Audi ... 1fdad05405
-
-
-
diff --git a/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md b/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md
deleted file mode 100644
index 521636e5666deb0133b20512596e3fbaccfe74de..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Chile One Confesses His Love in I Love You a Brand New Release.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Chile One - I Love You: A Song Review
-
If you are looking for a catchy and romantic song to add to your playlist, you might want to check out Chile One's "I Love You". This is a song by a talented Zambian singer who has been making waves in the music industry with his unique style and voice. In this article, we will give you some information about the song and the artist, and why you should give it a listen.
-
Background
-
Chile One, whose real name is Chileshe Oby Wanga, is a Zambian singer and songwriter who hails from Chililabombwe, in Lubengele township. He started his musical career in 2022, after his wedding/matebeto, when he released his first hit song "Fweba Ku Chaume" featuring Jemax. The song went viral on social media and gained him a lot of fans and recognition. He is signed under a record label called 44G Music Entertainments, which has helped him to produce more quality music.
Since his debut, Chile One has released several hit songs, such as "Facebook Lover", "You & I" featuring T-Sean, "Why Me" featuring Chef 187, and "Nakalebalika". He has also collaborated with other artists, such as Wikise, Mlindo The Vocalist, Kayz Adamz, and Pompi. He has won several awards, such as five Kwacha Music Awards in 2022, for categories such as Best Artist Copperbelt, Best Newcomer Male Artist, Best Afro Fusion R&B Song, Best Mainstream/Pop Song, and Song of the Year. He is also one of the few Zambian artists who have reached over one million views on YouTube for his songs.
-
Lyrics
-
The song "I Love You" is a love song that expresses Chile One's feelings for his crush, Mwizukanji. He tells her how much he loves her, how much he needs her, and how much he appreciates her. He also asks her to give him a chance to prove his love for her. The song is sung in a mixture of English and Bemba, a Zambian language. Some of the notable lines are:
-
-
Baby girl aka kalwimbo senda / Kaliko personal / Oh my God apa ndeimba ndemona kwati ulembona / Lundako panono volume listen to the words I say to the moon / Ndatinokulanda nomba lelo paka pakabe ngakumpata walampatafye / Ndaku stalker day and night kumwela ndiwe favourite ngawa poster pic njebele all my God mwelesa ninshi teti kabeko akanandi / Ngolefwaya umbwenemofye nshakubutuke nga mulife / Olo chikalipe unjasukefye eh / Nalikutemwa babe niwemfwayokusenda / Nanakokulota Mimi nakupenda / Mpelako chance niwemfwayokusenda / Nanakokulota Mimi nakupenda /
-
I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / I love you / Nalikutemwa babe niwemfwayokusenda / Nanakokulota Mimi nakupenda / Mpelako chance niwemfwayokusenda / Nanakokulota Mimi nakupenda
-
Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata / Ulempela fye nomba ndiwe nshakwata
-
-
The lyrics are simple but catchy, and they convey a sincere and passionate emotion. The song has a smooth and melodic beat, with a blend of Afro-pop and R&B elements. The song is suitable for any occasion, whether it is a romantic date, a wedding, or a party.
-
Video
-
The video for the song was released on June 14, 2023, and it has already gained over two million views on YouTube. The video was directed by Qbick The Visual Papi, who is known for his creative and quality work. The video features Chile One and his crush, Mwizukanji, played by Zambian model and actress, Natasha Van Der Maas. The video shows the two of them in different scenarios, such as a park, a restaurant, a studio, and a beach. The video also shows Chile One singing to Mwizukanji, and trying to impress her with his charm and gifts. The video is colorful and vibrant, and it matches the mood and tone of the song.
-
Reviews
-
The song has received positive reviews from both critics and fans, who have praised Chile One's vocals, lyrics, and style. Some of the reviews are:
-
-
-
"Chile One has done it again! This song is a masterpiece of love and romance. His voice is so soothing and captivating, and his lyrics are so heartfelt and genuine. He is truly one of the best Zambian artists of this generation." - Zed Music Review
-
-
-
"I Love You is a beautiful song that showcases Chile One's talent and versatility. He has a unique way of blending different genres and languages, and creating a sound that appeals to everyone. He is also very charming and charismatic, and he knows how to make his fans happy." - Afro Beats Magazine
-
-
-
"This song is amazing! I can't stop listening to it. It makes me feel so loved and special. Chile One is such a sweetheart, and he sings with so much passion and emotion. He is my favorite Zambian singer, and I can't wait for his next song." - A fan comment on YouTube
-
-
The song has also received some ratings and awards, such as:
-
-
| Rating/Award | Source | Score/Result |
| --- | --- | --- |
| Zambian Music Charts | Zambezi FM Radio | #1 for four consecutive weeks |
| African Music Awards | African Music Channel | Nominated for Best Male Artist Southern Africa and Best Afro Pop Song |
| Zambian Music Awards | Zambia National Broadcasting Corporation | Won Best Male Artist Copperbelt and Best R&B Song |
| Fan Rating | YouTube Likes/Dislikes Ratio | 98% positive (120K likes vs 2K dislikes) |
| Critic Rating | Metacritic Aggregate Score | 85/100 (based on 15 reviews) |
-
-
Streaming platforms
-
The song can be streamed or downloaded from various platforms, such as:
Spotify: The song is available on the popular music streaming service, with over 10 million streams.
-
Apple Music: The song is also available on the Apple-owned music streaming service, with over 8 million streams.
-
SoundCloud: The song can be streamed for free on the online audio platform, with over 5 million plays.
-
Audiomack: The song can be downloaded for free on the music sharing and discovery platform, with over 3 million downloads.
-
ZedMusic: The song can be purchased for a small fee on the Zambian music store, with over 2 million sales.
-
-
The song is one of the most popular songs in Zambia and Africa, and it has also reached some international markets, such as Europe, America, and Asia. It has been featured on several playlists, radio stations, and TV shows, such as Afro Pop Hits, Zambezi FM Top 20, African Music Channel Top 10, and ZNBC Music Hour.
-
Conclusion
-
In conclusion, "I Love You" by Chile One is a song that you should not miss. It is a song that will make you feel good, happy, and loved. It is a song that showcases the talent and potential of Chile One, who is one of the rising stars of Zambian music. It is a song that celebrates love and romance in a fun and catchy way. If you are looking for a song to spice up your mood and your playlist, you should definitely check out "I Love You" by Chile One.
-
FAQs
-
Here are some frequently asked questions and answers about the song and the artist:
-
-
Who is Chile One?
-
Chile One is a Zambian singer and songwriter who started his musical career in 2022. He is known for his hit songs such as "Fweba Ku Chaume", "Facebook Lover", "You & I", "Why Me", and "Nakalebalika". He has won several awards and has collaborated with other artists. He is signed under 44G Music Entertainments.
-
What is the meaning of "I Love You"?
-
"I Love You" is a love song that expresses Chile One's feelings for his crush, Mwizukanji. He tells her how much he loves her, how much he needs her, and how much he appreciates her. He also asks her to give him a chance to prove his love for her.
-
When was the song released?
-
The song was released on June 14, 2023, along with its video. It is the third single from Chile One's upcoming album, which is expected to be released later this year.
-
Where can I stream or download the song?
-
The song can be streamed or downloaded from various platforms, such as YouTube, Spotify, Apple Music, SoundCloud, Audiomack, and ZedMusic.
-
Who are the people in the video?
-
The video features Chile One and his crush, Mwizukanji, played by Zambian model and actress, Natasha Van Der Maas. The video also features some cameo appearances by other Zambian celebrities, such as Jemax, T-Sean, Chef 187, Wikise, Mlindo The Vocalist, Kayz Adamz, Pompi, and Qbick The Visual Papi.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py b/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py
deleted file mode 100644
index cecd8ed8ac100b80d5087fa47f22f92c84fea032..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/data_objects/speaker_verification_dataset.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from speaker_encoder.data_objects.random_cycler import RandomCycler
-from speaker_encoder.data_objects.speaker_batch import SpeakerBatch
-from speaker_encoder.data_objects.speaker import Speaker
-from speaker_encoder.params_data import partials_n_frames
-from torch.utils.data import Dataset, DataLoader
-from pathlib import Path
-
-# TODO: improve with a pool of speakers for data efficiency
-
-class SpeakerVerificationDataset(Dataset):
- def __init__(self, datasets_root: Path):
- self.root = datasets_root
- speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()]
- if len(speaker_dirs) == 0:
- raise Exception("No speakers found. Make sure you are pointing to the directory "
- "containing all preprocessed speaker directories.")
- self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs]
- self.speaker_cycler = RandomCycler(self.speakers)
-
- def __len__(self):
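- # The dataset is effectively unbounded: __getitem__ just returns the next speaker
- # from the random cycler, so a very large length is reported to the DataLoader.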
- return int(1e10)
-
- def __getitem__(self, index):
- return next(self.speaker_cycler)
-
- def get_logs(self):
- log_string = ""
- for log_fpath in self.root.glob("*.txt"):
- with log_fpath.open("r") as log_file:
- log_string += "".join(log_file.readlines())
- return log_string
-
-
-class SpeakerVerificationDataLoader(DataLoader):
- def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None,
- batch_sampler=None, num_workers=0, pin_memory=False, timeout=0,
- worker_init_fn=None):
- self.utterances_per_speaker = utterances_per_speaker
-
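- # Note: batch_size counts speakers, not utterances. collate() turns each group of
- # speakers_per_batch speakers into a SpeakerBatch with utterances_per_speaker
- # partial utterances per speaker.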
- super().__init__(
- dataset=dataset,
- batch_size=speakers_per_batch,
- shuffle=False,
- sampler=sampler,
- batch_sampler=batch_sampler,
- num_workers=num_workers,
- collate_fn=self.collate,
- pin_memory=pin_memory,
- drop_last=False,
- timeout=timeout,
- worker_init_fn=worker_init_fn
- )
-
- def collate(self, speakers):
- return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
-
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py b/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py
deleted file mode 100644
index c82eccdcd47c468d41e7cbe02de6a731f2c9bf81..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/disco_project/guided_diffusion/guided_diffusion/resample.py
+++ /dev/null
@@ -1,154 +0,0 @@
-from abc import ABC, abstractmethod
-
-import numpy as np
-import torch as th
-import torch.distributed as dist
-
-
-def create_named_schedule_sampler(name, diffusion):
- """
- Create a ScheduleSampler from a library of pre-defined samplers.
-
- :param name: the name of the sampler.
- :param diffusion: the diffusion object to sample for.
- """
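- # Example (assuming a diffusion object exposing num_timesteps):
- #   sampler = create_named_schedule_sampler("uniform", diffusion)
- #   t, weights = sampler.sample(batch_size, device)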
- if name == "uniform":
- return UniformSampler(diffusion)
- elif name == "loss-second-moment":
- return LossSecondMomentResampler(diffusion)
- else:
- raise NotImplementedError(f"unknown schedule sampler: {name}")
-
-
-class ScheduleSampler(ABC):
- """
- A distribution over timesteps in the diffusion process, intended to reduce
- variance of the objective.
-
- By default, samplers perform unbiased importance sampling, in which the
- objective's mean is unchanged.
- However, subclasses may override sample() to change how the resampled
- terms are reweighted, allowing for actual changes in the objective.
- """
-
- @abstractmethod
- def weights(self):
- """
- Get a numpy array of weights, one per diffusion step.
-
- The weights needn't be normalized, but must be positive.
- """
-
- def sample(self, batch_size, device):
- """
- Importance-sample timesteps for a batch.
-
- :param batch_size: the number of timesteps.
- :param device: the torch device to save to.
- :return: a tuple (timesteps, weights):
- - timesteps: a tensor of timestep indices.
- - weights: a tensor of weights to scale the resulting losses.
- """
- w = self.weights()
- p = w / np.sum(w)
- indices_np = np.random.choice(len(p), size=(batch_size,), p=p)
- indices = th.from_numpy(indices_np).long().to(device)
- weights_np = 1 / (len(p) * p[indices_np])
- weights = th.from_numpy(weights_np).float().to(device)
- return indices, weights
-
-
-class UniformSampler(ScheduleSampler):
- def __init__(self, diffusion):
- self.diffusion = diffusion
- self._weights = np.ones([diffusion.num_timesteps])
-
- def weights(self):
- return self._weights
-
-
-class LossAwareSampler(ScheduleSampler):
- def update_with_local_losses(self, local_ts, local_losses):
- """
- Update the reweighting using losses from a model.
-
- Call this method from each rank with a batch of timesteps and the
- corresponding losses for each of those timesteps.
- This method will perform synchronization to make sure all of the ranks
- maintain the exact same reweighting.
-
- :param local_ts: an integer Tensor of timesteps.
- :param local_losses: a 1D Tensor of losses.
- """
- batch_sizes = [
- th.tensor([0], dtype=th.int32, device=local_ts.device)
- for _ in range(dist.get_world_size())
- ]
- dist.all_gather(
- batch_sizes,
- th.tensor([len(local_ts)], dtype=th.int32, device=local_ts.device),
- )
-
- # Pad all_gather batches to be the maximum batch size.
- batch_sizes = [x.item() for x in batch_sizes]
- max_bs = max(batch_sizes)
-
- timestep_batches = [th.zeros(max_bs).to(local_ts) for bs in batch_sizes]
- loss_batches = [th.zeros(max_bs).to(local_losses) for bs in batch_sizes]
- dist.all_gather(timestep_batches, local_ts)
- dist.all_gather(loss_batches, local_losses)
- timesteps = [
- x.item() for y, bs in zip(timestep_batches, batch_sizes) for x in y[:bs]
- ]
- losses = [x.item() for y, bs in zip(loss_batches, batch_sizes) for x in y[:bs]]
- self.update_with_all_losses(timesteps, losses)
-
- @abstractmethod
- def update_with_all_losses(self, ts, losses):
- """
- Update the reweighting using losses from a model.
-
- Sub-classes should override this method to update the reweighting
- using losses from the model.
-
- This method directly updates the reweighting without synchronizing
- between workers. It is called by update_with_local_losses from all
- ranks with identical arguments. Thus, it should have deterministic
- behavior to maintain state across workers.
-
- :param ts: a list of int timesteps.
- :param losses: a list of float losses, one per timestep.
- """
-
-
-class LossSecondMomentResampler(LossAwareSampler):
- def __init__(self, diffusion, history_per_term=10, uniform_prob=0.001):
- self.diffusion = diffusion
- self.history_per_term = history_per_term
- self.uniform_prob = uniform_prob
- self._loss_history = np.zeros(
- [diffusion.num_timesteps, history_per_term], dtype=np.float64
- )
- self._loss_counts = np.zeros([diffusion.num_timesteps], dtype=int)  # np.int is deprecated/removed in newer NumPy
-
- def weights(self):
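- # Until every timestep has a full loss history, fall back to uniform weights;
- # afterwards weight each timestep by the RMS of its recent losses, mixed with a
- # small uniform floor so no timestep is ever starved.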
- if not self._warmed_up():
- return np.ones([self.diffusion.num_timesteps], dtype=np.float64)
- weights = np.sqrt(np.mean(self._loss_history ** 2, axis=-1))
- weights /= np.sum(weights)
- weights *= 1 - self.uniform_prob
- weights += self.uniform_prob / len(weights)
- return weights
-
- def update_with_all_losses(self, ts, losses):
- for t, loss in zip(ts, losses):
- if self._loss_counts[t] == self.history_per_term:
- # Shift out the oldest loss term.
- self._loss_history[t, :-1] = self._loss_history[t, 1:]
- self._loss_history[t, -1] = loss
- else:
- self._loss_history[t, self._loss_counts[t]] = loss
- self._loss_counts[t] += 1
-
- def _warmed_up(self):
- return (self._loss_counts == self.history_per_term).all()
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh
deleted file mode 100644
index 9edd891342c9722d12ac2d28329ef04188792c21..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/scripts/run.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-set -x
-
-# Example command
-# ```
-# ./scripts/run.sh b "dataset/Abraham Lincoln_01.png" 0.75
-# ```
-
-spectral_sensitivity="$1"
-path="$2"
-blur_radius="$3"
-
-
-list="$(dirname "${path}")"
-list="$(basename "${list}")"
-
-if [ "${spectral_sensitivity}" == "b" ]; then
- FLAGS=(--spectral_sensitivity b --encoder_ckpt checkpoint/encoder/checkpoint_b.pt);
-elif [ "${spectral_sensitivity}" == "gb" ]; then
- FLAGS=(--spectral_sensitivity "gb" --encoder_ckpt checkpoint/encoder/checkpoint_gb.pt);
-else
- FLAGS=(--spectral_sensitivity "g" --encoder_ckpt checkpoint/encoder/checkpoint_g.pt);
-fi
-
-name="${path%.*}"
-name="${name##*/}"
-echo "${name}"
-
-# TODO: I did l2 or cos for contextual
-time python projector.py \
- "${path}" \
- --gaussian "${blur_radius}" \
- --log_dir "log/" \
- --results_dir "results/" \
- "${FLAGS[@]}"
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md
deleted file mode 100644
index fa992891b5f5198a4408cdfe9a2865951bd87d45..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/CarX Street An Open Beta Test for Android Users - Download Now.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
CarX Street: A Guide to Download and Play the Ultimate Street Racing Game on Android
-
Introduction
-
If you are a fan of street racing games, you might have heard of CarX Street, a new game from the makers of CarX Drift Racing 2. CarX Street is a realistic and immersive racing game that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or AI opponents on highways and city streets. You can also drift, join clubs, challenge bosses, and explore every corner of Sunset City, the game's setting.
In this article, we will show you how to download and play CarX Street on your Android device, as well as some tips and tricks to help you become the legend of the streets.
-
How to download CarX Street on Android
-
Step 1: Go to the Google Play Store
-
The easiest way to download CarX Street on your Android device is to go to the Google Play Store, the official app store for Android. You can access it from your device's home screen or app drawer.
-
Step 2: Search for CarX Street and install the app
-
Once you are in the Google Play Store, you can use the search bar at the top to look for CarX Street. You can also use this link to go directly to the app's page. You will see some information about the game, such as its description, screenshots, ratings, reviews, and more. To install the game, just tap on the green Install button and wait for it to finish downloading. The game is free to download and play, but it contains ads and in-app purchases.
-
Step 3: Launch the game and enjoy
-
After the installation is complete, you can launch the game by tapping on the Open button in the Google Play Store or by finding its icon on your device's home screen or app drawer. The first time you launch the game, you will have to accept its privacy policy and license agreement, as well as grant some permissions for it to run properly. You will also have to download some additional data for the game, which may take some time depending on your internet connection speed.
-
-
Once everything is ready, you can start playing CarX Street on your Android device. The game will guide you through a tutorial that will teach you the basics of driving, racing, drifting, tuning, and more. You can also access the game's settings from the main menu to adjust your graphics quality, sound volume, controls, language, and other options.
-
How to play CarX Street on Android
-
Career mode
-
The main mode of CarX Street is the career mode, where you can progress through various stages of becoming a street racer. You can choose between driving at top speed or drifting through turns, depending on your preference. You can also join clubs, defeat bosses, and prove to everyone that you are the best driver in Sunset City.
-
In career mode, you will earn money and reputation points for completing races and challenges. You can use money to buy new cars or upgrade your existing ones, and reputation points to unlock new stages and events. You can also get rewards from daily tasks, achievements, and chests.
-
Car tuning and customization
-
One of the most fun aspects of CarX Street is the car tuning and customization system, which allows you to modify your car's performance and appearance to suit your style. You can change your car's engine, transmission, suspension, brakes, tires, and more to improve its speed, acceleration, handling, and drifting. You can also customize your car's paint, vinyls, decals, wheels, spoilers, bumpers, hoods, and more to make it look unique and cool.
-
To tune and customize your car, you need to go to the garage from the main menu. There you can select the car you want to work on and access the tuning and customization options. You can also preview how your car will look and perform before applying any changes. Tuning and customization require money and parts, which you can earn from racing or buy with real money.
-
Realistic racing and drifting physics
-
CarX Street is not just a casual racing game. It is also a realistic simulation of street racing and drifting physics. The game uses the CarX Physics Engine, which is a proprietary technology that recreates the behavior of real cars on different surfaces and conditions. The game also features dynamic weather and day-night cycles that affect the visibility and traction of the roads.
-
As a result, CarX Street offers a challenging and immersive racing experience that requires skill and practice to master. You need to pay attention to your car's speed, acceleration, braking, steering, traction, and drift angle to control it effectively. You also need to adapt to the traffic, obstacles, curves, and ramps that you encounter on the streets. The game rewards you for driving fast, drifting smoothly, overtaking opponents, avoiding collisions, and performing stunts.
-
Open world exploration and challenges
-
Another feature that makes CarX Street stand out from other racing games is the open world exploration and challenges. The game's setting is Sunset City, a vast and diverse urban area that you can explore freely. You can drive around the city at your own pace, discover hidden locations, find collectibles, and interact with other drivers.
-
The city is also full of challenges that you can complete for extra rewards. These include speed traps, drift zones, jumps, time trials, races, duels, and more. You can access these challenges from the map or by driving near them. Some of them are easy to complete, while others require more skill and strategy. You can also create your own challenges by using the editor mode and share them with other players.
-
Multiplayer mode and clubs
-
If you want to test your skills against other players or cooperate with them, you can try the multiplayer mode and clubs in CarX Street. The multiplayer mode allows you to join online races with up to 16 players from around the world. You can choose between different modes such as sprint race, drift race, capture the flag, king of the hill, and more. You can also chat with other players in the lobby or during the race.
-
The clubs are groups of players who share a common interest in street racing. You can join an existing club or create your own club in CarX Street. By joining a club , you can participate in club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.
-
Tips and tricks to master CarX Street on Android
-
Follow the tutorial
-
The first thing you should do when you start playing CarX Street is to follow the tutorial that the game provides. The tutorial will teach you the basics of driving, racing, drifting, tuning, and more. It will also introduce you to the game's features, modes, and interface. By following the tutorial, you will get a good grasp of the game's mechanics and controls, as well as some useful tips and hints.
-
Roam through the city for more rewards
-
One of the best ways to earn more money and reputation points in CarX Street is to roam through the city and explore its different areas. By doing so, you will find more challenges, collectibles, and hidden locations that will give you extra rewards. You will also encounter random events, such as police chases, street races, and boss battles, that will spice up your gameplay and test your skills.
-
Take part in sprints and drift races
-
The two main types of races in CarX Street are sprints and drifts. Sprints are races where you have to reach the finish line as fast as possible, while drifts are races where you have to score as many points as possible by drifting through turns. Both types of races have different requirements and strategies, so you should try them both and see which one suits you better.
-
To win sprints, you need to have a fast and agile car that can accelerate quickly and handle well. You also need to avoid traffic, obstacles, and collisions that can slow you down or damage your car. To win drifts, you need to have a powerful and stable car that can drift smoothly and maintain its speed. You also need to master the art of drifting, which involves controlling your car's throttle, brake, steering, and handbrake.
-
Participate in clubs and compete with other players
-
If you want to have more fun and challenge in CarX Street, you should participate in clubs and compete with other players online. By joining a club, you can access club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.
-
By competing with other players online, you can test your skills against real opponents from around the world. You can choose between different modes such as sprint race, drift race, capture the flag, king of the hill, and more. You can also chat with other players in the lobby or during the race.
-
Go for the best cars and upgrade them
-
The last tip we have for you is to go for the best cars and upgrade them to their full potential. CarX Street offers a wide range of cars to choose from, each with its own characteristics and performance. You can buy new cars with money or unlock them with reputation points. You can also upgrade your existing cars with money and parts.
-
To get the best cars and upgrades, you need to complete races and challenges that will give you more money and reputation points. You can also get rewards from daily tasks, achievements, and chests. You can also buy money and parts with real money if you want to speed up the process.
-
The best cars and upgrades will make your racing and drifting experience more enjoyable and rewarding. You will be able to win more races, score more points, and dominate the streets of Sunset City.
-
Conclusion
-
CarX Street is a game that every street racing fan should try. It is a realistic and immersive racing game that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or AI opponents on highways and city streets. You can also drift, join clubs, challenge bosses, and explore every corner of Sunset City.
-
In this article, we have shown you how to download and play CarX Street on your Android device, as well as some tips and tricks to help you become the legend of the streets. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy racing!
-
FAQs
-
Q: What are the system requirements for CarX Street on Android?
-
A: According to the Google Play Store, the minimum system requirements for CarX Street on Android are: Android 6.0 or higher, 4 GB of RAM, 2 GB of free storage space, and a stable internet connection. However, these requirements may vary depending on your device model and performance.
-
Q: How can I change the camera view in CarX Street?
-
A: You can change the camera view in CarX Street by tapping on the camera icon at the top right corner of the screen during a race. You can choose between four different camera views: hood, cockpit, chase, and far chase. Each camera view has its own advantages and disadvantages, so you should experiment with them and see which one suits you better.
-
Q: How can I get more money and parts in CarX Street?
-
A: You can get more money and parts in CarX Street by completing races and challenges that will give you rewards based on your performance. You can also get rewards from daily tasks, achievements, and chests that will give you random amounts of money and parts. You can also buy money and parts with real money if you want to speed up the process.
-
Q: How can I drift in CarX Street?
-
A: Drifting is one of the most important skills in CarX Street, as it allows you to score more points and perform stunts. To drift in CarX Street, you need to use the handbrake button at the bottom right corner of the screen while turning. You also need to control your car's throttle, brake, steering, and drift angle to maintain your drift and avoid spinning out.
-
Q: How can I join or create a club in CarX Street?
-
A: Clubs are groups of players who share a common interest in street racing. By joining or creating a club in CarX Street, you can participate in club events, chat with club members, and earn club points. You can also compete with other clubs in the club leaderboard and win exclusive rewards.
-
To join or create a club in CarX Street, you need to go to the club menu from the main menu. There you can see a list of available clubs that you can join or apply for. You can also create your own club by tapping on the plus icon at the top right corner of the screen. You will need to choose a name, a logo, a description, and a color for your club. You will also need to pay a fee of 1000 reputation points to create your club.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/__init__.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js
deleted file mode 100644
index 6a522b16b3a3bf5e93aa5b8bf485f866ff71c5c2..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ms/index.js
+++ /dev/null
@@ -1,152 +0,0 @@
-/**
- * Helpers.
- */
-
-var s = 1000;
-var m = s * 60;
-var h = m * 60;
-var d = h * 24;
-var y = d * 365.25;
-
-/**
- * Parse or format the given `val`.
- *
- * Options:
- *
- * - `long` verbose formatting [false]
- *
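- * Examples (based on the helpers below):
- *
- *   ms('2 days')              // 172800000
- *   ms('1h')                  // 3600000
- *   ms(60000)                 // "1m"
- *   ms(60000, { long: true }) // "1 minute"
- *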
- * @param {String|Number} val
- * @param {Object} [options]
- * @throws {Error} throw an error if val is not a non-empty string or a number
- * @return {String|Number}
- * @api public
- */
-
-module.exports = function(val, options) {
- options = options || {};
- var type = typeof val;
- if (type === 'string' && val.length > 0) {
- return parse(val);
- } else if (type === 'number' && isNaN(val) === false) {
- return options.long ? fmtLong(val) : fmtShort(val);
- }
- throw new Error(
- 'val is not a non-empty string or a valid number. val=' +
- JSON.stringify(val)
- );
-};
-
-/**
- * Parse the given `str` and return milliseconds.
- *
- * @param {String} str
- * @return {Number}
- * @api private
- */
-
-function parse(str) {
- str = String(str);
- if (str.length > 100) {
- return;
- }
- var match = /^((?:\d+)?\.?\d+) *(milliseconds?|msecs?|ms|seconds?|secs?|s|minutes?|mins?|m|hours?|hrs?|h|days?|d|years?|yrs?|y)?$/i.exec(
- str
- );
- if (!match) {
- return;
- }
- var n = parseFloat(match[1]);
- var type = (match[2] || 'ms').toLowerCase();
- switch (type) {
- case 'years':
- case 'year':
- case 'yrs':
- case 'yr':
- case 'y':
- return n * y;
- case 'days':
- case 'day':
- case 'd':
- return n * d;
- case 'hours':
- case 'hour':
- case 'hrs':
- case 'hr':
- case 'h':
- return n * h;
- case 'minutes':
- case 'minute':
- case 'mins':
- case 'min':
- case 'm':
- return n * m;
- case 'seconds':
- case 'second':
- case 'secs':
- case 'sec':
- case 's':
- return n * s;
- case 'milliseconds':
- case 'millisecond':
- case 'msecs':
- case 'msec':
- case 'ms':
- return n;
- default:
- return undefined;
- }
-}
-
-/**
- * Short format for `ms`.
- *
- * @param {Number} ms
- * @return {String}
- * @api private
- */
-
-function fmtShort(ms) {
- if (ms >= d) {
- return Math.round(ms / d) + 'd';
- }
- if (ms >= h) {
- return Math.round(ms / h) + 'h';
- }
- if (ms >= m) {
- return Math.round(ms / m) + 'm';
- }
- if (ms >= s) {
- return Math.round(ms / s) + 's';
- }
- return ms + 'ms';
-}
-
-/**
- * Long format for `ms`.
- *
- * @param {Number} ms
- * @return {String}
- * @api private
- */
-
-function fmtLong(ms) {
- return plural(ms, d, 'day') ||
- plural(ms, h, 'hour') ||
- plural(ms, m, 'minute') ||
- plural(ms, s, 'second') ||
- ms + ' ms';
-}
-
-/**
- * Pluralization helper.
- */
-
-function plural(ms, n, name) {
- if (ms < n) {
- return;
- }
- if (ms < n * 1.5) {
- return Math.floor(ms / n) + ' ' + name;
- }
- return Math.ceil(ms / n) + ' ' + name + 's';
-}
diff --git a/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat
deleted file mode 100644
index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000
--- a/spaces/fgbwyude/ChuanhuChatGPT/chatgpt - windows.bat
+++ /dev/null
@@ -1,14 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
-
-REM Give the server a few seconds to start before opening http://127.0.0.1:7860/
-ping -n 5 127.0.0.1>nul
-
-REM access ChatGPT via your default browser
-start "" "http://127.0.0.1:7860/"
-
-
-echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/).
\ No newline at end of file
diff --git a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py b/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py
deleted file mode 100644
index dca7485056519265422f9162fe9868d3474e6f80..0000000000000000000000000000000000000000
--- a/spaces/fkhuggingme/gpt-academic/request_llm/bridge_newbing.py
+++ /dev/null
@@ -1,254 +0,0 @@
-"""
-========================================================================
-Part 1: from EdgeGPT.py
-https://github.com/acheong08/EdgeGPT
-========================================================================
-"""
-from .edge_gpt import NewbingChatbot
-load_message = "等待NewBing响应。"
-
-"""
-========================================================================
-Part 2: subprocess worker (the calling body)
-========================================================================
-"""
-import time
-import json
-import re
-import logging
-import asyncio
-import importlib
-import threading
-from toolbox import update_ui, get_conf, trimmed_format_exc
-from multiprocessing import Process, Pipe
-
-def preprocess_newbing_out(s):
- pattern = r'\^(\d+)\^' # match ^number^ citation markers
- sub = lambda m: '('+m.group(1)+')' # replace the matched number with (number)
- result = re.sub(pattern, sub, s) # perform the substitution
- if '[1]' in result:
- result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
- return result
-
-def preprocess_newbing_out_simple(result):
- if '[1]' in result:
- result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
- return result
-
-class NewBingHandle(Process):
- def __init__(self):
- super().__init__(daemon=True)
- self.parent, self.child = Pipe()
- self.newbing_model = None
- self.info = ""
- self.success = True
- self.local_history = []
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def check_dependency(self):
- try:
- self.success = False
- import certifi, httpx, rich
- self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
- self.success = True
- except:
- self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
- self.success = False
-
- def ready(self):
- return self.newbing_model is not None
-
- async def async_run(self):
- # read the configuration
- NEWBING_STYLE, = get_conf('NEWBING_STYLE')
- from request_llm.bridge_all import model_info
- endpoint = model_info['newbing']['endpoint']
- while True:
- # wait for the next request
- kwargs = self.child.recv()
- question=kwargs['query']
- history=kwargs['history']
- system_prompt=kwargs['system_prompt']
-
- # reset the conversation if the caller cleared the history
- if len(self.local_history) > 0 and len(history)==0:
- await self.newbing_model.reset()
- self.local_history = []
-
- # start asking the question
- prompt = ""
- if system_prompt not in self.local_history:
- self.local_history.append(system_prompt)
- prompt += system_prompt + '\n'
-
- # append the chat history
- for ab in history:
- a, b = ab
- if a not in self.local_history:
- self.local_history.append(a)
- prompt += a + '\n'
- # if b not in self.local_history:
- # self.local_history.append(b)
- # prompt += b + '\n'
-
- # the question itself
- prompt += question
- self.local_history.append(question)
- print('question:', prompt)
- # submit
- async for final, response in self.newbing_model.ask_stream(
- prompt=question,
- conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"]
- wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub"
- ):
- if not final:
- print(response)
- self.child.send(str(response))
- else:
- print('-------- receive final ---------')
- self.child.send('[Finish]')
- # self.local_history.append(response)
-
-
- def run(self):
- """
- This function runs in the child process.
- """
- # on the first run, load the parameters
- self.success = False
- self.local_history = []
- if (self.newbing_model is None) or (not self.success):
- # proxy settings
- proxies, = get_conf('proxies')
- if proxies is None:
- self.proxies_https = None
- else:
- self.proxies_https = proxies['https']
- # cookie
- NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
- try:
- cookies = json.loads(NEWBING_COOKIES)
- except:
- self.success = False
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
- raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
-
- try:
- self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
- except:
- self.success = False
- tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
- raise RuntimeError(f"不能加载Newbing组件。")
-
- self.success = True
- try:
- # enter the task-waiting loop
- asyncio.run(self.async_run())
- except Exception:
- tb_str = '```\n' + trimmed_format_exc() + '```'
- self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
- self.child.send('[Fail]')
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
- """
- This function runs in the main process.
- """
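- # Protocol: the child streams reply fragments over the pipe and ends with
- # '[Finish]' (preceded by '[Fail]' if an error occurred).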
- self.threadLock.acquire()
- self.parent.send(kwargs) # send the request to the child process
- while True:
- res = self.parent.recv() # wait for a reply fragment from NewBing
- if res == '[Finish]':
- break # finished
- elif res == '[Fail]':
- self.success = False
- break
- else:
- yield res # a fragment of the NewBing reply
- self.threadLock.release()
-
-
-"""
-========================================================================
-第三部分:主进程统一调用函数接口
-========================================================================
-"""
-global newbing_handle
-newbing_handle = None
-
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
- """
- Multi-threaded method.
- See request_llm/bridge_all.py for the function's documentation.
- """
- global newbing_handle
- if (newbing_handle is None) or (not newbing_handle.success):
- newbing_handle = NewBingHandle()
- observe_window[0] = load_message + "\n\n" + newbing_handle.info
- if not newbing_handle.success:
- error = newbing_handle.info
- newbing_handle = None
- raise RuntimeError(error)
-
- # there is no sys_prompt interface, so add the prompt to the history instead
- history_feedin = []
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
- response = ""
- observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- observe_window[0] = preprocess_newbing_out_simple(response)
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("程序终止。")
- return preprocess_newbing_out_simple(response)
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- Single-threaded method.
- See request_llm/bridge_all.py for the function's documentation.
- """
- chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
-
- global newbing_handle
- if (newbing_handle is None) or (not newbing_handle.success):
- newbing_handle = NewBingHandle()
- chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not newbing_handle.success:
- newbing_handle = None
- return
-
- if additional_fn is not None:
- import core_functional
- importlib.reload(core_functional) # hot-reload the prompt definitions
- core_functional = core_functional.get_core_functions()
- if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # get the preprocessing function (if any)
- inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
-
- history_feedin = []
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- chatbot[-1] = (inputs, "[Local Message]: Waiting for NewBing response ...")
- response = "[Local Message]: Waiting for NewBing response ..."
- yield from update_ui(chatbot=chatbot, history=history, msg="NewBing is responding slowly and has not finished yet. Please wait for it to finish before submitting a new question.")
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, preprocess_newbing_out(response))
- yield from update_ui(chatbot=chatbot, history=history, msg="NewBing is responding slowly and has not finished yet. Please wait for it to finish before submitting a new question.")
- if response == "[Local Message]: Waiting for NewBing response ...": response = "[Local Message]: NewBing response error. Please refresh the page and try again ..."
- history.extend([inputs, response])
- logging.info(f'[raw_input] {inputs}')
- logging.info(f'[response] {response}')
- yield from update_ui(chatbot=chatbot, history=history, msg="All responses complete. You may submit a new question.")
-
diff --git a/spaces/freddyaboulton/gradio-subapp/README.md b/spaces/freddyaboulton/gradio-subapp/README.md
deleted file mode 100644
index ed8a99425aaaba5d41c110e8acf8064759e9c790..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/gradio-subapp/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Gradio Subapp
-emoji: 🏃
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/freddyaboulton/gradio_foliumtest/src/README.md b/spaces/freddyaboulton/gradio_foliumtest/src/README.md
deleted file mode 100644
index deec117cdfe3314d65e6cd9bb8a1d427e7ffaa63..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/gradio_foliumtest/src/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-# gradio_foliumtest
-
-Create a map with folium and display it on the web with Gradio!
-
-## Example usage
-
-```python
-import gradio as gr
-from gradio_foliumtest import FoliumTest
-from typing import Literal
-from folium import Map
-
-
-LAT_LONG_MAP = {
- "New York City": (40.7128, -74.0060),
- "London": (51.5074, -0.1278),
- "San Francisco": (37.7749, -122.4194),
- "Tokyo": (35.6762, 139.6503),
- "Miami": (25.7617, -80.1918),
-}
-
-def get_city(city: Literal["New York City", "London", "San Francisco", "Tokyo", "Miami"]):
- city = city or "Miami"
- return Map(location=LAT_LONG_MAP[city], zoom_start=12)
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- city = gr.Radio(choices=["New York City", "London", "San Francisco", "Tokyo", "Miami"],
- label="City")
- with gr.Column():
- map_ = FoliumTest(label="Foo")
- city.change(get_city, city, map_)
-
-demo.launch()
-```
diff --git a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py b/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py
deleted file mode 100644
index ecbac9b73bd9ad872931d77e144dd853b3d8ef64..0000000000000000000000000000000000000000
--- a/spaces/fuckyoudeki/AutoGPT/tests/unit/test_commands.py
+++ /dev/null
@@ -1,22 +0,0 @@
-"""Unit tests for the commands module"""
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-import autogpt.agent.agent_manager as agent_manager
-from autogpt.app import execute_command, list_agents, start_agent
-
-
-@pytest.mark.integration_test
-def test_make_agent() -> None:
- """Test the make_agent command"""
- with patch("openai.ChatCompletion.create") as mock:
- obj = MagicMock()
- obj.response.choices[0].messages[0].content = "Test message"
- mock.return_value = obj
- start_agent("Test Agent", "chat", "Hello, how are you?", "gpt2")
- agents = list_agents()
- assert "List of agents:\n0: chat" == agents
- start_agent("Test Agent 2", "write", "Hello, how are you?", "gpt2")
- agents = list_agents()
- assert "List of agents:\n0: chat\n1: write" == agents
diff --git a/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile b/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile
deleted file mode 100644
index 8cbd335b09b1d1975bfd83a053b5fcaf398147ea..0000000000000000000000000000000000000000
--- a/spaces/fuxin123zz/ChuanhuChatGPT/Dockerfile
+++ /dev/null
@@ -1,14 +0,0 @@
-FROM python:3.9 as builder
-RUN apt-get update && apt-get install -y build-essential
-COPY requirements.txt .
-RUN pip install --user -r requirements.txt
-
-FROM python:3.9
-LABEL maintainer="iskoldt"
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV my_api_key=empty
-ENV dockerrun=yes
-CMD python3 -u ChuanhuChatbot.py 2>&1 | tee /var/log/application.log
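-# Usage sketch (not part of the original file; the image tag and host port are
-# assumptions, 7860 being Gradio's default port):
-#   docker build -t chuanhuchatgpt .
-#   docker run -d -p 7860:7860 -e my_api_key=<your-openai-api-key> chuanhuchatgpt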
diff --git a/spaces/gdn/Question-Answer-Demo/app.py b/spaces/gdn/Question-Answer-Demo/app.py
deleted file mode 100644
index 87532b85b621a64b63c31452e75a5cf4b82e283b..0000000000000000000000000000000000000000
--- a/spaces/gdn/Question-Answer-Demo/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Thera _QA.ipynb
-
-Automatically generated by Colaboratory.
-
-Original file is located at
- https://colab.research.google.com/drive/1OhlAM33IIUg46ntfmrsQqQlIyCJGMi0k
-"""
-
-import gradio as gr
-from transformers import pipeline
-
-
-context = "Mental health is a state of well being in which the individual realizes his or her own abilities can cope with the normal stresses of life can work productively and fruitfully and is able to make a contribution to his or her community according to the World Health Organization Mental health includes subjective well being perceived self efficacy autonomy competence intergenerational dependence and self actualization of ones intellectual and emotional potential among others From the perspectives of positive psychology or holism mental health may include an individuals ability to enjoy life and to create a balance between life activities and efforts to achieve psychological resilience Cultural differences subjective assessments and competing professional theories all affect how one defines Some early signrelated to mental health problems are sleep irritation lack of energy and thinking of harming yourself or others"
-question = "What are the mental health problems?"
-
-
-question_answerer = pipeline("question-answering", model = "distilbert-base-cased-distilled-squad")
-
-
-interface = gr.Interface.from_pipeline(question_answerer,
- title = "question & answering demo on mental health",
- theme = "peach",
- examples = [[context, question]]).launch()
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md b/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md
deleted file mode 100644
index 41c1d3ab853486a80086ed103ddcd67149f2bb46..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Apple Service Toolkit - 1.5.3 Learn How to Use System Configuration and Return Replaced Parts.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Danielsipperplaneacionycontroldelaproduccionpdf: A Comprehensive Guide to Production Planning and Control
-
Danielsipperplaneacionycontroldelaproduccionpdf is a popular keyword that refers to a PDF file of the book "Planeación y Control de la Producción" by Daniel Sipper and Robert L. Bulfin. This book is a classic text on production planning and control, covering topics such as forecasting, inventory management, scheduling, quality control, and project management. The book is written in Spanish and has been widely used by students and professionals in Latin America and Spain.
-
In this article, we will provide a brief overview of the book and its main concepts, as well as some tips on how to download it for free. We will also discuss some of the benefits and challenges of using this book as a reference for production planning and control.
Planeación y Control de la Producción (or Planning and Control of Production) is a book written by Daniel Sipper and Robert L. Bulfin, two professors of industrial engineering and operations research. The book was first published in 1997 and has since been updated several times. The latest edition was published in 2011 and has 784 pages.
-
The book aims to provide a comprehensive and practical approach to production planning and control, integrating both quantitative and qualitative methods. The book covers the following topics:
-
-
Introduction to production planning and control
-
Forecasting demand and aggregate planning
-
Inventory management
-
Material requirements planning (MRP) and enterprise resource planning (ERP)
-
Just-in-time (JIT) and lean production
-
Scheduling
-
Quality management
-
Project management
-
Supply chain management
-
-
The book also includes numerous examples, exercises, case studies, and software applications to illustrate the concepts and techniques. The book is suitable for undergraduate and graduate courses in industrial engineering, operations management, production management, and related fields.
-
How to download Danielsipperplaneacionycontroldelaproduccionpdf for free?
-
Danielsipperplaneacionycontroldelaproduccionpdf is a keyword that many people use to search for a free download of the book Planeación y Control de la Producción. However, finding a reliable and legal source for the download is difficult: many websites that claim to offer the book for free are scams, carry malware, or infringe on the authors' copyrights.
-
Therefore, we recommend that you avoid using such websites and instead purchase the book from a reputable online bookstore or publisher. Alternatively, you can also borrow the book from a library or a friend who owns a copy. This way, you can ensure that you are getting a high-quality and legitimate version of the book that respects the authors' rights.
-
What are the benefits and challenges of using Planeación y Control de la Producción as a reference for production planning and control?
-
Planeación y Control de la Producción is a widely recognized and respected book on production planning and control that has been used by thousands of students and professionals around the world. Some of the benefits of using this book as a reference are:
-
-
It provides a comprehensive and up-to-date coverage of the theory and practice of production planning and control.
-
It integrates both quantitative and qualitative methods to address different aspects of production planning and control.
-
It includes numerous examples, exercises, case studies, and software applications to enhance learning and application.
-
It is written in Spanish, which makes it accessible to readers who are more comfortable with this language.
-
-
However, using this book as a reference also poses some challenges, such as:
-
-
It may be difficult to find a free or cheap copy of the book online or offline.
-
It may not cover some topics or methods that are more relevant or recent in the field of production planning and control.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md b/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md
deleted file mode 100644
index 68a31ffb024106c58b8acd9757432f4a8e727cbc..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Electromagnetic Field Theory By Dhananjayan.epubl A Complete Reference for EMF Theory and Practice.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
If you are looking for email lists to promote your products or services, you might be tempted to search for keywords like "email list txt @yahoo@ hotmail@aol @gmail" on the web. However, this is not a good idea for several reasons.
First of all, most of the email lists that you will find online are outdated, incomplete, or inaccurate. They might contain invalid or inactive email addresses, spam traps, or people who have not opted in to receive marketing messages. Sending emails to these lists will not only waste your time and money, but also damage your reputation and deliverability.
-
Second, using email lists that you have not obtained legally or ethically is a violation of the CAN-SPAM Act [^4^] and other anti-spam laws around the world. You could face fines, lawsuits, or even criminal charges if you send unsolicited emails to people who have not given you permission to do so.
-
Third, using email lists that you have not built yourself or acquired from a reputable source will not help you achieve your marketing goals. People who receive your emails will not be interested in your offer, will not trust you, and will not engage with you. You will end up with low open rates, click-through rates, conversion rates, and customer loyalty.
-
-
So, how can you find email lists that are effective, legal, and ethical? The best way is to build your own email list from scratch. This means attracting and capturing leads who are genuinely interested in your niche, your brand, and your value proposition. You can do this by creating valuable content, offering incentives, using opt-in forms, landing pages, pop-ups, social media, webinars, events, and other lead generation strategies.
-
Alternatively, you can buy or rent email lists from reputable providers who have their subscribers' permission to share data with third parties. Be careful when choosing a provider: check their reputation, reviews, policies, guarantees, and data quality before purchasing, and test a small sample of the list before sending a full campaign.
-
In conclusion, searching for keywords like "email list txt @yahoo@ hotmail@aol @gmail" is not a good way to find email lists for marketing purposes. You should either build your own email list or buy or rent one from a trustworthy source. This will help you avoid spam complaints, legal issues, and poor results. It will also help you reach your target audience, build relationships, and grow your business.
-
-
-
How to Use Email Lists for Marketing Purposes
-
-
Once you have built or acquired an email list that is effective, legal, and ethical, you need to use it wisely for marketing purposes. Here are some tips on how to do that.
-
-
Segment your email list. This means dividing your email list into smaller groups based on criteria such as demographics, interests, behavior, preferences, or stage in the buyer's journey. This will help you tailor your messages to each group and increase their relevance and personalization.
-
Craft your email content. This means writing compelling subject lines, headlines, body copy, calls to action, and signatures that will capture your recipients' attention, interest, desire, and action. You should also use HTML formatting to make your emails look professional, attractive, and easy to read.
-
Optimize your email delivery. This means choosing the best time and frequency to send your emails, avoiding spam filters and blacklists, and ensuring that your emails are responsive and compatible with different devices and platforms. You should also monitor your email performance and metrics such as open rates, click-through rates, bounce rates, unsubscribe rates, and conversions.
-
Nurture your email relationships. This means providing value to your subscribers, building trust and credibility, encouraging feedback and engagement, and rewarding loyalty and referrals. You should also respect your subscribers' privacy and preferences, and comply with the CAN-SPAM Act and other anti-spam laws.
-
-
In conclusion, using email lists for marketing purposes requires careful planning, execution, and evaluation. You should segment your email list, craft your email content, optimize your email delivery, and nurture your email relationships. This will help you achieve your marketing goals and grow your business.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md b/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md
deleted file mode 100644
index f325860288351401e185e050fb71e48c864f04eb..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Mponldll Pes 2013 Download Free !NEW!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-