-
-Offer a free, in-depth high-school-level course, because it costs nothing and can be passed on to help specialists in public instruction and in public health studies, and to help the teachers and students who can improve their Spanish, English, and other languages.
-
-My wife and my mother also learned to speak them, my young daughter already knows four languages, and I have learned several different languages myself. Just to note, my story is special, but I have always been very enthusiastic about my work.
-
-Japanese:
-
-It is a pleasure to learn what follows. Thanks to this class of PDF artwork, please know this: with the conscience of a child in its first years, take a degree that combines high school and university, for openly available privacy and growth, for continuing university education, for public health research, and for teachers and students, that is, the teachers who help them improve Spanish, English, and other languages. My wife and my mother have spoken them, and my daughter knows four of them. 4fefd39f24
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Bigfile.000 For Tomb Raider [BETTER].md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Bigfile.000 For Tomb Raider [BETTER].md
deleted file mode 100644
index aeb4cc82173f54e0ffc03b60ef036bfa0a8595b5..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Bigfile.000 For Tomb Raider [BETTER].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-bigfile.002.tiger free download, bigfile.000.tiger download, ... Tomb Raider/benchmarkmode.docx 18.4 KB, Tomb Raider/bigfile.000.tiger 2 GB, Tomb ... 4d29de3e1b
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/AirMax TV APK The Best Way to Stream Movies and Shows on Android.md b/spaces/1phancelerku/anime-remove-background/AirMax TV APK The Best Way to Stream Movies and Shows on Android.md
deleted file mode 100644
index 5192946b4bf21159a3b12f15c4e09d9c53f2819b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/AirMax TV APK The Best Way to Stream Movies and Shows on Android.md
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
Airmax TV APK: A Media Player App for Android Devices
-
If you are looking for a media player app that can stream your favorite movies, shows, sports, and live TV channels on your Android device, you might want to check out Airmax TV APK. This app is a fully customizable and brandable media player app for Android TV, Android phone, and Android tablet. It supports various formats and protocols, such as HLS, M3U8, RTMP, RTSP, TS, and more. You can also access hundreds of OTT service providers with different plans and packages to suit your needs and budget.
In this article, we will tell you everything you need to know about Airmax TV APK, including its features, how to download and install it, its pros and cons, why you should choose it, and some alternatives you can try. Read on to find out more.
-
What is Airmax TV APK?
-
Airmax TV APK is a media player app that allows you to watch various content on your Android device. It is developed by Airmax TV, a company that provides OTT service solutions for different platforms. You can use this app to stream movies, shows, sports, news, documentaries, music, and live TV channels from different sources. You can also customize the app according to your preferences and brand it with your own logo and name.
-
Features of Airmax TV APK
-
Here are some of the features that make Airmax TV APK stand out from other media player apps:
-
- Customizable and brandable for OTT service providers
-
If you are an OTT service provider or a reseller, you can use Airmax TV APK to create your own branded media player app. You can change the app name, logo, icon, splash screen, background image, colors, fonts, and more. You can also add your own content sources and categories. This way, you can offer your customers a unique and personalized experience.
-
- Supports multiple formats and protocols
-
Airmax TV APK can play various video and audio formats, such as MP4, MKV, AVI, MOV, FLV, MP3, AAC, etc. It can also handle different streaming protocols, such as HLS, M3U8, RTMP, RTSP, TS, etc. You can also use external players like VLC or MX Player if you want.
-
- Easy to use and navigate
-
Airmax TV APK has a simple and user-friendly interface that makes it easy to use and navigate. You can browse through different categories and genres of content with just a few clicks or taps. You can also search for specific titles or keywords using the built-in search function. You can also add your favorite content to your favorites list for quick access.
-
-
How to download and install Airmax TV APK?
-
If you want to download and install Airmax TV APK on your Android device, here are the steps you need to follow:
-
- Requirements and compatibility
-
Before you download and install Airmax TV APK, make sure that your device meets the following requirements:
-
-
Your device must run Android 5.0 or higher.
-
Your device must have at least 1 GB of RAM and 100 MB of free storage space.
-
Your device must have a stable internet connection to stream content.
-
Your device must allow the installation of apps from unknown sources. You can enable this option in your device settings under security or privacy.
-
-
- Steps to download and install
-
Once you have checked the requirements and compatibility, you can follow these steps to download and install Airmax TV APK on your device:
-
-
Go to the official website of Airmax TV APK and click on the download button. You can also use this link to download the app directly: [Download Airmax TV APK].
-
Wait for the download to finish and then locate the APK file in your device's file manager or downloads folder.
-
Tap on the APK file and follow the on-screen instructions to install the app on your device. You may need to grant some permissions to the app during the installation process. (A command-line alternative using adb is sketched just after this list.)
-
Once the installation is complete, you can launch the app from your device's app drawer or home screen.
-
-
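If you would rather install from a computer, a rough alternative is to sideload the file with Android's adb tool. The sketch below is only an illustration: it assumes adb is installed, USB debugging is enabled on the phone, and that the downloaded file is named airmaxtv.apk (your actual file name will likely differ).

```python
# Minimal sketch: sideload a downloaded APK from a computer using adb.
# Assumes adb is on your PATH, USB debugging is enabled on the device,
# and the file name below matches your actual download (it may differ).
import subprocess

APK_PATH = "airmaxtv.apk"  # hypothetical file name

def sideload(apk_path: str) -> None:
    # List connected devices so you can confirm the phone is visible to adb.
    subprocess.run(["adb", "devices"], check=True)
    # -r reinstalls the app if it is already present, keeping its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```

The -r flag simply reinstalls over an existing copy without wiping its data, which is handy when you update the app later.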
- How to activate Airmax TV APK?
-
After you have installed Airmax TV APK on your device, you need to activate it before you can use it. Here are the steps to activate Airmax TV APK:
-
-
Open the app and enter your username and password. If you don't have an account, you can create one on the app's website or contact an OTT service provider that uses Airmax TV APK.
-
After you log in, you will see a list of OTT service providers that are compatible with Airmax TV APK. You can choose one that suits your needs and budget.
-
Once you select an OTT service provider, you will see a list of plans and packages that they offer. You can choose one that gives you access to the content you want to watch.
-
After you select a plan or package, you will see a payment option. You can pay using your credit card, PayPal, or other methods depending on the OTT service provider.
-
Once you make the payment, you will receive an activation code via email or SMS. You need to enter this code in the app to activate your subscription.
-
After you enter the activation code, you can start watching your favorite content on Airmax TV APK.
-
-
Pros and cons of Airmax TV APK
-
Like any other app, Airmax TV APK has its pros and cons. Here are some of them:
-
- Pros
-
-
Airmax TV APK is a customizable and brandable media player app that can help OTT service providers and resellers create their own branded apps.
-
Airmax TV APK supports multiple formats and protocols, making it compatible with various content sources and devices.
-
Airmax TV APK has a simple and user-friendly interface that makes it easy to use and navigate.
-
Airmax TV APK offers high-quality streaming and content from hundreds of OTT service providers with different plans and packages.
-
Airmax TV APK provides customer support and feedback through its website, email, phone, and social media channels.
-
-
- Cons
-
-
Airmax TV APK is not available on the Google Play Store or other official app stores, so you need to download it from its website or other sources.
-
Airmax TV APK requires an activation code to use it, which means you need to pay for a subscription from an OTT service provider that uses Airmax TV APK.
-
Airmax TV APK may not work well with some devices or content sources due to compatibility issues or technical glitches.
-
Airmax TV APK may not be legal in some countries or regions due to copyright or licensing issues.
-
-
Why choose Airmax TV APK?
-
If you are still wondering why you should choose Airmax TV APK over other media player apps, here are some reasons:
-
Benefits of using Airmax TV APK
-
- High-quality streaming and content
-
Airmax TV APK offers high-quality streaming and content from hundreds of OTT service providers with different plans and packages. You can watch movies, shows, sports, news, documentaries, music, and live TV channels from various genres and languages. You can also enjoy HD quality, fast buffering, smooth playback, and subtitles options.
-
- Affordable and flexible plans
-
Airmax TV APK offers affordable and flexible plans and packages that suit your needs and budget. You can choose from different OTT service providers that offer different prices and features. You can also switch between different plans and packages anytime you want. You can also cancel your subscription anytime without any hassle.
-
- Customer support and feedback
-
Airmax TV APK provides customer support and feedback through its website, email, phone, and social media channels. You can contact them anytime you have any questions, issues, or suggestions. They will respond to you as soon as possible and try to resolve your problems. You can also give them your feedback and ratings to help them improve their app and service.
-
Alternatives to Airmax TV APK
-
If you are not satisfied with Airmax TV APK or you want to try other media player apps for Android devices, here are some alternatives you can try:
-
- Other media player apps for Android devices
-
There are many other media player apps for Android devices that you can download and use to watch various content on your device. Some of them are:
-
-
Kodi: Kodi is a free and open-source media player app that can play local and online content from various sources. You can also install add-ons and plugins to enhance its functionality and features.
-
VLC: VLC is a free and cross-platform media player app that can play most video and audio formats and protocols. It also has a built-in equalizer, filters, subtitles, and more.
-
MX Player: MX Player is a popular and powerful media player app that can play almost any video and audio format. It also supports subtitles, gestures, zoom, and more.
-
IPTV Smarters Pro: IPTV Smarters Pro is a media player app that can stream live TV channels, movies, shows, sports, and more from IPTV service providers. It also supports EPG, catch-up, recording, parental control, and more.
-
-
- Comparison table of features and prices
-
To help you compare the features and prices of Airmax TV APK and its alternatives, here is a table that summarizes them:
| App | Features | Price |
| --- | --- | --- |
| Airmax TV APK | Customizable and brandable for OTT service providers; supports multiple formats and protocols; easy to use and navigate; high-quality streaming and content; affordable and flexible plans; customer support and feedback | Varies depending on the OTT service provider |
| Kodi | Free and open-source; plays local and online content from various sources; supports add-ons and plugins | Free |
| VLC | Free and cross-platform; plays most video and audio formats and protocols; supports equalizer, filters, subtitles, etc. | Free |
| MX Player | Popular and powerful; plays almost any video and audio format; supports subtitles, gestures, zoom, etc. | Free, or $5.99/year for the ad-free version |
| IPTV Smarters Pro | Streams live TV channels, movies, shows, sports, etc. from IPTV service providers; supports EPG, catch-up, recording, parental control, etc. | Free, or $29.99/year for the premium version |
Conclusion
-
Airmax TV APK is a media player app that can stream your favorite movies, shows, sports, and live TV channels on your Android device. It is a fully customizable and brandable media player app for Android TV, Android phone, and Android tablet. It supports various formats and protocols, such as HLS, M3U8, RTMP, RTSP, TS, etc. You can also access hundreds of OTT service providers with different plans and packages to suit your needs and budget. In this article, we have told you everything you need to know about Airmax TV APK, including its features, how to download and install it, its pros and cons, why you should choose it, and some alternatives you can try. We hope that this article has helped you decide whether Airmax TV APK is the right app for you or not. If you have any questions, comments, or feedback about Airmax TV APK, feel free to contact us or leave a comment below. We would love to hear from you and help you out. Thank you for reading and happy streaming!
-
FAQs
-
Here are some of the frequently asked questions about Airmax TV APK:
-
Q: Is Airmax TV APK safe and legal?
-
A: Airmax TV APK is safe and legal as long as you download it from its official website or other trusted sources. However, some of the content that you can stream on Airmax TV APK may not be legal in your country or region due to copyright or licensing issues. Therefore, we advise you to use a VPN or a proxy to protect your privacy and security when using Airmax TV APK.
-
Q: How can I update Airmax TV APK?
-
A: Airmax TV APK will notify you when there is a new version available. You can also check for updates manually by going to the app's settings and tapping on the update button. You can then download and install the latest version of Airmax TV APK on your device.
-
Q: How can I contact Airmax TV APK?
-
A: You can contact Airmax TV APK through its website, email, phone, or social media channels. Here are some of the ways you can reach them:
-
-
Website: [Airmax TV APK]
-
Email: support@airmaxtv.com
-
Phone: +1 (888) 123-4567
-
Facebook: [Airmax TV APK]
-
Twitter: [@airmaxtv]
-
Instagram: [@airmaxtv]
-
-
Q: How can I uninstall Airmax TV APK?
-
A: If you want to uninstall Airmax TV APK from your device, you can follow these steps:
-
-
Go to your device's settings and tap on apps or applications.
-
Find and select Airmax TV APK from the list of apps.
-
Tap on uninstall and confirm your action.
-
Wait for the app to be uninstalled from your device.
-
-
Q: What are some of the best OTT service providers that use Airmax TV APK?
-
A: There are hundreds of OTT service providers that use Airmax TV APK to offer their content and services. Some of the best ones are:
-
-
Netflix: Netflix is one of the most popular and leading OTT service providers in the world. It offers a wide range of movies, shows, documentaries, and originals in various genres and languages. You can watch Netflix on Airmax TV APK with a monthly subscription starting from $8.99.
-
Hulu: Hulu is another popular and leading OTT service provider in the US. It offers a variety of movies, shows, sports, news, and originals in various genres and languages. You can watch Hulu on Airmax TV APK with a monthly subscription starting from $5.99.
-
Disney+: Disney+ is a new and fast-growing OTT service provider that offers content from Disney, Pixar, Marvel, Star Wars, National Geographic, and more. You can watch Disney+ on Airmax TV APK with a monthly subscription starting from $6.99.
-
Amazon Prime Video: Amazon Prime Video is an OTT service provider that offers content from Amazon Studios, MGM, Lionsgate, Paramount, Sony, Warner Bros., and more. You can watch Amazon Prime Video on Airmax TV APK with a monthly subscription starting from $8.99 or an annual subscription starting from $79.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bluejacking APK Everything You Need to Know About Bluetooth Hacking.md b/spaces/1phancelerku/anime-remove-background/Bluejacking APK Everything You Need to Know About Bluetooth Hacking.md
deleted file mode 100644
index 4a309da9ea9abdc50853f2bacb92114cba09e244..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bluejacking APK Everything You Need to Know About Bluetooth Hacking.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Bluejacking APK: What Is It and How to Protect Yourself?
-
Introduction
-
Bluetooth is a wireless technology that allows you to connect your devices and accessories without using cables. It can be very convenient and useful, but it can also expose you to some security risks. One of these risks is bluejacking, a Bluetooth attack that involves spamming your device with unsolicited messages.
-
In this article, we will explain what bluejacking is, how it works, and what are the dangers of bluejacking. We will also introduce you to bluejacking APK, a tool that can be used for Bluetooth hacking. Finally, we will give you some tips on how to prevent bluejacking attacks and protect your device.
Bluejacking is a Bluetooth attack in which a hacker spams your device with unsolicited phishing messages.
-
Bluejacking is a term that was coined by a Malaysian IT consultant who used his phone to send messages to other Bluetooth-enabled devices as a prank. The name comes from the combination of Bluetooth and hijack, as the hacker takes over the device's Bluetooth connection without permission.
-
Bluejacking is usually not very harmful, as it does not involve stealing your information or accessing your files. However, it can be annoying, intrusive, and potentially dangerous if the messages contain malicious links or attachments that can infect your device with malware or phishing scams.
-
How does bluejacking work?
-
Bluejacking works by exploiting the Bluetooth connection between two devices.
-
To perform a bluejacking attack, the hacker needs to be within range of your device, which is usually around 10 meters or 32 feet. The hacker then scans the area for Bluetooth devices and tries to connect to yours using bluejacking software. Once connected, the hacker can send you unsolicited messages using the OBEX protocol, which is used for transferring files and contacts over Bluetooth.
-
-
The messages can appear as text, images, sounds, or vCards (virtual business cards). They can also contain a name or a message in the name field, which is often used for bluedating (sending flirtatious messages to strangers). The messages may look harmless or funny, but they may also contain phishing links or malware attachments that can harm your device or steal your information.
-
What are the dangers of bluejacking?
-
Bluejacking can invade your privacy and potentially harm your device.
-
Even though bluejacking does not involve the direct theft of your information like bluesnarfing (another Bluetooth attack that involves stealing your data), it can still pose some threats to your privacy and security. Here are some of the harmful activities that hackers can use bluejacking for:
-
-
Malware: Some bluejacking messages may include a link or an attachment that can infect your device with different types of malware, such as ransomware (which locks your files until you pay a ransom), spyware (which monitors your online activity), or keyloggers (which record your keystrokes).
-
Phishing scams: Some bluejackers may use bluejacking as a way to send you phishing messages, which are designed to trick you into revealing your personal or financial information. For example, they may pretend to be from your bank, your email provider, or a reputable company and ask you to click on a link or enter your login details.
Bluejacking APK: A Tool for Bluetooth Hacking
-
What is bluejacking APK?
-
Bluejacking APK is a software application that can be used for Bluetooth hacking. It is an Android app that allows you to send messages and files to other Bluetooth devices without pairing with them. You can also scan for nearby Bluetooth devices and get information about them, such as their name, address, and class.
-
Bluejacking APK is not available on the Google Play Store, as it violates the platform's terms and conditions, so it can only be downloaded from third-party websites or APK repositories. Be careful when downloading and installing it, as such files may contain malware or viruses that can harm your device or compromise your security.
-
How does bluejacking APK work?
-
Bluejacking APK works by using the Bluetooth connection of your device to communicate with other devices. To use bluejacking APK, you need to enable Bluetooth on your device and grant the app permission to access it. Then, you can use the app to perform various actions, such as:
-
-
Sending messages: You can use bluejacking APK to send text messages, images, sounds, or vCards to other Bluetooth devices. You can also customize the name and message fields of the messages. The messages will appear as notifications on the recipient's device, and they will not be able to reply or block them.
-
Sending files: You can use bluejacking APK to send any type of file to other Bluetooth devices. You can choose the file from your device's storage or use the app's file manager to browse for it. The recipient will receive a notification asking them to accept or reject the file transfer.
-
Scanning devices: You can use bluejacking APK to scan for nearby Bluetooth devices and get information about them. You can see their name, address, class, and signal strength. You can also filter the devices by their type, such as phone, computer, headset, etc. (A minimal scanning sketch follows this list.)
-
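If you are curious what the "scanning devices" idea boils down to in code, the snippet below is a minimal, assumption-based sketch of Bluetooth discovery using the third-party PyBluez library (installed with pip install pybluez). It only lists nearby discoverable devices; it is an illustration, not code taken from Bluejacking APK itself.

```python
# Minimal sketch of Bluetooth device discovery with PyBluez (pip install pybluez).
# This only illustrates the "scanning" idea; it is not Bluejacking APK's code.
import bluetooth

def scan(duration: int = 8):
    # Returns a list of (address, name) tuples for discoverable devices in range.
    devices = bluetooth.discover_devices(duration=duration, lookup_names=True)
    for addr, name in devices:
        print(f"{addr}  {name}")
    return devices

if __name__ == "__main__":
    scan()
```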
-
What are the features of bluejacking APK?
-
Bluejacking APK has some features that make it a powerful tool for Bluetooth hacking. Some of these features are:
-
-
Stealth mode: Bluejacking APK has a stealth mode that allows you to hide your device's name and address from other Bluetooth devices. This way, you can avoid being detected or traced by the recipients of your messages or files.
-
Auto-send: Bluejacking APK has an auto-send feature that allows you to send messages or files automatically to any Bluetooth device that comes within range. You can set the interval and number of messages or files to be sent.
-
Schedule: Bluejacking APK has a schedule feature that allows you to set a specific time and date for sending messages or files to other Bluetooth devices. You can also repeat the schedule daily, weekly, or monthly.
-
Favorites: Bluejacking APK has a favorites feature that allows you to save the devices that you frequently send messages or files to. You can also edit or delete the devices from your favorites list.
-
How to Prevent Bluejacking Attacks
-
Turn off Bluetooth when not in use
-
The simplest and most effective way to prevent bluejacking attacks is to turn off Bluetooth when you are not using it. This will prevent hackers from finding your device and connecting to it. You can turn off Bluetooth from your device's settings or by using a shortcut on your notification panel or control center.
-
Set Bluetooth to "undiscoverable" or "non-discoverable"
-
If you need to keep Bluetooth on for some reason, you can set it to "undiscoverable" or "non-discoverable" mode. This will make your device invisible to other Bluetooth devices, unless you initiate the connection or pair with them. You can set your Bluetooth visibility from your device's settings or by using a third-party app.
-
Exercise caution with messages and emails
-
If you receive a message or an email from an unknown sender, especially if it contains a link or an attachment, do not open it or click on it. It could be a bluejacking message that can harm your device or compromise your security. Delete the message or email and block the sender if possible.
-
Secure your device with a lock
-
Another way to protect your device from bluejacking attacks is to secure it with a lock. You can use a password, a PIN, a pattern, a fingerprint, or a face recognition to lock your device and prevent unauthorized access. You can also enable encryption on your device to protect your data in case of theft or loss.
-
Conclusion
-
Summary of the main points
-
Bluejacking is a Bluetooth attack that involves spamming your device with unsolicited messages. It can be annoying, intrusive, and potentially dangerous if the messages contain malicious links or attachments. Bluejacking APK is a tool that can be used for Bluetooth hacking. It allows you to send messages and files to other Bluetooth devices without pairing with them. You can also scan for nearby Bluetooth devices and get information about them.
-
Call to action
-
To prevent bluejacking attacks, you should turn off Bluetooth when not in use, set Bluetooth to "undiscoverable" or "non-discoverable" mode, exercise caution with messages and emails, and secure your device with a lock. By following these tips, you can protect your device and your privacy from bluejackers.
-
Frequently Asked Questions
-
-
Q: What is the difference between bluejacking and bluesnarfing?
-
A: Bluejacking is a Bluetooth attack that involves spamming your device with unsolicited messages. Bluesnarfing is a Bluetooth attack that involves stealing your data from your device.
-
Q: How can I tell if my device has been bluejacked?
-
A: If your device has been bluejacked, you may notice some signs, such as receiving strange messages or notifications, having files transferred to or from your device without your consent, or having your battery drained faster than usual.
-
Q: Can bluejacking damage my device?
-
A: Bluejacking itself does not damage your device, but it can expose you to malware or phishing scams that can harm your device or steal your information.
-
Q: Is bluejacking illegal?
-
A: Bluejacking is illegal in some countries, such as the UK, where it is considered a form of harassment. In other countries, such as the US, there are no specific laws against bluejacking, but it may violate other laws related to privacy or cybercrime.
-
Q: How can I report bluejacking?
-
A: If you are a victim of bluejacking, you can report it to the authorities or the service provider of the hacker. You can also contact the manufacturer of your device or the developer of the app that you use for Bluetooth communication for assistance.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Descarga Stumble Guys Primera Version APK y Divirtete con tus Amigos.md b/spaces/1phancelerku/anime-remove-background/Descarga Stumble Guys Primera Version APK y Divirtete con tus Amigos.md
deleted file mode 100644
index 183dd6b67760e51f3fa2b2c3eb296b2e20fd6982..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Descarga Stumble Guys Primera Version APK y Divirtete con tus Amigos.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Stumble Guys Primera Version APK: How to Download and Play the Fun Knockout Game on Your Android Device
-
If you are looking for a free and fun alternative to Fall Guys, the popular multiplayer party game, you might want to check out Stumble Guys. This game is a clone of Fall Guys, but exclusively for Android devices. You can download and play Stumble Guys primera version APK from Uptodown, a trusted website that offers safe and verified APK files. In this article, we will show you how to download and install Stumble Guys primera version APK on your Android device, as well as how to play it on PC with BlueStacks, an Android emulator. We will also share some tips and tricks to help you win your matches in Stumble Guys.
Stumble Guys is a massively multiplayer party knockout game with up to 32 players online. The objective is to survive round after round of chaotic obstacle courses and be the last one standing. You can run, jump, dash, slide, and bump into other players as you try to avoid falling or getting eliminated. The game is inspired by Fall Guys, but has its own unique style and design.
-
Features and gameplay of Stumble Guys
-
Stumble Guys has many features that make it an entertaining and addictive game. Some of these features are:
-
-
17 unique obstacle courses that test your skills and reflexes
-
Battle Royale online multiplayer mode that pits you against other players from around the world
-
Party mode that lets you play with your friends in private matches
-
Physics-based havoc that creates hilarious and unpredictable situations
-
Colorful and crazy graphics that add to the fun atmosphere
-
Unlockable outfits and emotes that let you customize your character
-
Tons of hilarious fails that make you laugh even when you lose
-
Lots of different levels that keep the game fresh and exciting
-
-
The gameplay of Stumble Guys is simple but challenging. You have to control your character with a virtual joystick and a jump button. You have to navigate through various obstacles and hazards, such as spinning platforms, swinging balls, giant hammers, slippery slides, moving walls, and more. You have to be careful not to fall off the edge or get hit by anything that can knock you out. You also have to watch out for other players who can push you or block your way. The last player standing at the end of each round wins.
-
-
How to download and install Stumble Guys primera version APK?
-
Steps to download the APK file from Uptodown
-
If you want to play Stumble Guys on your Android device, you need to download the APK file from Uptodown. Uptodown is a website that offers safe and verified APK files for various apps and games. Here are the steps to download the APK file from Uptodown:
-
-
Go to [Uptodown] on your browser or scan the QR code on your phone.
-
Search for Stumble Guys in the search bar or go to [this link].
-
Tap on the green Download button and choose the version you want to download. The latest version is 0.28, which was updated on June 16, 2023.
-
Wait for the download to finish and locate the APK file in your device's storage. (An optional integrity check you can run on the file is sketched after this list.)
-
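Before installing any APK that comes from outside the Play Store, it is a good habit to check the file's integrity. The sketch below computes the file's SHA-256 hash so you can compare it with a checksum published by the download page, if one is provided; the file name and expected value are placeholders, not data provided by Uptodown.

```python
# Minimal sketch: compute a SHA-256 checksum of a downloaded APK so it can be
# compared against a hash published by the download source (if one exists).
# The file name and EXPECTED value below are placeholders, not real values.
import hashlib

APK_PATH = "stumble-guys.apk"  # hypothetical file name
EXPECTED = None                # paste the published SHA-256 here, if any

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    if EXPECTED:
        print("Match" if actual == EXPECTED.lower() else "Mismatch - do not install")
```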
-
Steps to install the APK file on your Android device
-
Before you can install the APK file on your Android device, you need to allow the installation of apps from unknown sources. Android blocks such installs by default as a security measure against malicious apps, so you have to enable the option manually. Here are the steps to do so:
-
-
Go to Settings on your device and tap on Security or Privacy.
-
Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
-
Confirm your choice by tapping OK or Allow.
-
-
Now you can install the APK file on your device. Here are the steps to do so:
-
-
Locate the APK file in your device's storage and tap on it.
-
Tap on Install and wait for the installation to complete.
-
Tap on Open or Launch to start playing Stumble Guys.
-
-
How to play Stumble Guys on PC with BlueStacks?
-
Benefits of playing Stumble Guys on PC with BlueStacks
-
If you want to enjoy Stumble Guys on a bigger screen and with better controls, you can play it on PC with BlueStacks. BlueStacks is an Android emulator that lets you run Android apps and games on your PC. Some of the benefits of playing Stumble Guys on PC with BlueStacks are:
-
-
You can use your keyboard and mouse to control your character more easily and precisely.
-
You can customize your key mapping and sensitivity according to your preference.
-
You can play Stumble Guys in full-screen mode and with high-resolution graphics.
-
You can record your gameplay and share it with your friends or online platforms.
-
You can use multiple instances to play Stumble Guys with different accounts or modes simultaneously.
-
-
Steps to download and install BlueStacks and Stumble Guys on PC
-
To play Stumble Guys on PC with BlueStacks, you need to download and install both BlueStacks and Stumble Guys on your PC. Here are the steps to do so:
-
-
Go to [BlueStacks] website and click on the Download BlueStacks button.
-
Wait for the download to finish and run the installer file.
-
Follow the instructions on the screen to complete the installation process.
-
Launch BlueStacks and sign in with your Google account or create a new one.
-
Go to [Uptodown] website on BlueStacks browser or scan the QR code on your phone.
-
Search for Stumble Guys in the search bar or go to [this link].
-
Tap on the green Download button and choose the version you want to download. The latest version is 0.28, which was updated on June 16, 2023.
-
Wait for the download to finish and locate the APK file in BlueStacks' storage.
-
Right-click on the APK file and choose Open with BlueStacks APK Installer.
-
Wait for the installation to complete and click on Stumble Guys icon on BlueStacks home screen.
-
-
Tips and tricks to win Stumble Guys matches
-
Configure your controls before playing
-
Before you start playing Stumble Guys, you should configure your controls according to your preference. You can do this by going to Settings > Controls in the game menu. You can adjust the size, position, opacity, and sensitivity of the virtual joystick and jump button. You can also enable or disable vibration, sound effects, music, and notifications. You can also change the language of the game from English to Spanish, Portuguese, French, German, Italian, Russian, Turkish, Arabic, Indonesian, or Vietnamese.
-
Use your character's physics to your advantage
-
Your character in Stumble Guys has a realistic physics system that affects its movement and interaction with other objects. You can use this physics system to your advantage by doing some tricks such as:
-
-
Dashing forward by tapping twice on the joystick. This will give you a boost of speed that can help you overcome some obstacles or reach some platforms.
-
Sliding down by holding the jump button while moving. This will make you slide on the ground or on some surfaces, which can help you avoid some obstacles or gain some momentum.
-
Bumping into other players by running into them or jumping on them. This will make them stumble or fall, which can give you an advantage or a disadvantage depending on the situation. You can also use this to cooperate with your friends or sabotage your enemies.
-
-
Learn the maps and shortcuts
-
Stumble Guys has 17 different maps that vary in difficulty and design. Each map has its own obstacles, hazards, and secrets. You should learn the maps and their features by playing them repeatedly and observing how they work. You should also look for shortcuts and alternative routes that can save you time or help you avoid some dangers. For example, you can jump over some gaps, use some ramps, or go through some hidden passages. However, be careful not to fall into traps or dead ends that can cost you the game.
-
Customize your outfit and emotes
-
One of the fun aspects of Stumble Guys is that you can customize your character's outfit and emotes. You can unlock various outfits and emotes by playing the game and earning coins. You can also buy some outfits and emotes with real money. You can mix and match different parts of your outfit, such as head, body, legs, and accessories. You can also choose different emotes, such as dance, wave, laugh, cry, and more. You can use your outfit and emotes to express your personality, style, mood, or humor in the game.
-
Conclusion
-
Stumble Guys is a fun and addictive multiplayer party knockout game that you can play on your Android device or on your PC with BlueStacks. You can download and install Stumble Guys primera version APK from Uptodown, a website that offers safe and verified APK files. You can also follow our tips and tricks to improve your skills and chances of winning in Stumble Guys. We hope you enjoy playing Stumble Guys and have a great time with your friends or other players online.
-
FAQs
-
Q: Is Stumble Guys free to play?
-
A: Yes, Stumble Guys is free to play. However, it contains ads and in-app purchases that can enhance your gaming experience.
-
Q: Is Stumble Guys available for iOS devices?
-
A: No, Stumble Guys is not available for iOS devices at the moment. It is only available for Android devices.
-
Q: How many players can play Stumble Guys online?
-
A: Stumble Guys supports up to 32 players online in each match.
-
Q: Can I play Stumble Guys offline?
-
A: No. Stumble Guys is an online game and requires an internet connection to play; there is no offline mode.
-
Q: Can I play Stumble Guys with my friends?
-
A: Yes, you can play Stumble Guys with your friends in party mode. You can create a private match and invite your friends to join by sharing a code.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Your Love Has Taken Over Me MP3 - The Best Gospel Song by Frank Edwards.md b/spaces/1phancelerku/anime-remove-background/Download Your Love Has Taken Over Me MP3 - The Best Gospel Song by Frank Edwards.md
deleted file mode 100644
index 6abde3f8e3ac243202b4cac82984421e55ec4c4f..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Your Love Has Taken Over Me MP3 - The Best Gospel Song by Frank Edwards.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
How to Download "Your Love Has Taken Over Me" MP3 by Frank Edwards
-
If you are looking for a song that will inspire you to trust in God's love and protection, you might want to download "Your Love Has Taken Over Me" by Frank Edwards. This is a gospel song that celebrates God's faithfulness and goodness in every situation. In this article, we will show you how to download this song from different sources, so you can enjoy it anytime and anywhere.
"Your Love Has Taken Over Me" is a song by Frank Edwards, a Nigerian gospel singer and producer. The song was released in 2016 as part of his album "Frankincense". The song expresses gratitude to God for His love that has taken over the singer's life. The singer declares that he depends on God and has confidence in Him, and that God covers him under His canopy and gives him security. The song also proclaims that God is not a deceiver, but a blesser who makes all things work together for good.
-
Why should you download it?
-
This song is a great way to remind yourself of God's love and power in your life. It can uplift your spirit and encourage you to trust in God's promises. It can also help you worship God and praise Him for His goodness and mercy. The song has a catchy melody and a lively beat that will make you want to dance and sing along. The song is also available in different formats, such as MP3, video, and lyrics, so you can choose the one that suits your preference.
-
How to download the song from different sources
-
From PraiseZion.com
-
PraiseZion.com is a website that offers free gospel music downloads from various artists. You can find "Your Love Has Taken Over Me" by Frank Edwards on this website by following these steps:
-
Step 1: Visit the website
-
Go to [PraiseZion.com] on your browser. You will see a homepage with different categories of gospel music, such as Nigerian Gospel Music, Foreign Gospel Music, Gospel Mixtapes, etc.
-
Step 2: Search for the song
-
Type "Frank Edwards Under The Canopy" in the search box at the top right corner of the homepage. You will see a list of results related to your search query. Click on the one that says "Song Mp3 Download: Frank Edwards - Under The Canopy".
-
-
Step 3: Click on the download link
-
You will be directed to a page with more information about the song, such as the lyrics, video, and download link. Scroll down to find the download link that says "Download Mp3 Here". Click on it and wait for the download to start. You can also watch the video or read the lyrics of the song on this page.
-
From Genius.com
-
Genius.com is a website that provides lyrics and annotations for various songs. You can find "Your Love Has Taken Over Me" by Frank Edwards on this website by following these steps:
-
Step 1: Visit the website
-
Go to [Genius.com] on your browser. You will see a homepage with different genres of music, such as Pop, Hip-Hop, Rock, etc.
-
Step 2: Search for the song
-
Type "Frank Edwards Under The Canopy" in the search box at the top of the homepage. You will see a list of results related to your search query. Click on the one that says "Frank Edwards - Under The Canopy".
-
Step 3: Click on the play button
-
You will be directed to a page with the lyrics and annotations of the song. You will also see a play button at the top right corner of the page. Click on it and wait for the song to load. You can also read the lyrics and annotations of the song on this page.
-
Step 4: Right-click on the audio and save as MP3
-
Once the song is playing, you can right-click on the audio and choose "Save audio as" from the menu. You will be prompted to choose a location and a name for the MP3 file. Click on "Save" and wait for the download to finish.
-
From YouTube.com
-
YouTube.com is a website that hosts videos from various creators and channels. You can find "Your Love Has Taken Over Me" by Frank Edwards on this website by following these steps:
-
Step 1: Visit the website
-
Go to [YouTube.com] on your browser. You will see a homepage with different categories of videos, such as Music, Gaming, News, etc.
-
Step 2: Search for the song
-
Type "Frank Edwards Under The Canopy" in the search box at the top of the homepage. You will see a list of results related to your search query. Click on the one that says "Frank Edwards - Under The Canopy (Official Music Video)".
-
Step 3: Copy the video URL
-
You will be directed to a page with the video and some information about it, such as the title, description, views, likes, etc. You will also see a URL in the address bar of your browser. This is the link to the video. Copy it by selecting it and pressing Ctrl+C or right-clicking and choosing "Copy".
-
Step 4: Paste the URL into a YouTube to MP3 converter
-
Go to a YouTube to MP3 converter website, such as [ytmp3.cc]. You will see a box where you can paste the URL of the video you want to convert. Paste it by pressing Ctrl+V or right-clicking and choosing "Paste". Then, click on "Convert".
-
Step 5: Download the MP3 file
-
You will see a page with a download link for the MP3 file. Click on it and wait for the download to start. You can also choose to download the video or edit it before downloading.
-
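If you prefer working from the command line instead of a web converter, the open-source yt-dlp tool can extract the audio track directly. The short sketch below simply drives the yt-dlp command from Python; it assumes yt-dlp and FFmpeg are installed, and the URL is a placeholder for the video link you copied in Step 3.

```python
# Minimal sketch: extract MP3 audio from a YouTube URL with the yt-dlp CLI.
# Assumes yt-dlp and FFmpeg are installed; the URL below is a placeholder for
# the video link copied in Step 3.
import subprocess

VIDEO_URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder

def download_mp3(url: str) -> None:
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3", url],
        check=True,  # raise an error if yt-dlp exits with a failure code
    )

if __name__ == "__main__":
    download_mp3(VIDEO_URL)
```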
Conclusion
-
Summary of the main points
-
In this article, we have shown you how to download "Your Love Has Taken Over Me" by Frank Edwards from different sources, such as PraiseZion.com, Genius.com, and YouTube.com. This is a gospel song that celebrates God's love and protection in every situation. It can inspire you to trust in God's promises and worship Him for His goodness and mercy.
-
Call to action
-
If you have not downloaded this song yet, we encourage you to do so now and enjoy its uplifting message and melody. You can also share it with your friends and family who might need some encouragement and hope in their lives. We hope you have found this article helpful and informative. Thank you for reading!
FAQs
-
Q: Who is Frank Edwards?
-
A: Frank Edwards is a Nigerian gospel singer and producer who has won several awards and recognition for his music.
-
Q: What is the name of his album that contains "Your Love Has Taken Over Me"?
-
A: The name of his album is "Frankincense", which was released in 2016.
-
Q: What are some other songs by Frank Edwards that you can download?
-
A: Some other songs by Frank Edwards that you can download are "Mma Mma", "Okaka", "I See Him", "Miracle Rain", etc.
-
Q: How can you support Frank Edwards and his music?
-
A: You can support Frank Edwards and his music by following him on his social media platforms, such as Facebook, Twitter, Instagram, etc. You can also buy his albums or songs from online stores, such as iTunes, Amazon, Spotify, etc. You can also attend his concerts or events if he is performing near you.
-
Q: How can you learn more about gospel music and its benefits?
-
A: You can learn more about gospel music and its benefits by reading articles, books, blogs, magazines, etc. that talk about the history, culture, genres, artists, and impact of gospel music. You can also listen to gospel radio stations or podcasts that feature gospel music and interviews with gospel musicians. You can also join gospel music communities or groups online or offline that share your passion and interest in gospel music.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dummy Images for Any Project - Free and High-Quality.md b/spaces/1phancelerku/anime-remove-background/Dummy Images for Any Project - Free and High-Quality.md
deleted file mode 100644
index fb92cf1024b84582c3bd05832d66397569e05462..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dummy Images for Any Project - Free and High-Quality.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
How to Download Dummy Images for Your Web Design Projects
-
If you are a web designer, you know how important it is to have good images for your website. Images can make or break your design, attract or repel your visitors, and convey or confuse your message. However, finding and creating the perfect images for your website can be challenging and time-consuming. Sometimes, you may not have the final images ready when you are working on your layout, or you may want to experiment with different options before committing to one.
That's where dummy images come in handy. Dummy images are placeholder images that you can use to fill in the gaps in your web design projects until you have the final images ready. They can help you to visualize how your layout will look, test different sizes and formats, and avoid delays in your design process.
-
In this article, we will show you how to download dummy images for your web design projects using some of the best online tools available. We will also give you some tips on how to use them effectively and replace them with real ones when you are done.
-
What are Dummy Images and Why Use Them?
-
Dummy images are placeholder images that you can use to fill in the gaps in your web design projects until you have the final images ready.
-
Dummy images are exactly what they sound like – they’re temporary, universal images that temporarily replace graphics or text on a webpage. They’re important for designers and developers who want to present a design concept to their client before finalizing the layout and content.
-
Dummy images can be anything from a solid color block, a random photo, a text overlay, or a custom image that matches your theme or style. You can create them yourself using an image editor, or use one of the many online tools that can generate them for you.
-
Dummy images can help you to visualize how your layout will look, test different sizes and formats, and avoid delays in your design process.
-
Using dummy images has many benefits for web designers. Here are some of them:
-
-
They allow you to see how your layout will look with different types of images, such as photos, illustrations, icons, logos, etc.
-
They help you to test how your layout will respond to different image sizes and formats, such as landscape, portrait, square, circle, etc.
-
They enable you to experiment with different image styles and effects, such as grayscale, blur, opacity, etc.
-
They save you time and hassle by allowing you to work on your layout without waiting for the final images to be ready or approved.
-
They prevent you from using low-quality or inappropriate images that may ruin your design or cause legal issues.
-
-
How to Find and Download Dummy Images Online
-
There are many online tools that can help you generate and download dummy images for free. Here are some of the best ones:
-
Lorem Picsum: The Lorem Ipsum for Photos
-
Lorem Picsum is a simple and elegant tool that allows you to download random photos from the popular online photo platform Unsplash. You can specify the dimensions and format of the photos you want, or let the tool pick a random photo for you. You can also apply blur and grayscale effects to your photos. To use Lorem Picsum, simply visit their website and enter the URL of the image you want, such as https://picsum.photos/200/300. You can also fetch metadata for a whole page of images at once through the list API, such as https://picsum.photos/v2/list?limit=10.
-
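To show how this fits into a workflow, here is a small sketch that downloads a few Lorem Picsum placeholders at fixed sizes using the URL pattern described above. It assumes the third-party requests library is installed, and the dimensions are arbitrary examples; swap in the dimensions of your final images so the placeholders behave like the real thing in your layout.

```python
# Minimal sketch: download a few Lorem Picsum placeholders at fixed sizes.
# Uses the URL pattern described above; requires the requests library.
import requests

SIZES = [(200, 300), (300, 200), (400, 400)]  # arbitrary example dimensions

for width, height in SIZES:
    url = f"https://picsum.photos/{width}/{height}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # stop if the service returns an error
    filename = f"dummy_{width}x{height}.jpg"
    with open(filename, "wb") as f:
        f.write(response.content)
    print(f"Saved {filename} ({len(response.content)} bytes)")
```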
-
Placeholder.com: A Simple and Versatile Image Generator
-
Placeholder.com is a handy tool that allows you to create and download dummy images of any size, color, and text. You can use it to generate solid color blocks, gradients, patterns, text overlays, and more. You can also customize the font, size, alignment, and color of the text. To use Placeholder.com, simply visit their website and enter the URL of the image you want, such as https://via.placeholder.com/300x200.png/09f/fff?text=Dummy+Image. You can also use their API to generate images dynamically in your code.
-
LoremFlickr: A Flickr-Based Image Generator
-
LoremFlickr is a useful tool that allows you to download random photos from the popular online photo platform Flickr. You can specify the dimensions, format, and keyword of the photos you want, or let the tool choose them for you. You can also add filters, blur, and grayscale effects to your photos. To use LoremFlickr, simply visit their website and enter the URL of the image you want, such as https://loremflickr.com/320/240/dog. You can also download multiple images at once by using the g feature, such as https://loremflickr.com/g/320/240/dog/all.
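To grab a handful of keyword-based placeholders in one go, you can loop over the topics your layout needs. The keyword list in this Python sketch is only an example, and the URL layout follows the /width/height/keyword pattern shown above:

import requests

# Hypothetical topics; swap in the subjects your mockup actually needs.
keywords = ["dog", "food", "travel"]
for keyword in keywords:
    url = f"https://loremflickr.com/320/240/{keyword}"
    image_bytes = requests.get(url, timeout=10).content
    with open(f"dummy-{keyword}-320x240.jpg", "wb") as f:
        f.write(image_bytes)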
-
Dummy Image Generator: A Customizable Image Tool
-
Dummy Image Generator is a powerful tool that allows you to create and download dummy images of any size, color, shape, and text. You can use it to generate circles, squares, triangles, stars, hearts, and more. You can also customize the background color, foreground color, border color, border width, font family, font size, font style, font color, and text content of your images. To use Dummy Image Generator, simply visit their website and enter the parameters of the image you want in the form. You can also use their API to generate images dynamically in your code.
-
How to Use Dummy Images in Your Web Design Projects
-
Once you have downloaded your dummy images, you can use them in your web design projects in various ways. Here are some tips:
-
Use the same dimensions and formats as your final images
-
One of the main purposes of using dummy images is to test how your layout will look with different types of images. Therefore, it is important to use dummy images that have the same dimensions and formats as your final images. This will help you to avoid any surprises or errors when you replace them with real ones. For example, if your final images are 300x200 pixels in JPEG format, you should use dummy images that are 300x200 pixels in JPEG format as well.
-
Use descriptive file names and alt text for your dummy images
-
Another purpose of using dummy images is to remind yourself and others what kind of images you need for your web design projects. Therefore, it is helpful to use descriptive file names and alt text for your dummy images. This will help you to keep track of what each image is supposed to represent and what content it should convey. For example, if your dummy image is a placeholder for a logo of a company called ABC Inc., you could name it abc-logo.jpg and use alt text like "Logo of ABC Inc."
-
Replace your dummy images with real ones as soon as possible
-
The final purpose of using dummy images is to speed up your web design process by allowing you to work on your layout without waiting for the final images to be ready or approved. However, you should always remember to replace your dummy images with real ones as soon as possible. This will help you to avoid any confusion or misunderstanding with your clients or users. It will also help you to improve the quality and credibility of your website.
-
Conclusion
-
Dummy images are a useful way to create realistic mockups and prototypes for your web design projects. They can save you time, hassle, and frustration. However, you should always remember to replace them with high-quality images before launching your website.
-
In this article, we have shown you how to download dummy images for your web design projects using some of the best online tools available. We have also given you some tips on how to use them effectively and replace them with real ones when you are done. We hope you have found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.
-
FAQs
-
What are dummy images?
-
Dummy images are placeholder images that you can use to fill in the gaps in your web design projects until you have the final images ready.
-
Why use dummy images?
-
Dummy images can help you to visualize how your layout will look, test different sizes and formats, and avoid delays in your design process.
-
How to download dummy images?
-
You can download dummy images from various online tools that can generate them for you, such as Lorem Picsum, Placeholder.com, LoremFlickr, and Dummy Image Generator.
-
How to use dummy images?
-
You can use dummy images in your web design projects by inserting them in your code or image editor. You should use the same dimensions and formats as your final images, use descriptive file names and alt text for your dummy images, and replace them with real ones as soon as possible.
-
Where to find high-quality images for your website?
-
You can find high-quality images for your website from various online sources, such as stock photo websites, free image websites, or your own photography. You should always make sure that the images you use are relevant, appropriate, and legal for your website.
-
-
\ No newline at end of file
diff --git a/spaces/7hao/bingo/src/components/providers.tsx b/spaces/7hao/bingo/src/components/providers.tsx
deleted file mode 100644
index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000
--- a/spaces/7hao/bingo/src/components/providers.tsx
+++ /dev/null
@@ -1,15 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { ThemeProvider as NextThemesProvider } from 'next-themes'
-import { ThemeProviderProps } from 'next-themes/dist/types'
-
-import { TooltipProvider } from '@/components/ui/tooltip'
-
-export function Providers({ children, ...props }: ThemeProviderProps) {
- return (
-
- {children}
-
- )
-}
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers.py b/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers.py
deleted file mode 100644
index 9835dc0f0dd66a7ef3517101180ec2c54eb6011d..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/uvr5_pack/lib_v5/layers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from uvr5_pack.lib_v5 import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/diffusion/_explorers.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/diffusion/_explorers.py
deleted file mode 100644
index 0bf4ca57b63f5f9308bd1178ddbde5d8f06748e5..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/grids/diffusion/_explorers.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import treetable as tt
-
-from .._base_explorers import BaseExplorer
-
-
-class DiffusionExplorer(BaseExplorer):
- eval_metrics = ["sisnr", "visqol"]
-
- def stages(self):
- return ["train", "valid", "valid_ema", "evaluate", "evaluate_ema"]
-
- def get_grid_meta(self):
- """Returns the list of Meta information to display for each XP/job.
- """
- return [
- tt.leaf("index", align=">"),
- tt.leaf("name", wrap=140),
- tt.leaf("state"),
- tt.leaf("sig", align=">"),
- ]
-
- def get_grid_metrics(self):
- """Return the metrics that should be displayed in the tracking table.
- """
- return [
- tt.group(
- "train",
- [
- tt.leaf("epoch"),
- tt.leaf("loss", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "valid",
- [
- tt.leaf("loss", ".3%"),
- # tt.leaf("loss_0", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "valid_ema",
- [
- tt.leaf("loss", ".3%"),
- # tt.leaf("loss_0", ".3%"),
- ],
- align=">",
- ),
- tt.group(
- "evaluate", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"),
- tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"),
- tt.leaf("rvm_3", ".4f"), ], align=">"
- ),
- tt.group(
- "evaluate_ema", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"),
- tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"),
- tt.leaf("rvm_3", ".4f")], align=">"
- ),
- ]
diff --git a/spaces/AIFILMS/StyleGANEX/utils/__init__.py b/spaces/AIFILMS/StyleGANEX/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/pyglet_platform.py b/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/pyglet_platform.py
deleted file mode 100644
index a70cf7b659bc85a92f6c9c8ebcc360662a068507..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/pyglet_platform.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from pyrender.constants import (TARGET_OPEN_GL_MAJOR, TARGET_OPEN_GL_MINOR,
- MIN_OPEN_GL_MAJOR, MIN_OPEN_GL_MINOR)
-from .base import Platform
-
-import OpenGL
-
-
-__all__ = ['PygletPlatform']
-
-
-class PygletPlatform(Platform):
- """Renders on-screen using a 1x1 hidden Pyglet window for getting
- an OpenGL context.
- """
-
- def __init__(self, viewport_width, viewport_height):
- super(PygletPlatform, self).__init__(viewport_width, viewport_height)
- self._window = None
-
- def init_context(self):
- import pyglet
- pyglet.options['shadow_window'] = False
-
- try:
- pyglet.lib.x11.xlib.XInitThreads()
- except Exception:
- pass
-
- self._window = None
- confs = [pyglet.gl.Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- pyglet.gl.Config(depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- pyglet.gl.Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR),
- pyglet.gl.Config(depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR)]
- for conf in confs:
- try:
- self._window = pyglet.window.Window(config=conf, visible=False,
- resizable=False,
- width=1, height=1)
- break
- except pyglet.window.NoSuchConfigException as e:
- pass
-
- if not self._window:
- raise ValueError(
- 'Failed to initialize Pyglet window with an OpenGL >= 3+ '
- 'context. If you\'re logged in via SSH, ensure that you\'re '
- 'running your script with vglrun (i.e. VirtualGL). The '
- 'internal error message was "{}"'.format(e)
- )
-
- def make_current(self):
- if self._window:
- self._window.switch_to()
-
- def make_uncurrent(self):
- try:
- import pyglet
- pyglet.gl.xlib.glx.glXMakeContextCurrent(self._window.context.x_display, 0, 0, None)
- except Exception:
- pass
-
- def delete_context(self):
- if self._window is not None:
- self.make_current()
- cid = OpenGL.contextdata.getContext()
- try:
- self._window.context.destroy()
- self._window.close()
- except Exception:
- pass
- self._window = None
- OpenGL.contextdata.cleanupContext(cid)
- del cid
-
- def supports_framebuffers(self):
- return True
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/laplace_var.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/laplace_var.py
deleted file mode 100644
index ec6f5f8d877195e7ee512d7e9f6f8a879d3ef32c..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/metrics/laplace_var.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import scipy.ndimage
-
-def laplace_var(x):
- return scipy.ndimage.laplace(x).var()
diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/model.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/model.py
deleted file mode 100644
index d5069bad0d9311e6e2c082a63eca165f7a908675..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/model.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import torch
-import torch.nn as nn
-
-
-class VGGishish(nn.Module):
-
- def __init__(self, conv_layers, use_bn, num_classes):
- '''
- Mostly from
- https://pytorch.org/vision/0.8/_modules/torchvision/models/vgg.html
- '''
- super().__init__()
- layers = []
- in_channels = 1
-
- # a list of channels with 'MP' (maxpool) from config
- for v in conv_layers:
- if v == 'MP':
- layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
- else:
- conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1, stride=1)
- if use_bn:
- layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
- else:
- layers += [conv2d, nn.ReLU(inplace=True)]
- in_channels = v
- self.features = nn.Sequential(*layers)
-
- self.avgpool = nn.AdaptiveAvgPool2d((5, 10))
-
- self.flatten = nn.Flatten()
- self.classifier = nn.Sequential(
- nn.Linear(512 * 5 * 10, 4096),
- nn.ReLU(True),
- nn.Linear(4096, 4096),
- nn.ReLU(True),
- nn.Linear(4096, num_classes)
- )
-
- # weight init
- self.reset_parameters()
-
- def forward(self, x):
- # adding channel dim for conv2d (B, 1, F, T) <-
- x = x.unsqueeze(1)
- # backbone (B, 1, 5, 53) <- (B, 1, 80, 860)
- x = self.features(x)
- # adaptive avg pooling (B, 1, 5, 10) <- (B, 1, 5, 53) – if no MP is used as the end of VGG
- x = self.avgpool(x)
- # flatten
- x = self.flatten(x)
- # classify
- x = self.classifier(x)
- return x
-
- def reset_parameters(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- nn.init.constant_(m.bias, 0)
-
-
-if __name__ == '__main__':
- num_classes = 309
- inputs = torch.rand(3, 80, 848)
- conv_layers = [64, 64, 'MP', 128, 128, 'MP', 256, 256, 256, 'MP', 512, 512, 512, 'MP', 512, 512, 512]
- # conv_layers = [64, 'MP', 128, 'MP', 256, 256, 'MP', 512, 512, 'MP']
- model = VGGishish(conv_layers, use_bn=False, num_classes=num_classes)
- outputs = model(inputs)
- print(outputs.shape)
diff --git a/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/seed_llama_flask.py b/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/seed_llama_flask.py
deleted file mode 100644
index 55b72dc3483db60761d2f53d5fc2309b065c4380..0000000000000000000000000000000000000000
--- a/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/seed_llama_flask.py
+++ /dev/null
@@ -1,231 +0,0 @@
-import hydra
-
-import pyrootutils
-import os
-import torch
-
-from omegaconf import OmegaConf
-from flask import Flask, request
-import json
-from typing import Optional
-import transformers
-from dataclasses import dataclass, field
-import io
-import base64
-from PIL import Image
-import gc
-
-pyrootutils.setup_root(__file__, indicator=".project-root", pythonpath=True)
-
-BOI_TOKEN = ''
-EOI_TOKEN = ''
-IMG_TOKEN = ''
-
-IMG_FLAG = ''
-NUM_IMG_TOKNES = 32
-NUM_IMG_CODES = 8192
-
-app = Flask(__name__)
-
-
-def decode_image(encoded_image: str) -> Image:
- decoded_bytes = base64.b64decode(encoded_image.encode('utf-8'))
- buffer = io.BytesIO(decoded_bytes)
- image = Image.open(buffer)
- return image
-
-
-def encode_image(image: Image.Image, format: str = 'PNG') -> str:
- with io.BytesIO() as buffer:
- image.save(buffer, format=format)
- encoded_image = base64.b64encode(buffer.getvalue()).decode('utf-8')
- return encoded_image
-
-
-@dataclass
-class Arguments:
- image_transform: Optional[str] = field(default=None, metadata={"help": "config path of image transform"})
- tokenizer: Optional[str] = field(default=None, metadata={"help": "config path of tokenizer used to initialize tokenizer"})
- model: Optional[str] = field(default=None, metadata={"help": "config path of llm"})
- port: Optional[str] = field(default=80, metadata={"help": "network port"})
- llm_device: Optional[str] = field(default='cuda:0', metadata={"help": "llm device"})
- tokenizer_device: Optional[str] = field(default='cuda:0', metadata={"help": "tokenizer device"})
- offload_encoder: Optional[bool] = field(default=False, metadata={"help": "offload image tokenizer"})
- offload_decoder: Optional[bool] = field(default=False, metadata={"help": "offload image tokenizer"})
-
-
-parser = transformers.HfArgumentParser(Arguments)
-args, = parser.parse_args_into_dataclasses()
-
-
-class LLMService:
- def __init__(self, args) -> None:
- image_transform_cfg = OmegaConf.load(args.image_transform)
- tokenizer_cfg = OmegaConf.load(args.tokenizer)
- model_cfg = OmegaConf.load(args.model)
- self.image_id_shift = 32000
-
- self.image_transform = hydra.utils.instantiate(image_transform_cfg)
-
- model = hydra.utils.instantiate(model_cfg, device_map=args.llm_device).eval()
- self.model = model
- print(model.get_memory_footprint())
-
- self.tokenizer = hydra.utils.instantiate(tokenizer_cfg, device=args.tokenizer_device, load_diffusion=True)
- if args.offload_encoder:
- self.tokenizer.image_tokenizer.model.visual_encoder.to('cpu')
- if args.offload_decoder:
- self.tokenizer.image_tokenizer.diffusion_model.to('cpu')
-
- # model = hydra.utils.instantiate(model_cfg, torch_dtype=torch.float16)
- # self.model = model.eval().to(args.llm_device)
- self.llm_device = args.llm_device
- self.tokenizer_device = args.tokenizer_device
- self.offload_encoder = args.offload_encoder
- self.offload_decoder = args.offload_decoder
- self.boi_token_id = self.tokenizer(BOI_TOKEN, add_special_tokens=False).input_ids[0]
- self.eoi_token_id = self.tokenizer(EOI_TOKEN, add_special_tokens=False).input_ids[0]
- print('Init Done...')
-
-
-service = LLMService(args)
-
-
-@app.route('/generate', methods=['GET', 'POST'])
-def generate():
-
- request_info = request.get_json()
-
- text_list = request_info['text'].split(IMG_FLAG)
- image_list = request_info['images']
- temperature = request_info.get('temperature', 0.7)
- num_beams = request_info.get('num_beams', 1)
- max_new_tokens = request_info.get('max_new_tokens', 256)
- top_p = request_info.get('top_p', 0.5)
- force_boi = request_info.get('force_boi', False)
-
- assert len(text_list) == len(image_list) + 1
-
- if len(image_list) > 0:
- images_tensor_list = []
- images_tensor_indices = []
- images_ids_list = []
- images_ids_indices = []
- for idx, image_item in enumerate(image_list):
- if isinstance(image_item, str):
- image = decode_image(image_item)
- image_tensor = service.image_transform(image)
- images_tensor_list.append(image_tensor)
- images_tensor_indices.append(idx)
- else:
- images_ids_list.append(image_item)
- images_ids_indices.append(idx)
-
- if len(images_tensor_list) > 0:
- images_tensor = torch.stack(images_tensor_list, dim=0).to(service.tokenizer_device)
- if service.offload_encoder:
- service.tokenizer.image_tokenizer.model.visual_encoder.to(service.tokenizer_device)
-
- images_ids_1 = service.tokenizer.encode_image(image_torch=images_tensor).cpu()
- if args.offload_encoder:
- service.tokenizer.image_tokenizer.model.visual_encoder.to('cpu')
- torch.cuda.empty_cache()
- gc.collect()
- num_image_ids = images_ids_1.shape[-1]
- else:
- num_image_ids = len(images_ids_list[-1])
- images_ids_2 = torch.tensor(images_ids_list, dtype=torch.long)
-
- images_ids = torch.zeros((len(image_list), num_image_ids), dtype=torch.long)
- if len(images_tensor_indices) > 0:
- images_ids[images_tensor_indices, :] = images_ids_1
- if len(images_ids_indices) > 0:
- images_ids[images_ids_indices, :] = images_ids_2
-
- input_text = ''
- for i in range(images_ids.shape[0]):
- single_image_ids = images_ids[i].view(-1).tolist()
- image_tokens = BOI_TOKEN + ''.join([IMG_TOKEN.format(int(item)) for item in single_image_ids]) + EOI_TOKEN
- input_text += text_list[i] + image_tokens
-
- input_text = service.tokenizer.bos_token + input_text + text_list[-1]
-
- images_ids_list = images_ids.tolist()
- else:
-
- input_text = service.tokenizer.bos_token + ''.join(text_list)
- images_ids_list = []
-
- if force_boi:
- input_text += BOI_TOKEN
-
- print(input_text)
- input_ids = service.tokenizer(input_text, add_special_tokens=False, return_tensors='pt').input_ids
- input_ids = input_ids.to(service.llm_device)
- generation_config = {
- 'temperature': temperature,
- 'num_beams': num_beams,
- 'max_new_tokens': max_new_tokens,
- 'top_p': top_p,
- 'do_sample': True
- }
-
- generate_ids = service.model.generate(input_ids=input_ids, **generation_config)
-
- if force_boi:
- generate_ids = generate_ids[0][input_ids.shape[1] - 1:]
- else:
- generate_ids = generate_ids[0][input_ids.shape[1]:]
- print('generated_ids: ', generate_ids)
- boi_indices = torch.where(generate_ids == service.boi_token_id)[0].tolist()
- eoi_indices = torch.where(generate_ids == service.eoi_token_id)[0].tolist()
- # assert len(boi_indices) == len(eoi_indices)
-
- generated_image_base64_list = []
- text_mask = torch.ones_like(generate_ids, dtype=torch.bool)
-
- error_msg = []
- if len(boi_indices) != len(eoi_indices):
- error_msg.append(
-            f'Num of BOI (begin of image) tokens: {len(boi_indices)} is not equal to EOI (end of image) tokens: {len(eoi_indices)}; some images will fail to decode.'
- )
-
- num_images = min(len(boi_indices), len(eoi_indices))
- for idx in range(num_images):
- boi_index, eoi_index = boi_indices[idx], eoi_indices[idx]
- # for boi_index, eoi_index in zip(boi_indices, eoi_indices):
- image_ids = generate_ids[boi_index + 1:eoi_index].unsqueeze(0).to(service.tokenizer_device)
- image_ids = image_ids - service.image_id_shift
- if image_ids.shape[-1] != NUM_IMG_TOKNES:
- error_msg.append(f'Len(image_ids) {image_ids.shape[-1]} is not equal to {NUM_IMG_TOKNES}')
- image_base64 = ''
- elif (image_ids < 0).any() or (image_ids >= NUM_IMG_CODES).any():
- error_msg.append(f'Some image_id out of range: [0, {NUM_IMG_CODES})')
- image_base64 = ''
- else:
- if service.offload_decoder:
- service.tokenizer.image_tokenizer.diffusion_model.to(service.tokenizer_device)
- image = service.tokenizer.decode_image(image_ids)[0]
- if service.offload_decoder:
- service.tokenizer.image_tokenizer.diffusion_model.to('cpu')
- torch.cuda.empty_cache()
- gc.collect()
- image_base64 = encode_image(image)
-
- generated_image_base64_list.append(image_base64)
- text_mask[boi_index + 1:eoi_index] = False
- images_ids_list.append(image_ids.view(-1).tolist())
- generate_ids = generate_ids[text_mask]
-
- # print('generate_ids: ', generate_ids)
- # generate_text = service.tokenizer.decode(generate_ids, skip_special_tokens=True)
- generate_text = service.tokenizer.decode(generate_ids, skip_special_tokens=False)
- # print('generate_text before: ', generate_text)
- generate_text = generate_text.replace(BOI_TOKEN + ' ' + EOI_TOKEN + ' ', IMG_FLAG)
- generate_text = generate_text.replace(service.tokenizer.eos_token, '')
- print('generate_text: ', generate_text)
- return {'text': generate_text, 'images': generated_image_base64_list, 'images_ids': images_ids_list, 'error_msg': error_msg}
-
-
-if __name__ == '__main__':
- app.run(host='0.0.0.0', port=args.port)
diff --git a/spaces/AchyuthGamer/Free-Accounts-Generator/style.css b/spaces/AchyuthGamer/Free-Accounts-Generator/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/Free-Accounts-Generator/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/openpose/body.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/openpose/body.py
deleted file mode 100644
index ecfa8a0946ee9f653f7c00e928ae54b0109a9bdf..0000000000000000000000000000000000000000
--- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/openpose/body.py
+++ /dev/null
@@ -1,211 +0,0 @@
-import cv2
-import math
-import matplotlib
-import matplotlib.pyplot as plt
-import numpy as np
-import time
-import torch
-from scipy.ndimage.filters import gaussian_filter
-from torchvision import transforms
-
-from . import util
-from .model import bodypose_model
-
-
-class Body(object):
-
- def __init__(self, model_path):
- self.model = bodypose_model()
- if torch.cuda.is_available():
- self.model = self.model.cuda()
- print('cuda')
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- # scale_search = [0.5, 1.0, 1.5, 2.0]
- scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre1 = 0.1
- thre2 = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
- paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
- Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy()
- Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy()
-
- # extract outputs, resize, and remove padding
- # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps
- heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps
- heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs
- paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs
- paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
-            heatmap_avg = heatmap_avg + heatmap / len(multiplier)
-            paf_avg = paf_avg + paf / len(multiplier)
-
- all_peaks = []
- peak_counter = 0
-
- for part in range(18):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
-
- map_left = np.zeros(one_heatmap.shape)
- map_left[1:, :] = one_heatmap[:-1, :]
- map_right = np.zeros(one_heatmap.shape)
- map_right[:-1, :] = one_heatmap[1:, :]
- map_up = np.zeros(one_heatmap.shape)
- map_up[:, 1:] = one_heatmap[:, :-1]
- map_down = np.zeros(one_heatmap.shape)
- map_down[:, :-1] = one_heatmap[:, 1:]
-
- peaks_binary = np.logical_and.reduce((one_heatmap >= map_left, one_heatmap >= map_right,
- one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1))
- peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse
- peaks_with_score = [x + (map_ori[x[1], x[0]], ) for x in peaks]
- peak_id = range(peak_counter, peak_counter + len(peaks))
- peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i], ) for i in range(len(peak_id))]
-
- all_peaks.append(peaks_with_score_and_id)
- peak_counter += len(peaks)
-
- # find connection in the specified sequence, center 29 is in the position 15
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
- # the middle joints heatmap correpondence
- mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
- [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
- [55, 56], [37, 38], [45, 46]]
-
- connection_all = []
- special_k = []
- mid_num = 10
-
- for k in range(len(mapIdx)):
- score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
- candA = all_peaks[limbSeq[k][0] - 1]
- candB = all_peaks[limbSeq[k][1] - 1]
- nA = len(candA)
- nB = len(candB)
- indexA, indexB = limbSeq[k]
- if (nA != 0 and nB != 0):
- connection_candidate = []
- for i in range(nA):
- for j in range(nB):
- vec = np.subtract(candB[j][:2], candA[i][:2])
- norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
- norm = max(0.001, norm)
- vec = np.divide(vec, norm)
-
- startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \
- np.linspace(candA[i][1], candB[j][1], num=mid_num)))
-
- vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \
- for I in range(len(startend))])
- vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \
- for I in range(len(startend))])
-
- score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
- score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
- 0.5 * oriImg.shape[0] / norm - 1, 0)
- criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts)
- criterion2 = score_with_dist_prior > 0
- if criterion1 and criterion2:
- connection_candidate.append(
- [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]])
-
- connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
- connection = np.zeros((0, 5))
- for c in range(len(connection_candidate)):
- i, j, s = connection_candidate[c][0:3]
- if (i not in connection[:, 3] and j not in connection[:, 4]):
- connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
- if (len(connection) >= min(nA, nB)):
- break
-
- connection_all.append(connection)
- else:
- special_k.append(k)
- connection_all.append([])
-
- # last number in each row is the total parts number of that person
- # the second last number in each row is the score of the overall configuration
- subset = -1 * np.ones((0, 20))
- candidate = np.array([item for sublist in all_peaks for item in sublist])
-
- for k in range(len(mapIdx)):
- if k not in special_k:
- partAs = connection_all[k][:, 0]
- partBs = connection_all[k][:, 1]
- indexA, indexB = np.array(limbSeq[k]) - 1
-
- for i in range(len(connection_all[k])): # = 1:size(temp,1)
- found = 0
- subset_idx = [-1, -1]
- for j in range(len(subset)): # 1:size(subset,1):
- if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
- subset_idx[found] = j
- found += 1
-
- if found == 1:
- j = subset_idx[0]
- if subset[j][indexB] != partBs[i]:
- subset[j][indexB] = partBs[i]
- subset[j][-1] += 1
- subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
- elif found == 2: # if found 2 and disjoint, merge them
- j1, j2 = subset_idx
- membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
- if len(np.nonzero(membership == 2)[0]) == 0: # merge
- subset[j1][:-2] += (subset[j2][:-2] + 1)
- subset[j1][-2:] += subset[j2][-2:]
- subset[j1][-2] += connection_all[k][i][2]
- subset = np.delete(subset, j2, 0)
- else: # as like found == 1
- subset[j1][indexB] = partBs[i]
- subset[j1][-1] += 1
- subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
-
- # if find no partA in the subset, create a new subset
- elif not found and k < 17:
- row = -1 * np.ones(20)
- row[indexA] = partAs[i]
- row[indexB] = partBs[i]
- row[-1] = 2
- row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2]
- subset = np.vstack([subset, row])
- # delete some rows of subset which has few parts occur
- deleteIdx = []
- for i in range(len(subset)):
- if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
- deleteIdx.append(i)
- subset = np.delete(subset, deleteIdx, axis=0)
-
- # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts
- # candidate: x, y, score, id
- return candidate, subset
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/perlin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/perlin.js
deleted file mode 100644
index de0156982497fb008f82f08287c6a129724b95a3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/perlin.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import Perlin from './utils/math/noise/Perlin.js';
-export default Perlin;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.d.ts
deleted file mode 100644
index a63bf75bc550c3df5d69bfe5fdafa745b276da26..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/badgelabel/BadgeLabel.d.ts
+++ /dev/null
@@ -1,30 +0,0 @@
-// import * as Phaser from 'phaser';
-import OverlapSizer from '../overlapsizer/OverlapSizer';
-
-export default BadgeLabel;
-
-declare namespace BadgeLabel {
-
- interface IConfig extends OverlapSizer.IConfig {
- background?: Phaser.GameObjects.GameObject,
- main?: Phaser.GameObjects.GameObject,
-
- leftTop?: Phaser.GameObjects.GameObject,
- centerTop?: Phaser.GameObjects.GameObject,
- rightTop?: Phaser.GameObjects.GameObject,
- leftCenter?: Phaser.GameObjects.GameObject,
- center?: Phaser.GameObjects.GameObject,
- rightCenter?: Phaser.GameObjects.GameObject,
- leftBottom?: Phaser.GameObjects.GameObject,
- centerBottom?: Phaser.GameObjects.GameObject,
- rightBottom?: Phaser.GameObjects.GameObject,
- }
-}
-
-declare class BadgeLabel extends OverlapSizer {
-
- constructor(
- scene: Phaser.Scene,
- config?: BadgeLabel.IConfig
- );
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.d.ts
deleted file mode 100644
index c3ee6b2d926309c431258a1f3b80a93ddf882bdc..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/rotate/Rotate.d.ts
+++ /dev/null
@@ -1,2 +0,0 @@
-import { Rotate } from '../../../plugins/gestures';
-export default Rotate;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/OnDragThumb.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/OnDragThumb.js
deleted file mode 100644
index 8d509b024cf9dc3450f70a98318bc9ecb399076c..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/OnDragThumb.js
+++ /dev/null
@@ -1,22 +0,0 @@
-import PositionToPercent from './PositionToPercent.js';
-
-var OnDragThumb = function (pointer, dragX, dragY) {
- if (!this.enable) {
- return;
- }
- tmpPoint.x = dragX;
- tmpPoint.y = dragY;
-
- var startPoint, endPoint;
- if (!this.reverseAxis) {
- startPoint = this.getStartPoint();
- endPoint = this.getEndPoint();
- } else {
- startPoint = this.getEndPoint();
- endPoint = this.getStartPoint();
- }
- this.value = PositionToPercent(startPoint, endPoint, tmpPoint);
-}
-var tmpPoint = {};
-
-export default OnDragThumb;
\ No newline at end of file
diff --git a/spaces/Ali-Maq/Calorie_Calculator/app.py b/spaces/Ali-Maq/Calorie_Calculator/app.py
deleted file mode 100644
index 0469ddfdcaa0c34d355a136497367faac90037de..0000000000000000000000000000000000000000
--- a/spaces/Ali-Maq/Calorie_Calculator/app.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import numpy as np
-import gradio as gr
-import requests
-import json
-
-def list_to_dict(data):
- results = {}
-
- for i in range(len(data)):
- # Access the i-th dictionary in the list using an integer index
- d = data[i]
- # Assign the value of the 'label' key to the 'score' value in the results dictionary
- results[d['label']] = d['score']
-
- # The results dictionary will now contain the label-score pairs from the data list
- return results
-
-API_URL = "https://api-inference.huggingface.co/models/nateraw/food"
-headers = {"Authorization": "Bearer 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"}
-
-def query(filename):
- with open(filename, "rb") as f:
- data = f.read()
- response = requests.request("POST", API_URL, headers=headers, data=data)
- output = json.loads(response.content.decode("utf-8"))
- return list_to_dict(output),json.dumps(output, indent=2, sort_keys=True)
-
-def get_nutrition_info(food_name):
- #Make request to Nutritionix API
- response = requests.get(
- "https://trackapi.nutritionix.com/v2/search/instant",
- params={"query": food_name},
- headers={
- "x-app-id": "63a710ef",
- "x-app-key": "3ddc7e3feda88e1cf6dd355fb26cb261"
- }
- )
- #Parse response and return relevant information
- data = response.json()
- response = data["branded"][0]["photo"]["thumb"]
- val = {
- "food_name": data["branded"][0]["food_name"],
- "calories": data["branded"][0]["nf_calories"],
- "serving_size": data["branded"][0]["serving_qty"],
- "serving_unit": data["branded"][0]["serving_unit"],
- #"images": data["branded"][0]["photo"]
- }
- # Open the image using PIL
- output = json.dumps(val, indent=2, sort_keys=True)
- return output,response
-
-def volume_estimations(ali):
- return None
-
-with gr.Blocks() as demo:
- gr.Markdown("Food-Classification-Calorie-Estimation and Volume-Estimation")
- with gr.Tab("Food Classification"):
- text_input = gr.Image(type="filepath",interactive=True,label="Upload the food Image and Zoom in to the item you want to get the calorie for")
- text_output = [gr.Label(num_top_classes=6),
- gr.Textbox()
- ]
- text_button = gr.Button("Food Classification")
- with gr.Tab("Food Calorie Estimation"):
- image_input = gr.Textbox(label="Please enter the name of the Food you want to get calorie")
- image_output = [gr.Textbox(),
- gr.Image(type="filepath")
- ]
- image_button = gr.Button("Estimate Calories!")
- with gr.Tab("Volume Estimation"):
- _image_input = gr.Textbox(label="Please Download the Photogrammetry File trained on APPLE AR KIT and follow the instruction mention below to generate the 3D Vortex of the object")
- _image_output = gr.Image()
- gr.Markdown("-----------------------------------------------------------------------------")
-        gr.Markdown("Directory where the HelloPhotogrammetry app is saved. Example: /Users/ali/Desktop/HelloPhotogrammetry")
-        gr.Markdown("Directory where all the images are saved. Example: ~/Desktop/Burger_Data_3")
- gr.Markdown("Directory where the usdz or obj file has to be saved. Example: ~/Desktop/Burger_Data_3/Burger.usdz")
- gr.Markdown("File Quality that you want your 3D model to be. Example: --detail medium ")
- gr.Markdown("-----------------------------------------------------------------------------")
- gr.Markdown("/Users/ali/Desktop/HelloPhotogrammetry ~/Desktop/Burger_Data_3 ~/Desktop/Burger_Data_3/Burger.obj --detail medium")
- gr.Markdown("You can download the photogrammetry demo and files using this Google drive link")
- gr.Markdown("-----------------------------------------------------------------------------")
- gr.Markdown("https://drive.google.com/drive/folders/1QrL0Vhvw5GvIQ8fbHfb9EOsnOlPMmXLG?usp=share_link")
- gr.Markdown("-----------------------------------------------------------------------------")
-
-
-
- _image_button = gr.Button("Volume Calculation")
- with gr.Tab("Future Works"):
- gr.Markdown("Future work on Food Classification")
- gr.Markdown(
-            "Currently the model is trained on the Food-101 dataset, which has 101 classes. In a future iteration of the project we would like to train the model on the UNIMIB dataset with 256 food classes")
- gr.Markdown("Future work on Volume Estimation")
- gr.Markdown(
-            "The volume model has been trained on the Apple AR Toolkit and thus can be executed only on Apple devices, i.e. the iOS platform. In future we would like to train the volume model such that it is platform independent")
- gr.Markdown("Future work on Calorie Estimation")
- gr.Markdown(
-            "The calorie estimation currently relies on the Nutritionix API. In a future iteration we would like to build our own custom database of major food products across New York restaurants")
- gr.Markdown("https://github.com/Ali-Maq/Food-Classification-Volume-Estimation-and-Calorie-Estimation/blob/main/README.md")
-
- text_button.click(query, inputs=text_input, outputs=text_output,scroll_to_output=True,show_progress=True)
- image_button.click(get_nutrition_info, inputs=image_input, outputs=image_output,scroll_to_output=True,show_progress=True)
- #_image_button.click(get_nutrition_info, inputs=_image_input, outputs=_image_output)
- with gr.Accordion("Open for More!"):
- gr.Markdown("🍎 Designed and built by Ali Under the Guidance of Professor Dennis Shasha")
- gr.Markdown("Contact me at ali.quidwai@nyu.edu 😊")
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/training_stats.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/training_stats.py
deleted file mode 100644
index 7017b775bd47056e7daf2b098a1dd29372342f0a..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/training_stats.py
+++ /dev/null
@@ -1,285 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Facilities for reporting and collecting training statistics across
-multiple processes and devices. The interface is designed to minimize
-synchronization overhead as well as the amount of boilerplate in user
-code."""
-
-import re
-import numpy as np
-import torch
-import dnnlib
-
-from . import misc
-
-# ----------------------------------------------------------------------------
-
-_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares]
-# Data type to use for initial per-tensor reduction.
-_reduce_dtype = torch.float32
-_counter_dtype = torch.float64 # Data type to use for the internal counters.
-_rank = 0 # Rank of the current process.
-# Device to use for multiprocess communication. None = single-process.
-_sync_device = None
-_sync_called = False # Has _sync() been called yet?
-# Running counters on each device, updated by report(): name => device => torch.Tensor
-_counters = dict()
-# Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor
-_cumulative = dict()
-
-# ----------------------------------------------------------------------------
-
-
-def init_multiprocessing(rank, sync_device):
- r"""Initializes `torch_utils.training_stats` for collecting statistics
- across multiple processes.
-
- This function must be called after
- `torch.distributed.init_process_group()` and before `Collector.update()`.
- The call is not necessary if multi-process collection is not needed.
-
- Args:
- rank: Rank of the current process.
- sync_device: PyTorch device to use for inter-process
- communication, or None to disable multi-process
- collection. Typically `torch.device('cuda', rank)`.
- """
- global _rank, _sync_device
- assert not _sync_called
- _rank = rank
- _sync_device = sync_device
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def report(name, value):
- r"""Broadcasts the given set of scalars to all interested instances of
- `Collector`, across device and process boundaries.
-
- This function is expected to be extremely cheap and can be safely
- called from anywhere in the training loop, loss function, or inside a
- `torch.nn.Module`.
-
- Warning: The current implementation expects the set of unique names to
- be consistent across processes. Please make sure that `report()` is
- called at least once for each unique name by each process, and in the
- same order. If a given process has no scalars to broadcast, it can do
- `report(name, [])` (empty list).
-
- Args:
- name: Arbitrary string specifying the name of the statistic.
- Averages are accumulated separately for each unique name.
- value: Arbitrary set of scalars. Can be a list, tuple,
- NumPy array, PyTorch tensor, or Python scalar.
-
- Returns:
- The same `value` that was passed in.
- """
- if name not in _counters:
- _counters[name] = dict()
-
- elems = torch.as_tensor(value)
- if elems.numel() == 0:
- return value
-
- elems = elems.detach().flatten().to(_reduce_dtype)
- moments = torch.stack([
- torch.ones_like(elems).sum(),
- elems.sum(),
- elems.square().sum(),
- ])
- assert moments.ndim == 1 and moments.shape[0] == _num_moments
- moments = moments.to(_counter_dtype)
-
- device = moments.device
- if device not in _counters[name]:
- _counters[name][device] = torch.zeros_like(moments)
- _counters[name][device].add_(moments)
- return value
-
-# ----------------------------------------------------------------------------
-
-
-def report0(name, value):
- r"""Broadcasts the given set of scalars by the first process (`rank = 0`),
- but ignores any scalars provided by the other processes.
- See `report()` for further details.
- """
- report(name, value if _rank == 0 else [])
- return value
-
-# ----------------------------------------------------------------------------
-
-
-class Collector:
- r"""Collects the scalars broadcasted by `report()` and `report0()` and
- computes their long-term averages (mean and standard deviation) over
- user-defined periods of time.
-
- The averages are first collected into internal counters that are not
- directly visible to the user. They are then copied to the user-visible
- state as a result of calling `update()` and can then be queried using
- `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the
- internal counters for the next round, so that the user-visible state
- effectively reflects averages collected between the last two calls to
- `update()`.
-
- Args:
- regex: Regular expression defining which statistics to
- collect. The default is to collect everything.
- keep_previous: Whether to retain the previous averages if no
- scalars were collected on a given round
- (default: True).
- """
-
- def __init__(self, regex='.*', keep_previous=True):
- self._regex = re.compile(regex)
- self._keep_previous = keep_previous
- self._cumulative = dict()
- self._moments = dict()
- self.update()
- self._moments.clear()
-
- def names(self):
- r"""Returns the names of all statistics broadcasted so far that
- match the regular expression specified at construction time.
- """
- return [name for name in _counters if self._regex.fullmatch(name)]
-
- def update(self):
- r"""Copies current values of the internal counters to the
- user-visible state and resets them for the next round.
-
- If `keep_previous=True` was specified at construction time, the
- operation is skipped for statistics that have received no scalars
- since the last update, retaining their previous averages.
-
- This method performs a number of GPU-to-CPU transfers and one
- `torch.distributed.all_reduce()`. It is intended to be called
- periodically in the main training loop, typically once every
- N training steps.
- """
- if not self._keep_previous:
- self._moments.clear()
- for name, cumulative in _sync(self.names()):
- if name not in self._cumulative:
- self._cumulative[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- delta = cumulative - self._cumulative[name]
- self._cumulative[name].copy_(cumulative)
- if float(delta[0]) != 0:
- self._moments[name] = delta
-
- def _get_delta(self, name):
- r"""Returns the raw moments that were accumulated for the given
- statistic between the last two calls to `update()`, or zero if
- no scalars were collected.
- """
- assert self._regex.fullmatch(name)
- if name not in self._moments:
- self._moments[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- return self._moments[name]
-
- def num(self, name):
- r"""Returns the number of scalars that were accumulated for the given
- statistic between the last two calls to `update()`, or zero if
- no scalars were collected.
- """
- delta = self._get_delta(name)
- return int(delta[0])
-
- def mean(self, name):
- r"""Returns the mean of the scalars that were accumulated for the
- given statistic between the last two calls to `update()`, or NaN if
- no scalars were collected.
- """
- delta = self._get_delta(name)
- if int(delta[0]) == 0:
- return float('nan')
- return float(delta[1] / delta[0])
-
- def std(self, name):
- r"""Returns the standard deviation of the scalars that were
- accumulated for the given statistic between the last two calls to
- `update()`, or NaN if no scalars were collected.
- """
- delta = self._get_delta(name)
- if int(delta[0]) == 0 or not np.isfinite(float(delta[1])):
- return float('nan')
- if int(delta[0]) == 1:
- return float(0)
- mean = float(delta[1] / delta[0])
- raw_var = float(delta[2] / delta[0])
- return np.sqrt(max(raw_var - np.square(mean), 0))
-
- def as_dict(self):
- r"""Returns the averages accumulated between the last two calls to
- `update()` as an `dnnlib.EasyDict`. The contents are as follows:
-
- dnnlib.EasyDict(
- NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT),
- ...
- )
- """
- stats = dnnlib.EasyDict()
- for name in self.names():
- stats[name] = dnnlib.EasyDict(num=self.num(
- name), mean=self.mean(name), std=self.std(name))
- return stats
-
- def __getitem__(self, name):
- r"""Convenience getter.
- `collector[name]` is a synonym for `collector.mean(name)`.
- """
- return self.mean(name)
-
-# ----------------------------------------------------------------------------
-
-
-def _sync(names):
- r"""Synchronize the global cumulative counters across devices and
- processes. Called internally by `Collector.update()`.
- """
- if len(names) == 0:
- return []
- global _sync_called
- _sync_called = True
-
- # Collect deltas within current rank.
- deltas = []
- device = _sync_device if _sync_device is not None else torch.device('cpu')
- for name in names:
- delta = torch.zeros(
- [_num_moments], dtype=_counter_dtype, device=device)
- for counter in _counters[name].values():
- delta.add_(counter.to(device))
- counter.copy_(torch.zeros_like(counter))
- deltas.append(delta)
- deltas = torch.stack(deltas)
-
- # Sum deltas across ranks.
- if _sync_device is not None:
- torch.distributed.all_reduce(deltas)
-
- # Update cumulative values.
- deltas = deltas.cpu()
- for idx, name in enumerate(names):
- if name not in _cumulative:
- _cumulative[name] = torch.zeros(
- [_num_moments], dtype=_counter_dtype)
- _cumulative[name].add_(deltas[idx])
-
- # Return name-value pairs.
- return [(name, _cumulative[name]) for name in names]
-
-# ----------------------------------------------------------------------------
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/device.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/device.py
deleted file mode 100644
index 7ddbfb8150e4e190dc92927bb67fbf28f083796e..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/device.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from typing import Any
-import torch
-
-
-def detach(obj: Any):
- """Credit: https://discuss.pytorch.org/t/pytorch-tensor-to-device-for-a-list-of-dict/66283
- Arguments:
- obj {dict, list} -- Object to be moved to cpu
- Raises:
- TypeError: Invalid type for detach
- Returns:
- type(obj) -- same object but moved to cpu
- """
- if torch.is_tensor(obj):
- return obj.detach()
- elif isinstance(obj, dict):
- res = {k: detach(v) for k, v in obj.items()}
- return res
- elif isinstance(obj, list):
- return [detach(v) for v in obj]
- elif isinstance(obj, tuple):
- return tuple(detach(list(obj)))
- else:
- raise TypeError("Invalid type for detach")
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_stable_diffusion.py
deleted file mode 100644
index 3f4ab2ab9f4ad2417d6dbf40e1fd2e479df88b73..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/clip_guided_stable_diffusion.py
+++ /dev/null
@@ -1,347 +0,0 @@
-import inspect
-from typing import List, Optional, Union
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torchvision import transforms
-from transformers import CLIPImageProcessor, CLIPModel, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DiffusionPipeline,
- DPMSolverMultistepScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
- UNet2DConditionModel,
-)
-from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-
-
-class MakeCutouts(nn.Module):
- def __init__(self, cut_size, cut_power=1.0):
- super().__init__()
-
- self.cut_size = cut_size
- self.cut_power = cut_power
-
- def forward(self, pixel_values, num_cutouts):
- sideY, sideX = pixel_values.shape[2:4]
- max_size = min(sideX, sideY)
- min_size = min(sideX, sideY, self.cut_size)
- cutouts = []
- for _ in range(num_cutouts):
- size = int(torch.rand([]) ** self.cut_power * (max_size - min_size) + min_size)
- offsetx = torch.randint(0, sideX - size + 1, ())
- offsety = torch.randint(0, sideY - size + 1, ())
- cutout = pixel_values[:, :, offsety : offsety + size, offsetx : offsetx + size]
- cutouts.append(F.adaptive_avg_pool2d(cutout, self.cut_size))
- return torch.cat(cutouts)
-
-
-def spherical_dist_loss(x, y):
- x = F.normalize(x, dim=-1)
- y = F.normalize(y, dim=-1)
- return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
-
-
-def set_requires_grad(model, value):
- for param in model.parameters():
- param.requires_grad = value
-
-
-class CLIPGuidedStableDiffusion(DiffusionPipeline):
- """CLIP guided stable diffusion based on the amazing repo by @crowsonkb and @Jack000
- - https://github.com/Jack000/glid-3-xl
- - https://github.dev/crowsonkb/k-diffusion
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- clip_model: CLIPModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: Union[PNDMScheduler, LMSDiscreteScheduler, DDIMScheduler, DPMSolverMultistepScheduler],
- feature_extractor: CLIPImageProcessor,
- ):
- super().__init__()
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- clip_model=clip_model,
- tokenizer=tokenizer,
- unet=unet,
- scheduler=scheduler,
- feature_extractor=feature_extractor,
- )
-
- self.normalize = transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std)
- self.cut_out_size = (
- feature_extractor.size
- if isinstance(feature_extractor.size, int)
- else feature_extractor.size["shortest_edge"]
- )
- self.make_cutouts = MakeCutouts(self.cut_out_size)
-
- set_requires_grad(self.text_encoder, False)
- set_requires_grad(self.clip_model, False)
-
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
- if slice_size == "auto":
- # half the attention head size is usually a good trade-off between
- # speed and memory
- slice_size = self.unet.config.attention_head_dim // 2
- self.unet.set_attention_slice(slice_size)
-
- def disable_attention_slicing(self):
- self.enable_attention_slicing(None)
-
- def freeze_vae(self):
- set_requires_grad(self.vae, False)
-
- def unfreeze_vae(self):
- set_requires_grad(self.vae, True)
-
- def freeze_unet(self):
- set_requires_grad(self.unet, False)
-
- def unfreeze_unet(self):
- set_requires_grad(self.unet, True)
-
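-    # cond_fn performs the CLIP-guidance step: it decodes the current denoised
-    # estimate with the VAE, embeds it with CLIP, measures the spherical distance
-    # to the text embedding, and uses the gradient w.r.t. the latents to correct
-    # the predicted noise (or the latents when using LMSDiscreteScheduler).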
- @torch.enable_grad()
- def cond_fn(
- self,
- latents,
- timestep,
- index,
- text_embeddings,
- noise_pred_original,
- text_embeddings_clip,
- clip_guidance_scale,
- num_cutouts,
- use_cutouts=True,
- ):
- latents = latents.detach().requires_grad_()
-
- latent_model_input = self.scheduler.scale_model_input(latents, timestep)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, timestep, encoder_hidden_states=text_embeddings).sample
-
- if isinstance(self.scheduler, (PNDMScheduler, DDIMScheduler, DPMSolverMultistepScheduler)):
- alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
- beta_prod_t = 1 - alpha_prod_t
-            # compute the predicted original sample from the predicted noise, also called
-            # "predicted x_0" in formula (12) of https://arxiv.org/pdf/2010.02502.pdf
- pred_original_sample = (latents - beta_prod_t ** (0.5) * noise_pred) / alpha_prod_t ** (0.5)
-
- fac = torch.sqrt(beta_prod_t)
- sample = pred_original_sample * (fac) + latents * (1 - fac)
- elif isinstance(self.scheduler, LMSDiscreteScheduler):
- sigma = self.scheduler.sigmas[index]
- sample = latents - sigma * noise_pred
- else:
- raise ValueError(f"scheduler type {type(self.scheduler)} not supported")
-
- sample = 1 / self.vae.config.scaling_factor * sample
- image = self.vae.decode(sample).sample
- image = (image / 2 + 0.5).clamp(0, 1)
-
- if use_cutouts:
- image = self.make_cutouts(image, num_cutouts)
- else:
- image = transforms.Resize(self.cut_out_size)(image)
- image = self.normalize(image).to(latents.dtype)
-
- image_embeddings_clip = self.clip_model.get_image_features(image)
- image_embeddings_clip = image_embeddings_clip / image_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
-
- if use_cutouts:
- dists = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip)
- dists = dists.view([num_cutouts, sample.shape[0], -1])
- loss = dists.sum(2).mean(0).sum() * clip_guidance_scale
- else:
- loss = spherical_dist_loss(image_embeddings_clip, text_embeddings_clip).mean() * clip_guidance_scale
-
- grads = -torch.autograd.grad(loss, latents)[0]
-
- if isinstance(self.scheduler, LMSDiscreteScheduler):
- latents = latents.detach() + grads * (sigma**2)
- noise_pred = noise_pred_original
- else:
- noise_pred = noise_pred_original - torch.sqrt(beta_prod_t) * grads
- return noise_pred, latents
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]],
- height: Optional[int] = 512,
- width: Optional[int] = 512,
- num_inference_steps: Optional[int] = 50,
- guidance_scale: Optional[float] = 7.5,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- clip_guidance_scale: Optional[float] = 100,
- clip_prompt: Optional[Union[str, List[str]]] = None,
- num_cutouts: Optional[int] = 4,
- use_cutouts: Optional[bool] = True,
- generator: Optional[torch.Generator] = None,
- latents: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ):
- if isinstance(prompt, str):
- batch_size = 1
- elif isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- # get prompt text embeddings
- text_input = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
- # duplicate text embeddings for each generation per prompt
- text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
-
- if clip_guidance_scale > 0:
- if clip_prompt is not None:
- clip_text_input = self.tokenizer(
- clip_prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- ).input_ids.to(self.device)
- else:
- clip_text_input = text_input.input_ids.to(self.device)
- text_embeddings_clip = self.clip_model.get_text_features(clip_text_input)
- text_embeddings_clip = text_embeddings_clip / text_embeddings_clip.norm(p=2, dim=-1, keepdim=True)
- # duplicate text embeddings clip for each generation per prompt
- text_embeddings_clip = text_embeddings_clip.repeat_interleave(num_images_per_prompt, dim=0)
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
- # get unconditional embeddings for classifier free guidance
- if do_classifier_free_guidance:
- max_length = text_input.input_ids.shape[-1]
- uncond_input = self.tokenizer([""], padding="max_length", max_length=max_length, return_tensors="pt")
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
- # duplicate unconditional embeddings for each generation per prompt
- uncond_embeddings = uncond_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
- # get the initial random noise unless the user supplied it
-
- # Unlike in other pipelines, latents need to be generated in the target device
- # for 1-to-1 results reproducibility with the CompVis implementation.
- # However this currently doesn't work in `mps`.
- latents_shape = (batch_size * num_images_per_prompt, self.unet.config.in_channels, height // 8, width // 8)
- latents_dtype = text_embeddings.dtype
- if latents is None:
- if self.device.type == "mps":
- # randn does not work reproducibly on mps
- latents = torch.randn(latents_shape, generator=generator, device="cpu", dtype=latents_dtype).to(
- self.device
- )
- else:
- latents = torch.randn(latents_shape, generator=generator, device=self.device, dtype=latents_dtype)
- else:
- if latents.shape != latents_shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
- latents = latents.to(self.device)
-
- # set timesteps
- accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
- extra_set_kwargs = {}
- if accepts_offset:
- extra_set_kwargs["offset"] = 1
-
- self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
-
- # Some schedulers like PNDM have timesteps as arrays
-        # It's more optimized to move all timesteps to the correct device beforehand
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-        # eta (η) is only used with the DDIMScheduler; it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
-
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-
- # perform classifier free guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- # perform clip guidance
- if clip_guidance_scale > 0:
- text_embeddings_for_guidance = (
- text_embeddings.chunk(2)[1] if do_classifier_free_guidance else text_embeddings
- )
- noise_pred, latents = self.cond_fn(
- latents,
- t,
- i,
- text_embeddings_for_guidance,
- noise_pred,
- text_embeddings_clip,
- clip_guidance_scale,
- num_cutouts,
- use_cutouts,
- )
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
- # scale and decode the image latents with vae
- latents = 1 / self.vae.config.scaling_factor * latents
- image = self.vae.decode(latents).sample
-
- image = (image / 2 + 0.5).clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image, None)
-
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=None)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/README.md
deleted file mode 100644
index 66df9a811afbf70a5e943ed1a1e3e7c6955e6c25..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/README.md
+++ /dev/null
@@ -1,176 +0,0 @@
-# Stable Diffusion
-
-## Overview
-
-Stable Diffusion was proposed in [Stable Diffusion Announcement](https://stability.ai/blog/stable-diffusion-announcement) by Patrick Esser and Robin Rombach and the Stability AI team.
-
-The summary of the model is the following:
-
-*Stable Diffusion is a text-to-image model that will empower billions of people to create stunning art within seconds. It is a breakthrough in speed and quality meaning that it can run on consumer GPUs. You can see some of the amazing output that has been created by this model without pre or post-processing on this page. The model itself builds upon the work of the team at CompVis and Runway in their widely used latent diffusion model combined with insights from the conditional diffusion models by our lead generative AI developer Katherine Crowson, Dall-E 2 by Open AI, Imagen by Google Brain and many others. We are delighted that AI media generation is a cooperative field and hope it can continue this way to bring the gift of creativity to all.*
-
-## Tips:
-
-- Stable Diffusion has the same architecture as [Latent Diffusion](https://arxiv.org/abs/2112.10752) but uses a frozen CLIP Text Encoder instead of training the text encoder jointly with the diffusion model.
-- An in-detail explanation of the Stable Diffusion model can be found under [Stable Diffusion with 🧨 Diffusers](https://huggingface.co/blog/stable_diffusion).
-- If you don't want to rely on the Hugging Face Hub and pass an authentication token, you can
-download the weights with `git lfs install; git clone https://huggingface.co/runwayml/stable-diffusion-v1-5` and instead pass the local path to the cloned folder to `from_pretrained` as shown below.
-- Stable Diffusion can work with a variety of different samplers as is shown below.
-
-## Available Pipelines:
-
-| Pipeline | Tasks | Colab
-|---|---|:---:|
-| [pipeline_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py) | *Text-to-Image Generation* | [Open In Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
-| [pipeline_stable_diffusion_img2img](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | [Open In Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
-| [pipeline_stable_diffusion_inpaint](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | *Text-Guided Image Inpainting* | [Open In Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
-
-## Examples:
-
-### Using Stable Diffusion without being logged into the Hub.
-
-If you want to download the model weights using a single Python line, you need to be logged in via `huggingface-cli login`.
-
-```python
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-```
-
-This, however, can make it difficult to build applications on top of `diffusers` as you will always have to pass the token around. A potential way to solve this issue is to download the weights to a local path `"./stable-diffusion-v1-5"`:
-
-```bash
-git lfs install
-git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
-```
-
-and simply passing the local path to `from_pretrained`:
-
-```python
-from diffusers import StableDiffusionPipeline
-
-pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
-```
-
-### Text-to-Image with default PLMS scheduler
-
-```python
-# make sure you're logged in with `huggingface-cli login`
-from diffusers import StableDiffusionPipeline
-
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-pipe = pipe.to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt).images[0]
-
-image.save("astronaut_rides_horse.png")
-```
-
-### Text-to-Image with DDIM scheduler
-
-```python
-# make sure you're logged in with `huggingface-cli login`
-from diffusers import StableDiffusionPipeline, DDIMScheduler
-
-scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- scheduler=scheduler,
-).to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt).images[0]
-
-image.save("astronaut_rides_horse.png")
-```
-
-### Text-to-Image with K-LMS scheduler
-
-```python
-# make sure you're logged in with `huggingface-cli login`
-from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
-
-lms = LMSDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
-
-pipe = StableDiffusionPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- scheduler=lms,
-).to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt).images[0]
-
-image.save("astronaut_rides_horse.png")
-```
-
-### CycleDiffusion using Stable Diffusion and DDIM scheduler
-
-```python
-import requests
-import torch
-from PIL import Image
-from io import BytesIO
-
-from diffusers import CycleDiffusionPipeline, DDIMScheduler
-
-
-# load the scheduler and the pipeline (CycleDiffusion only supports stochastic schedulers)
-# make sure you're logged in with `huggingface-cli login`
-model_id_or_path = "CompVis/stable-diffusion-v1-4"
-scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
-pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")
-
-# let's download an initial image
-url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
-response = requests.get(url)
-init_image = Image.open(BytesIO(response.content)).convert("RGB")
-init_image = init_image.resize((512, 512))
-init_image.save("horse.png")
-
-# let's specify a prompt
-source_prompt = "An astronaut riding a horse"
-prompt = "An astronaut riding an elephant"
-
-# call the pipeline
-image = pipe(
- prompt=prompt,
- source_prompt=source_prompt,
- image=init_image,
- num_inference_steps=100,
- eta=0.1,
- strength=0.8,
- guidance_scale=2,
- source_guidance_scale=1,
-).images[0]
-
-image.save("horse_to_elephant.png")
-
-# let's try another example
-# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
-url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
-response = requests.get(url)
-init_image = Image.open(BytesIO(response.content)).convert("RGB")
-init_image = init_image.resize((512, 512))
-init_image.save("black.png")
-
-source_prompt = "A black colored car"
-prompt = "A blue colored car"
-
-# call the pipeline
-torch.manual_seed(0)
-image = pipe(
- prompt=prompt,
- source_prompt=source_prompt,
- image=init_image,
- num_inference_steps=100,
- eta=0.1,
- strength=0.85,
- guidance_scale=3,
- source_guidance_scale=1,
-).images[0]
-
-image.save("black_to_blue.png")
-```
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py
deleted file mode 100644
index a10462a345c1270f1867ac2f44454c82f08105fb..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion.py
+++ /dev/null
@@ -1,1182 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import gc
-import tempfile
-import time
-import traceback
-import unittest
-
-import numpy as np
-import torch
-from huggingface_hub import hf_hub_download
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import (
- AutoencoderKL,
- DDIMScheduler,
- DPMSolverMultistepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- LMSDiscreteScheduler,
- PNDMScheduler,
- StableDiffusionPipeline,
- UNet2DConditionModel,
- logging,
-)
-from diffusers.models.attention_processor import AttnProcessor, LoRAXFormersAttnProcessor
-from diffusers.utils import load_numpy, nightly, slow, torch_device
-from diffusers.utils.testing_utils import (
- CaptureLogger,
- enable_full_determinism,
- require_torch_2,
- require_torch_gpu,
- run_test_in_subprocess,
-)
-
-from ...models.test_lora_layers import create_unet_lora_layers
-from ...models.test_models_unet_2d_condition import create_lora_layers
-from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
-from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
-
-
-enable_full_determinism()
-
-
-# Will be run via run_test_in_subprocess
-def _test_stable_diffusion_compile(in_queue, out_queue, timeout):
- error = None
- try:
- inputs = in_queue.get(timeout=timeout)
- torch_device = inputs.pop("torch_device")
- seed = inputs.pop("seed")
- inputs["generator"] = torch.Generator(device=torch_device).manual_seed(seed)
-
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None)
- sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(torch_device)
-
- sd_pipe.unet.to(memory_format=torch.channels_last)
- sd_pipe.unet = torch.compile(sd_pipe.unet, mode="reduce-overhead", fullgraph=True)
-
- sd_pipe.set_progress_bar_config(disable=None)
-
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.38019, 0.28647, 0.27321, 0.40377, 0.38290, 0.35446, 0.39218, 0.38165, 0.42239])
- assert np.abs(image_slice - expected_slice).max() < 5e-3
- except Exception:
- error = f"{traceback.format_exc()}"
-
- results = {"error": error}
- out_queue.put(results, timeout=timeout)
- out_queue.join()
-
-
-class StableDiffusionPipelineFastTests(
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
-):
- pipeline_class = StableDiffusionPipeline
- params = TEXT_TO_IMAGE_PARAMS
- batch_params = TEXT_TO_IMAGE_BATCH_PARAMS
- image_params = TEXT_TO_IMAGE_IMAGE_PARAMS
- image_latents_params = TEXT_TO_IMAGE_IMAGE_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- unet = UNet2DConditionModel(
- block_out_channels=(32, 64),
- layers_per_block=2,
- sample_size=32,
- in_channels=4,
- out_channels=4,
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
- cross_attention_dim=32,
- )
- scheduler = DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- torch.manual_seed(0)
- vae = AutoencoderKL(
- block_out_channels=[32, 64],
- in_channels=3,
- out_channels=3,
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
- latent_channels=4,
- )
- torch.manual_seed(0)
- text_encoder_config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=32,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- text_encoder = CLIPTextModel(text_encoder_config)
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
- components = {
- "unet": unet,
- "scheduler": scheduler,
- "vae": vae,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "safety_checker": None,
- "feature_extractor": None,
- }
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "generator": generator,
- "num_inference_steps": 2,
- "guidance_scale": 6.0,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_ddim(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
-
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5756, 0.6118, 0.5005, 0.5041, 0.5471, 0.4726, 0.4976, 0.4865, 0.4864])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_lora(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- # forward 1
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- # set lora layers
- lora_attn_procs = create_lora_layers(sd_pipe.unet)
- sd_pipe.unet.set_attn_processor(lora_attn_procs)
- sd_pipe = sd_pipe.to(torch_device)
-
- # forward 2
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs, cross_attention_kwargs={"scale": 0.0})
- image = output.images
- image_slice_1 = image[0, -3:, -3:, -1]
-
- # forward 3
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs, cross_attention_kwargs={"scale": 0.5})
- image = output.images
- image_slice_2 = image[0, -3:, -3:, -1]
-
- assert np.abs(image_slice - image_slice_1).max() < 1e-2
- assert np.abs(image_slice - image_slice_2).max() > 1e-2
-
- def test_stable_diffusion_prompt_embeds(self):
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
-        sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["prompt"] = 3 * [inputs["prompt"]]
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_1 = output.images[0, -3:, -3:, -1]
-
- inputs = self.get_dummy_inputs(torch_device)
- prompt = 3 * [inputs.pop("prompt")]
-
- text_inputs = sd_pipe.tokenizer(
- prompt,
- padding="max_length",
- max_length=sd_pipe.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_inputs = text_inputs["input_ids"].to(torch_device)
-
- prompt_embeds = sd_pipe.text_encoder(text_inputs)[0]
-
- inputs["prompt_embeds"] = prompt_embeds
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_2 = output.images[0, -3:, -3:, -1]
-
- assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4
-
- def test_stable_diffusion_negative_prompt_embeds(self):
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
-        sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(torch_device)
- negative_prompt = 3 * ["this is a negative prompt"]
- inputs["negative_prompt"] = negative_prompt
- inputs["prompt"] = 3 * [inputs["prompt"]]
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_1 = output.images[0, -3:, -3:, -1]
-
- inputs = self.get_dummy_inputs(torch_device)
- prompt = 3 * [inputs.pop("prompt")]
-
- embeds = []
- for p in [prompt, negative_prompt]:
- text_inputs = sd_pipe.tokenizer(
- p,
- padding="max_length",
- max_length=sd_pipe.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_inputs = text_inputs["input_ids"].to(torch_device)
-
- embeds.append(sd_pipe.text_encoder(text_inputs)[0])
-
- inputs["prompt_embeds"], inputs["negative_prompt_embeds"] = embeds
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_2 = output.images[0, -3:, -3:, -1]
-
- assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4
-
- def test_stable_diffusion_prompt_embeds_with_plain_negative_prompt_list(self):
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
-        sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(torch_device)
- negative_prompt = 3 * ["this is a negative prompt"]
- inputs["negative_prompt"] = negative_prompt
- inputs["prompt"] = 3 * [inputs["prompt"]]
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_1 = output.images[0, -3:, -3:, -1]
-
- inputs = self.get_dummy_inputs(torch_device)
- inputs["negative_prompt"] = negative_prompt
- prompt = 3 * [inputs.pop("prompt")]
-
- text_inputs = sd_pipe.tokenizer(
- prompt,
- padding="max_length",
- max_length=sd_pipe.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_inputs = text_inputs["input_ids"].to(torch_device)
-
- prompt_embeds = sd_pipe.text_encoder(text_inputs)[0]
-
- inputs["prompt_embeds"] = prompt_embeds
-
- # forward
- output = sd_pipe(**inputs)
- image_slice_2 = output.images[0, -3:, -3:, -1]
-
- assert np.abs(image_slice_1.flatten() - image_slice_2.flatten()).max() < 1e-4
-
- def test_stable_diffusion_ddim_factor_8(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs, height=136, width=136)
- image = output.images
-
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 136, 136, 3)
- expected_slice = np.array([0.5524, 0.5626, 0.6069, 0.4727, 0.386, 0.3995, 0.4613, 0.4328, 0.4269])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_pndm(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe.scheduler = PNDMScheduler(skip_prk_steps=True)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5122, 0.5712, 0.4825, 0.5053, 0.5646, 0.4769, 0.5179, 0.4894, 0.4994])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- @unittest.skipIf(not torch.cuda.is_available(), reason="xformers requires cuda")
- def test_stable_diffusion_attn_processors(self):
- # disable_full_determinism()
- device = "cuda" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
-
- # run normal sd pipe
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # run xformers attention
- sd_pipe.enable_xformers_memory_efficient_attention()
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # run attention slicing
- sd_pipe.enable_attention_slicing()
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # run vae attention slicing
- sd_pipe.enable_vae_slicing()
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # run lora attention
- attn_processors, _ = create_unet_lora_layers(sd_pipe.unet)
- attn_processors = {k: v.to("cuda") for k, v in attn_processors.items()}
- sd_pipe.unet.set_attn_processor(attn_processors)
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # run lora xformers attention
- attn_processors, _ = create_unet_lora_layers(sd_pipe.unet)
- attn_processors = {
- k: LoRAXFormersAttnProcessor(hidden_size=v.hidden_size, cross_attention_dim=v.cross_attention_dim)
- for k, v in attn_processors.items()
- }
- attn_processors = {k: v.to("cuda") for k, v in attn_processors.items()}
- sd_pipe.unet.set_attn_processor(attn_processors)
- image = sd_pipe(**inputs).images
- assert image.shape == (1, 64, 64, 3)
-
- # enable_full_determinism()
-
- def test_stable_diffusion_no_safety_checker(self):
- pipe = StableDiffusionPipeline.from_pretrained(
- "hf-internal-testing/tiny-stable-diffusion-lms-pipe", safety_checker=None
- )
- assert isinstance(pipe, StableDiffusionPipeline)
- assert isinstance(pipe.scheduler, LMSDiscreteScheduler)
- assert pipe.safety_checker is None
-
- image = pipe("example prompt", num_inference_steps=2).images[0]
- assert image is not None
-
- # check that there's no error when saving a pipeline with one of the models being None
- with tempfile.TemporaryDirectory() as tmpdirname:
- pipe.save_pretrained(tmpdirname)
- pipe = StableDiffusionPipeline.from_pretrained(tmpdirname)
-
- # sanity check that the pipeline still works
- assert pipe.safety_checker is None
- image = pipe("example prompt", num_inference_steps=2).images[0]
- assert image is not None
-
- def test_stable_diffusion_k_lms(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4873, 0.5443, 0.4845, 0.5004, 0.5549, 0.4850, 0.5191, 0.4941, 0.5065])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_k_euler_ancestral(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4872, 0.5444, 0.4846, 0.5003, 0.5549, 0.4850, 0.5189, 0.4941, 0.5067])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_k_euler(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
-
- components = self.get_dummy_components()
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- output = sd_pipe(**inputs)
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.4873, 0.5443, 0.4845, 0.5004, 0.5549, 0.4850, 0.5191, 0.4941, 0.5065])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_vae_slicing(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- image_count = 4
-
- inputs = self.get_dummy_inputs(device)
- inputs["prompt"] = [inputs["prompt"]] * image_count
- output_1 = sd_pipe(**inputs)
-
- # make sure sliced vae decode yields the same result
- sd_pipe.enable_vae_slicing()
- inputs = self.get_dummy_inputs(device)
- inputs["prompt"] = [inputs["prompt"]] * image_count
- output_2 = sd_pipe(**inputs)
-
- # there is a small discrepancy at image borders vs. full batch decode
- assert np.abs(output_2.images.flatten() - output_1.images.flatten()).max() < 3e-3
-
- def test_stable_diffusion_vae_tiling(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
-
- # make sure here that pndm scheduler skips prk
- components["safety_checker"] = None
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "A painting of a squirrel eating a burger"
-
- # Test that tiled decode at 512x512 yields the same result as the non-tiled decode
- generator = torch.Generator(device=device).manual_seed(0)
- output_1 = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
-
- # make sure tiled vae decode yields the same result
- sd_pipe.enable_vae_tiling()
- generator = torch.Generator(device=device).manual_seed(0)
- output_2 = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
-
- assert np.abs(output_2.images.flatten() - output_1.images.flatten()).max() < 5e-1
-
- # test that tiled decode works with various shapes
- shapes = [(1, 4, 73, 97), (1, 4, 97, 73), (1, 4, 49, 65), (1, 4, 65, 49)]
- for shape in shapes:
- zeros = torch.zeros(shape).to(device)
- sd_pipe.vae.decode(zeros)
-
- def test_stable_diffusion_negative_prompt(self):
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
- components = self.get_dummy_components()
- components["scheduler"] = PNDMScheduler(skip_prk_steps=True)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- negative_prompt = "french fries"
- output = sd_pipe(**inputs, negative_prompt=negative_prompt)
-
- image = output.images
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (1, 64, 64, 3)
- expected_slice = np.array([0.5114, 0.5706, 0.4772, 0.5028, 0.5637, 0.4732, 0.5169, 0.4881, 0.4977])
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_stable_diffusion_long_prompt(self):
- components = self.get_dummy_components()
- components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- do_classifier_free_guidance = True
- negative_prompt = None
- num_images_per_prompt = 1
- logger = logging.get_logger("diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion")
-
- prompt = 25 * "@"
- with CaptureLogger(logger) as cap_logger_3:
- text_embeddings_3 = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- prompt = 100 * "@"
- with CaptureLogger(logger) as cap_logger:
- text_embeddings = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- negative_prompt = "Hello"
- with CaptureLogger(logger) as cap_logger_2:
- text_embeddings_2 = sd_pipe._encode_prompt(
- prompt, torch_device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- assert text_embeddings_3.shape == text_embeddings_2.shape == text_embeddings.shape
- assert text_embeddings.shape[1] == 77
-
- assert cap_logger.out == cap_logger_2.out
- # 100 - 77 + 1 (BOS token) + 1 (EOS token) = 25
- assert cap_logger.out.count("@") == 25
- assert cap_logger_3.out == ""
-
- def test_stable_diffusion_height_width_opt(self):
- components = self.get_dummy_components()
- components["scheduler"] = LMSDiscreteScheduler.from_config(components["scheduler"].config)
- sd_pipe = StableDiffusionPipeline(**components)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- prompt = "hey"
-
- output = sd_pipe(prompt, num_inference_steps=1, output_type="np")
- image_shape = output.images[0].shape[:2]
- assert image_shape == (64, 64)
-
- output = sd_pipe(prompt, num_inference_steps=1, height=96, width=96, output_type="np")
- image_shape = output.images[0].shape[:2]
- assert image_shape == (96, 96)
-
- config = dict(sd_pipe.unet.config)
- config["sample_size"] = 96
- sd_pipe.unet = UNet2DConditionModel.from_config(config).to(torch_device)
- output = sd_pipe(prompt, num_inference_steps=1, output_type="np")
- image_shape = output.images[0].shape[:2]
- assert image_shape == (192, 192)
-
- def test_attention_slicing_forward_pass(self):
- super().test_attention_slicing_forward_pass(expected_max_diff=3e-3)
-
- def test_inference_batch_single_identical(self):
- super().test_inference_batch_single_identical(expected_max_diff=3e-3)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusionPipelineSlowTests(unittest.TestCase):
- def setUp(self):
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
- inputs = {
- "prompt": "a photograph of an astronaut riding a horse",
- "latents": latents,
- "generator": generator,
- "num_inference_steps": 3,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_1_1_pndm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-1")
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.43625, 0.43554, 0.36670, 0.40660, 0.39703, 0.38658, 0.43936, 0.43557, 0.40592])
- assert np.abs(image_slice - expected_slice).max() < 3e-3
-
- def test_stable_diffusion_1_4_pndm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.57400, 0.47841, 0.31625, 0.63583, 0.58306, 0.55056, 0.50825, 0.56306, 0.55748])
- assert np.abs(image_slice - expected_slice).max() < 3e-3
-
- def test_stable_diffusion_ddim(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None)
- sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.38019, 0.28647, 0.27321, 0.40377, 0.38290, 0.35446, 0.39218, 0.38165, 0.42239])
- assert np.abs(image_slice - expected_slice).max() < 1e-4
-
- def test_stable_diffusion_lms(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None)
- sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.10542, 0.09620, 0.07332, 0.09015, 0.09382, 0.07597, 0.08496, 0.07806, 0.06455])
- assert np.abs(image_slice - expected_slice).max() < 3e-3
-
- def test_stable_diffusion_dpm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", safety_checker=None)
- sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe = sd_pipe.to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1].flatten()
-
- assert image.shape == (1, 512, 512, 3)
- expected_slice = np.array([0.03503, 0.03494, 0.01087, 0.03128, 0.02552, 0.00803, 0.00742, 0.00372, 0.00000])
- assert np.abs(image_slice - expected_slice).max() < 3e-3
-
- def test_stable_diffusion_attention_slicing(self):
- torch.cuda.reset_peak_memory_stats()
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- # enable attention slicing
- pipe.enable_attention_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- image_sliced = pipe(**inputs).images
-
- mem_bytes = torch.cuda.max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
- # make sure that less than 3.75 GB is allocated
- assert mem_bytes < 3.75 * 10**9
-
- # disable slicing
- pipe.disable_attention_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- image = pipe(**inputs).images
-
- # make sure that more than 3.75 GB is allocated
- mem_bytes = torch.cuda.max_memory_allocated()
- assert mem_bytes > 3.75 * 10**9
- assert np.abs(image_sliced - image).max() < 1e-3
-
- def test_stable_diffusion_vae_slicing(self):
- torch.cuda.reset_peak_memory_stats()
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- # enable vae slicing
- pipe.enable_vae_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- inputs["prompt"] = [inputs["prompt"]] * 4
- inputs["latents"] = torch.cat([inputs["latents"]] * 4)
- image_sliced = pipe(**inputs).images
-
- mem_bytes = torch.cuda.max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
- # make sure that less than 4 GB is allocated
- assert mem_bytes < 4e9
-
- # disable vae slicing
- pipe.disable_vae_slicing()
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- inputs["prompt"] = [inputs["prompt"]] * 4
- inputs["latents"] = torch.cat([inputs["latents"]] * 4)
- image = pipe(**inputs).images
-
- # make sure that more than 4 GB is allocated
- mem_bytes = torch.cuda.max_memory_allocated()
- assert mem_bytes > 4e9
- # There is a small discrepancy at the image borders vs. a fully batched version.
- assert np.abs(image_sliced - image).max() < 1e-2
-
- def test_stable_diffusion_vae_tiling(self):
- torch.cuda.reset_peak_memory_stats()
- model_id = "CompVis/stable-diffusion-v1-4"
- pipe = StableDiffusionPipeline.from_pretrained(model_id, revision="fp16", torch_dtype=torch.float16)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
- pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
- pipe.vae = pipe.vae.to(memory_format=torch.channels_last)
-
- prompt = "a photograph of an astronaut riding a horse"
-
- # enable vae tiling
- pipe.enable_vae_tiling()
- pipe.enable_model_cpu_offload()
- generator = torch.Generator(device="cpu").manual_seed(0)
- output_chunked = pipe(
- [prompt],
- width=1024,
- height=1024,
- generator=generator,
- guidance_scale=7.5,
- num_inference_steps=2,
- output_type="numpy",
- )
- image_chunked = output_chunked.images
-
- mem_bytes = torch.cuda.max_memory_allocated()
-
- # disable vae tiling
- pipe.disable_vae_tiling()
- generator = torch.Generator(device="cpu").manual_seed(0)
- output = pipe(
- [prompt],
- width=1024,
- height=1024,
- generator=generator,
- guidance_scale=7.5,
- num_inference_steps=2,
- output_type="numpy",
- )
- image = output.images
-
- assert mem_bytes < 1e10
- assert np.abs(image_chunked.flatten() - image.flatten()).max() < 1e-2
-
- def test_stable_diffusion_fp16_vs_autocast(self):
- # this test makes sure that the original model with autocast
- # and the new model with fp16 yield the same result
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- image_fp16 = pipe(**inputs).images
-
- with torch.autocast(torch_device):
- inputs = self.get_inputs(torch_device)
- image_autocast = pipe(**inputs).images
-
- # Make sure results are close enough
- diff = np.abs(image_fp16.flatten() - image_autocast.flatten())
-        # They ARE different since ops are not always run at the same precision
- # however, they should be extremely close.
- assert diff.mean() < 2e-2
-
- def test_stable_diffusion_intermediate_state(self):
- number_of_steps = 0
-
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
- callback_fn.has_been_called = True
- nonlocal number_of_steps
- number_of_steps += 1
- if step == 1:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array(
- [-0.5693, -0.3018, -0.9746, 0.0518, -0.8770, 0.7559, -1.7402, 0.1022, 1.1582]
- )
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
- elif step == 2:
- latents = latents.detach().cpu().numpy()
- assert latents.shape == (1, 4, 64, 64)
- latents_slice = latents[0, -3:, -3:, -1]
- expected_slice = np.array(
- [-0.1958, -0.2993, -1.0166, -0.5005, -0.4810, 0.6162, -0.9492, 0.6621, 1.4492]
- )
-
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
-
- callback_fn.has_been_called = False
-
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing()
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- pipe(**inputs, callback=callback_fn, callback_steps=1)
- assert callback_fn.has_been_called
- assert number_of_steps == inputs["num_inference_steps"]
-
- def test_stable_diffusion_low_cpu_mem_usage(self):
- pipeline_id = "CompVis/stable-diffusion-v1-4"
-
- start_time = time.time()
- pipeline_low_cpu_mem_usage = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16)
- pipeline_low_cpu_mem_usage.to(torch_device)
- low_cpu_mem_usage_time = time.time() - start_time
-
- start_time = time.time()
- _ = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16, low_cpu_mem_usage=False)
- normal_load_time = time.time() - start_time
-
- assert 2 * low_cpu_mem_usage_time < normal_load_time
-
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- pipe.enable_attention_slicing(1)
- pipe.enable_sequential_cpu_offload()
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
- _ = pipe(**inputs)
-
- mem_bytes = torch.cuda.max_memory_allocated()
- # make sure that less than 2.8 GB is allocated
- assert mem_bytes < 2.8 * 10**9
-
- def test_stable_diffusion_pipeline_with_model_offloading(self):
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
-
- # Normal inference
-
- pipe = StableDiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- torch_dtype=torch.float16,
- )
- pipe.unet.set_default_attn_processor()
- pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
- outputs = pipe(**inputs)
- mem_bytes = torch.cuda.max_memory_allocated()
-
- # With model offloading
-
- # Reload but don't move to cuda
- pipe = StableDiffusionPipeline.from_pretrained(
- "CompVis/stable-diffusion-v1-4",
- torch_dtype=torch.float16,
- )
- pipe.unet.set_default_attn_processor()
-
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe.enable_model_cpu_offload()
- pipe.set_progress_bar_config(disable=None)
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
-
- outputs_offloaded = pipe(**inputs)
- mem_bytes_offloaded = torch.cuda.max_memory_allocated()
-
- assert np.abs(outputs.images - outputs_offloaded.images).max() < 1e-3
- assert mem_bytes_offloaded < mem_bytes
- assert mem_bytes_offloaded < 3.5 * 10**9
- for module in pipe.text_encoder, pipe.unet, pipe.vae, pipe.safety_checker:
- assert module.device == torch.device("cpu")
-
- # With attention slicing
- torch.cuda.empty_cache()
- torch.cuda.reset_max_memory_allocated()
- torch.cuda.reset_peak_memory_stats()
-
- pipe.enable_attention_slicing()
- _ = pipe(**inputs)
- mem_bytes_slicing = torch.cuda.max_memory_allocated()
-
- assert mem_bytes_slicing < mem_bytes_offloaded
- assert mem_bytes_slicing < 3 * 10**9
-
- def test_stable_diffusion_textual_inversion(self):
- pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
- pipe.load_textual_inversion("sd-concepts-library/low-poly-hd-logos-icons")
-
- a111_file = hf_hub_download("hf-internal-testing/text_inv_embedding_a1111_format", "winter_style.pt")
- a111_file_neg = hf_hub_download(
- "hf-internal-testing/text_inv_embedding_a1111_format", "winter_style_negative.pt"
- )
- pipe.load_textual_inversion(a111_file)
- pipe.load_textual_inversion(a111_file_neg)
- pipe.to("cuda")
-
- generator = torch.Generator(device="cpu").manual_seed(1)
-
- prompt = "An logo of a turtle in strong Style-Winter with "
- neg_prompt = "Style-Winter-neg"
-
- image = pipe(prompt=prompt, negative_prompt=neg_prompt, generator=generator, output_type="np").images[0]
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/text_inv/winter_logo_style.npy"
- )
-
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 8e-1
-
- @require_torch_2
- def test_stable_diffusion_compile(self):
- seed = 0
- inputs = self.get_inputs(torch_device, seed=seed)
- # Can't pickle a Generator object
- del inputs["generator"]
- inputs["torch_device"] = torch_device
- inputs["seed"] = seed
- run_test_in_subprocess(test_case=self, target_func=_test_stable_diffusion_compile, inputs=inputs)
-
-
-@slow
-@require_torch_gpu
-class StableDiffusionPipelineCkptTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_download_from_hub(self):
- ckpt_paths = [
- "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
- "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix_base.ckpt",
- ]
-
- for ckpt_path in ckpt_paths:
- pipe = StableDiffusionPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16)
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.to("cuda")
-
- image_out = pipe("test", num_inference_steps=1, output_type="np").images[0]
-
- assert image_out.shape == (512, 512, 3)
-
- def test_download_local(self):
- filename = hf_hub_download("runwayml/stable-diffusion-v1-5", filename="v1-5-pruned-emaonly.ckpt")
-
- pipe = StableDiffusionPipeline.from_single_file(filename, torch_dtype=torch.float16)
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.to("cuda")
-
- image_out = pipe("test", num_inference_steps=1, output_type="np").images[0]
-
- assert image_out.shape == (512, 512, 3)
-
- def test_download_ckpt_diff_format_is_same(self):
- ckpt_path = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt"
-
- pipe = StableDiffusionPipeline.from_single_file(ckpt_path)
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.unet.set_attn_processor(AttnProcessor())
- pipe.to("cuda")
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image_ckpt = pipe("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
-
- pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
- pipe.unet.set_attn_processor(AttnProcessor())
- pipe.to("cuda")
-
- generator = torch.Generator(device="cpu").manual_seed(0)
- image = pipe("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
-
- assert np.max(np.abs(image - image_ckpt)) < 1e-4
-
-
-@nightly
-@require_torch_gpu
-class StableDiffusionPipelineNightlyTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
- generator = torch.Generator(device=generator_device).manual_seed(seed)
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
- inputs = {
- "prompt": "a photograph of an astronaut riding a horse",
- "latents": latents,
- "generator": generator,
- "num_inference_steps": 50,
- "guidance_scale": 7.5,
- "output_type": "numpy",
- }
- return inputs
-
- def test_stable_diffusion_1_4_pndm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_4_pndm.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_1_5_pndm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(torch_device)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_5_pndm.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_ddim(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device)
- sd_pipe.scheduler = DDIMScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_4_ddim.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 3e-3
-
- def test_stable_diffusion_lms(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device)
- sd_pipe.scheduler = LMSDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_4_lms.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_euler(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device)
- sd_pipe.scheduler = EulerDiscreteScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_4_euler.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
-
- def test_stable_diffusion_dpm(self):
- sd_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to(torch_device)
- sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config)
- sd_pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_inputs(torch_device)
- inputs["num_inference_steps"] = 25
- image = sd_pipe(**inputs).images[0]
-
- expected_image = load_numpy(
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
- "/stable_diffusion_text2img/stable_diffusion_1_4_dpm_multi.npy"
- )
- max_diff = np.abs(expected_image - image).max()
- assert max_diff < 1e-3
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_parallel.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_parallel.py
deleted file mode 100644
index b96e12f60fb3fc7a6f7dda235c048b93c242b034..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_ddim_parallel.py
+++ /dev/null
@@ -1,188 +0,0 @@
-# Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import torch
-
-from diffusers import DDIMParallelScheduler
-
-from .test_schedulers import SchedulerCommonTest
-
-
-class DDIMParallelSchedulerTest(SchedulerCommonTest):
- scheduler_classes = (DDIMParallelScheduler,)
- forward_default_kwargs = (("eta", 0.0), ("num_inference_steps", 50))
-
- def get_scheduler_config(self, **kwargs):
- config = {
- "num_train_timesteps": 1000,
- "beta_start": 0.0001,
- "beta_end": 0.02,
- "beta_schedule": "linear",
- "clip_sample": True,
- }
-
- config.update(**kwargs)
- return config
-
- def full_loop(self, **config):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(**config)
- scheduler = scheduler_class(**scheduler_config)
-
- num_inference_steps, eta = 10, 0.0
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter
-
- scheduler.set_timesteps(num_inference_steps)
-
- for t in scheduler.timesteps:
- residual = model(sample, t)
- sample = scheduler.step(residual, t, sample, eta).prev_sample
-
- return sample
-
- def test_timesteps(self):
- for timesteps in [100, 500, 1000]:
- self.check_over_configs(num_train_timesteps=timesteps)
-
- def test_steps_offset(self):
- for steps_offset in [0, 1]:
- self.check_over_configs(steps_offset=steps_offset)
-
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(steps_offset=1)
- scheduler = scheduler_class(**scheduler_config)
- scheduler.set_timesteps(5)
- assert torch.equal(scheduler.timesteps, torch.LongTensor([801, 601, 401, 201, 1]))
-
- def test_betas(self):
- for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]):
- self.check_over_configs(beta_start=beta_start, beta_end=beta_end)
-
- def test_schedules(self):
- for schedule in ["linear", "squaredcos_cap_v2"]:
- self.check_over_configs(beta_schedule=schedule)
-
- def test_prediction_type(self):
- for prediction_type in ["epsilon", "v_prediction"]:
- self.check_over_configs(prediction_type=prediction_type)
-
- def test_clip_sample(self):
- for clip_sample in [True, False]:
- self.check_over_configs(clip_sample=clip_sample)
-
- def test_timestep_spacing(self):
- for timestep_spacing in ["trailing", "leading"]:
- self.check_over_configs(timestep_spacing=timestep_spacing)
-
- def test_rescale_betas_zero_snr(self):
- for rescale_betas_zero_snr in [True, False]:
- self.check_over_configs(rescale_betas_zero_snr=rescale_betas_zero_snr)
-
- def test_thresholding(self):
- self.check_over_configs(thresholding=False)
- for threshold in [0.5, 1.0, 2.0]:
- for prediction_type in ["epsilon", "v_prediction"]:
- self.check_over_configs(
- thresholding=True,
- prediction_type=prediction_type,
- sample_max_value=threshold,
- )
-
- def test_time_indices(self):
- for t in [1, 10, 49]:
- self.check_over_forward(time_step=t)
-
- def test_inference_steps(self):
- for t, num_inference_steps in zip([1, 10, 50], [10, 50, 500]):
- self.check_over_forward(time_step=t, num_inference_steps=num_inference_steps)
-
- def test_eta(self):
- for t, eta in zip([1, 10, 49], [0.0, 0.5, 1.0]):
- self.check_over_forward(time_step=t, eta=eta)
-
- def test_variance(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- assert torch.sum(torch.abs(scheduler._get_variance(0, 0) - 0.0)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(420, 400) - 0.14771)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(980, 960) - 0.32460)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(0, 0) - 0.0)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(487, 486) - 0.00979)) < 1e-5
- assert torch.sum(torch.abs(scheduler._get_variance(999, 998) - 0.02)) < 1e-5
-
- def test_batch_step_no_noise(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- num_inference_steps, eta = 10, 0.0
- scheduler.set_timesteps(num_inference_steps)
-
- model = self.dummy_model()
- sample1 = self.dummy_sample_deter
- sample2 = self.dummy_sample_deter + 0.1
- sample3 = self.dummy_sample_deter - 0.1
-
- per_sample_batch = sample1.shape[0]
- samples = torch.stack([sample1, sample2, sample3], dim=0)
- timesteps = torch.arange(num_inference_steps)[0:3, None].repeat(1, per_sample_batch)
-
- residual = model(samples.flatten(0, 1), timesteps.flatten(0, 1))
- pred_prev_sample = scheduler.batch_step_no_noise(residual, timesteps.flatten(0, 1), samples.flatten(0, 1), eta)
-
- result_sum = torch.sum(torch.abs(pred_prev_sample))
- result_mean = torch.mean(torch.abs(pred_prev_sample))
-
- assert abs(result_sum.item() - 1147.7904) < 1e-2
- assert abs(result_mean.item() - 0.4982) < 1e-3
-
- def test_full_loop_no_noise(self):
- sample = self.full_loop()
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 172.0067) < 1e-2
- assert abs(result_mean.item() - 0.223967) < 1e-3
-
- def test_full_loop_with_v_prediction(self):
- sample = self.full_loop(prediction_type="v_prediction")
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 52.5302) < 1e-2
- assert abs(result_mean.item() - 0.0684) < 1e-3
-
- def test_full_loop_with_set_alpha_to_one(self):
- # We specify different beta, so that the first alpha is 0.99
- sample = self.full_loop(set_alpha_to_one=True, beta_start=0.01)
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 149.8295) < 1e-2
- assert abs(result_mean.item() - 0.1951) < 1e-3
-
- def test_full_loop_with_no_set_alpha_to_one(self):
- # We specify different beta, so that the first alpha is 0.99
- sample = self.full_loop(set_alpha_to_one=False, beta_start=0.01)
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 149.0784) < 1e-2
- assert abs(result_mean.item() - 0.1941) < 1e-3
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/__init__.py
deleted file mode 100644
index 4ff995c0861490941f8cfc19ebbd41a2ee7e2d65..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/visualization/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .image import (color_val_matplotlib, imshow_det_bboxes,
- imshow_gt_det_bboxes)
-
-__all__ = ['imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib']
diff --git a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/benchmark.py b/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/benchmark.py
deleted file mode 100644
index 76ecc3a96d6a60c75037976dd93edb9dc4a41d57..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/benchmark.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import argparse
-import time
-
-import torch
-from mmcv import Config, DictAction
-from mmcv.cnn import fuse_conv_bn
-from mmcv.parallel import MMDataParallel
-from mmcv.runner import load_checkpoint, wrap_fp16_model
-
-from mmdet.datasets import (build_dataloader, build_dataset,
- replace_ImageToTensor)
-from mmdet.models import build_detector
-
-
-def parse_args():
- parser = argparse.ArgumentParser(description='MMDet benchmark a model')
- parser.add_argument('config', help='test config file path')
- parser.add_argument('checkpoint', help='checkpoint file')
- parser.add_argument(
- '--log-interval', default=50, help='interval of logging')
- parser.add_argument(
- '--fuse-conv-bn',
- action='store_true',
- help='Whether to fuse conv and bn, this will slightly increase'
- 'the inference speed')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- help='override some settings in the used config, the key-value pair '
- 'in xxx=yyy format will be merged into config file. If the value to '
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
- 'Note that the quotation marks are necessary and that no white space '
- 'is allowed.')
- args = parser.parse_args()
- return args
-
-
-def main():
- args = parse_args()
-
- cfg = Config.fromfile(args.config)
- if args.cfg_options is not None:
- cfg.merge_from_dict(args.cfg_options)
- # import modules from string list.
- if cfg.get('custom_imports', None):
- from mmcv.utils import import_modules_from_strings
- import_modules_from_strings(**cfg['custom_imports'])
- # set cudnn_benchmark
- if cfg.get('cudnn_benchmark', False):
- torch.backends.cudnn.benchmark = True
- cfg.model.pretrained = None
- cfg.data.test.test_mode = True
-
- # build the dataloader
- samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1)
- if samples_per_gpu > 1:
- # Replace 'ImageToTensor' to 'DefaultFormatBundle'
- cfg.data.test.pipeline = replace_ImageToTensor(cfg.data.test.pipeline)
- dataset = build_dataset(cfg.data.test)
- data_loader = build_dataloader(
- dataset,
- samples_per_gpu=1,
- workers_per_gpu=cfg.data.workers_per_gpu,
- dist=False,
- shuffle=False)
-
- # build the model and load checkpoint
- cfg.model.train_cfg = None
- model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
- fp16_cfg = cfg.get('fp16', None)
- if fp16_cfg is not None:
- wrap_fp16_model(model)
- load_checkpoint(model, args.checkpoint, map_location='cpu')
- if args.fuse_conv_bn:
- model = fuse_conv_bn(model)
-
- model = MMDataParallel(model, device_ids=[0])
-
- model.eval()
-
- # the first several iterations may be very slow so skip them
- num_warmup = 5
- pure_inf_time = 0
-
- # benchmark with 2000 image and take the average
- for i, data in enumerate(data_loader):
-
- torch.cuda.synchronize()
- start_time = time.perf_counter()
-
- with torch.no_grad():
- model(return_loss=False, rescale=True, **data)
-
- torch.cuda.synchronize()
- elapsed = time.perf_counter() - start_time
-
- if i >= num_warmup:
- pure_inf_time += elapsed
- if (i + 1) % args.log_interval == 0:
- fps = (i + 1 - num_warmup) / pure_inf_time
- print(f'Done image [{i + 1:<3}/ 2000], fps: {fps:.1f} img / s')
-
- if (i + 1) == 2000:
- pure_inf_time += elapsed
- fps = (i + 1 - num_warmup) / pure_inf_time
- print(f'Overall fps: {fps:.1f} img / s')
- break
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index e35d1988f0bb7ad47a73ef1a64b73d9b40e0ba40..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3_r50-d8.py',
- '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(align_corners=True),
- auxiliary_head=dict(align_corners=True),
- test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/Andy1621/uniformer_video_demo/transforms.py b/spaces/Andy1621/uniformer_video_demo/transforms.py
deleted file mode 100644
index 2483fdf8569e25978b922774e84cc2244315fe61..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_video_demo/transforms.py
+++ /dev/null
@@ -1,443 +0,0 @@
-import torchvision
-import random
-from PIL import Image, ImageOps
-import numpy as np
-import numbers
-import math
-import torch
-
-
-class GroupRandomCrop(object):
- def __init__(self, size):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class MultiGroupRandomCrop(object):
- def __init__(self, size, groups=1):
- if isinstance(size, numbers.Number):
- self.size = (int(size), int(size))
- else:
- self.size = size
- self.groups = groups
-
- def __call__(self, img_group):
-
- w, h = img_group[0].size
- th, tw = self.size
-
- out_images = list()
-
- for i in range(self.groups):
- x1 = random.randint(0, w - tw)
- y1 = random.randint(0, h - th)
-
- for img in img_group:
- assert(img.size[0] == w and img.size[1] == h)
- if w == tw and h == th:
- out_images.append(img)
- else:
- out_images.append(img.crop((x1, y1, x1 + tw, y1 + th)))
-
- return out_images
-
-
-class GroupCenterCrop(object):
- def __init__(self, size):
- self.worker = torchvision.transforms.CenterCrop(size)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
-
-class GroupRandomHorizontalFlip(object):
- """Randomly horizontally flips the given PIL.Image with a probability of 0.5
- """
-
- def __init__(self, is_flow=False):
- self.is_flow = is_flow
-
- def __call__(self, img_group, is_flow=False):
- v = random.random()
- if v < 0.5:
- ret = [img.transpose(Image.FLIP_LEFT_RIGHT) for img in img_group]
- if self.is_flow:
- for i in range(0, len(ret), 2):
- # invert flow pixel values when flipping
- ret[i] = ImageOps.invert(ret[i])
- return ret
- else:
- return img_group
-
-
-class GroupNormalize(object):
- def __init__(self, mean, std):
- self.mean = mean
- self.std = std
-
- def __call__(self, tensor):
- rep_mean = self.mean * (tensor.size()[0] // len(self.mean))
- rep_std = self.std * (tensor.size()[0] // len(self.std))
-
- # TODO: make efficient
- for t, m, s in zip(tensor, rep_mean, rep_std):
- t.sub_(m).div_(s)
-
- return tensor
-
-
-class GroupScale(object):
- """ Rescales the input PIL.Image to the given 'size'.
- 'size' will be the size of the smaller edge.
- For example, if height > width, then image will be
- rescaled to (size * height / width, size)
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.worker = torchvision.transforms.Resize(size, interpolation)
-
- def __call__(self, img_group):
- return [self.worker(img) for img in img_group]
-
-
-class GroupOverSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- offsets = GroupMultiScaleCrop.fill_fix_offset(
- False, image_w, image_h, crop_w, crop_h)
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- if self.flip:
- oversample_group.extend(flip_group)
- return oversample_group
-
-
-class GroupFullResSample(object):
- def __init__(self, crop_size, scale_size=None, flip=True):
- self.crop_size = crop_size if not isinstance(
- crop_size, int) else (crop_size, crop_size)
-
- if scale_size is not None:
- self.scale_worker = GroupScale(scale_size)
- else:
- self.scale_worker = None
- self.flip = flip
-
- def __call__(self, img_group):
-
- if self.scale_worker is not None:
- img_group = self.scale_worker(img_group)
-
- image_w, image_h = img_group[0].size
- crop_w, crop_h = self.crop_size
-
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- offsets = list()
- offsets.append((0 * w_step, 2 * h_step)) # left
- offsets.append((4 * w_step, 2 * h_step)) # right
- offsets.append((2 * w_step, 2 * h_step)) # center
-
- oversample_group = list()
- for o_w, o_h in offsets:
- normal_group = list()
- flip_group = list()
- for i, img in enumerate(img_group):
- crop = img.crop((o_w, o_h, o_w + crop_w, o_h + crop_h))
- normal_group.append(crop)
- if self.flip:
- flip_crop = crop.copy().transpose(Image.FLIP_LEFT_RIGHT)
-
- if img.mode == 'L' and i % 2 == 0:
- flip_group.append(ImageOps.invert(flip_crop))
- else:
- flip_group.append(flip_crop)
-
- oversample_group.extend(normal_group)
- oversample_group.extend(flip_group)
- return oversample_group
-
-
-class GroupMultiScaleCrop(object):
-
- def __init__(self, input_size, scales=None, max_distort=1,
- fix_crop=True, more_fix_crop=True):
- self.scales = scales if scales is not None else [1, .875, .75, .66]
- self.max_distort = max_distort
- self.fix_crop = fix_crop
- self.more_fix_crop = more_fix_crop
- self.input_size = input_size if not isinstance(input_size, int) else [
- input_size, input_size]
- self.interpolation = Image.BILINEAR
-
- def __call__(self, img_group):
-
- im_size = img_group[0].size
-
- crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size)
- crop_img_group = [
- img.crop(
- (offset_w,
- offset_h,
- offset_w +
- crop_w,
- offset_h +
- crop_h)) for img in img_group]
- ret_img_group = [img.resize((self.input_size[0], self.input_size[1]), self.interpolation)
- for img in crop_img_group]
- return ret_img_group
-
- def _sample_crop_size(self, im_size):
- image_w, image_h = im_size[0], im_size[1]
-
- # find a crop size
- base_size = min(image_w, image_h)
- crop_sizes = [int(base_size * x) for x in self.scales]
- crop_h = [
- self.input_size[1] if abs(
- x - self.input_size[1]) < 3 else x for x in crop_sizes]
- crop_w = [
- self.input_size[0] if abs(
- x - self.input_size[0]) < 3 else x for x in crop_sizes]
-
- pairs = []
- for i, h in enumerate(crop_h):
- for j, w in enumerate(crop_w):
- if abs(i - j) <= self.max_distort:
- pairs.append((w, h))
-
- crop_pair = random.choice(pairs)
- if not self.fix_crop:
- w_offset = random.randint(0, image_w - crop_pair[0])
- h_offset = random.randint(0, image_h - crop_pair[1])
- else:
- w_offset, h_offset = self._sample_fix_offset(
- image_w, image_h, crop_pair[0], crop_pair[1])
-
- return crop_pair[0], crop_pair[1], w_offset, h_offset
-
- def _sample_fix_offset(self, image_w, image_h, crop_w, crop_h):
- offsets = self.fill_fix_offset(
- self.more_fix_crop, image_w, image_h, crop_w, crop_h)
- return random.choice(offsets)
-
- @staticmethod
- def fill_fix_offset(more_fix_crop, image_w, image_h, crop_w, crop_h):
- w_step = (image_w - crop_w) // 4
- h_step = (image_h - crop_h) // 4
-
- ret = list()
- ret.append((0, 0)) # upper left
- ret.append((4 * w_step, 0)) # upper right
- ret.append((0, 4 * h_step)) # lower left
- ret.append((4 * w_step, 4 * h_step)) # lower right
- ret.append((2 * w_step, 2 * h_step)) # center
-
- if more_fix_crop:
- ret.append((0, 2 * h_step)) # center left
- ret.append((4 * w_step, 2 * h_step)) # center right
- ret.append((2 * w_step, 4 * h_step)) # lower center
- ret.append((2 * w_step, 0 * h_step)) # upper center
-
- ret.append((1 * w_step, 1 * h_step)) # upper left quarter
- ret.append((3 * w_step, 1 * h_step)) # upper right quarter
- ret.append((1 * w_step, 3 * h_step)) # lower left quarter
- ret.append((3 * w_step, 3 * h_step)) # lower right quarter
-
- return ret
-
-
-class GroupRandomSizedCrop(object):
- """Random crop the given PIL.Image to a random size of (0.08 to 1.0) of the original size
- and a random aspect ratio of 3/4 to 4/3 of the original aspect ratio
- This is popularly used to train the Inception networks
- size: size of the smaller edge
- interpolation: Default: PIL.Image.BILINEAR
- """
-
- def __init__(self, size, interpolation=Image.BILINEAR):
- self.size = size
- self.interpolation = interpolation
-
- def __call__(self, img_group):
- for attempt in range(10):
- area = img_group[0].size[0] * img_group[0].size[1]
- target_area = random.uniform(0.08, 1.0) * area
- aspect_ratio = random.uniform(3. / 4, 4. / 3)
-
- w = int(round(math.sqrt(target_area * aspect_ratio)))
- h = int(round(math.sqrt(target_area / aspect_ratio)))
-
- if random.random() < 0.5:
- w, h = h, w
-
- if w <= img_group[0].size[0] and h <= img_group[0].size[1]:
- x1 = random.randint(0, img_group[0].size[0] - w)
- y1 = random.randint(0, img_group[0].size[1] - h)
- found = True
- break
- else:
- found = False
- x1 = 0
- y1 = 0
-
- if found:
- out_group = list()
- for img in img_group:
- img = img.crop((x1, y1, x1 + w, y1 + h))
- assert(img.size == (w, h))
- out_group.append(
- img.resize(
- (self.size, self.size), self.interpolation))
- return out_group
- else:
- # Fallback
- scale = GroupScale(self.size, interpolation=self.interpolation)
- crop = GroupRandomCrop(self.size)
- return crop(scale(img_group))
-
-
-class ConvertDataFormat(object):
- def __init__(self, model_type):
- self.model_type = model_type
-
- def __call__(self, images):
- if self.model_type == '2D':
- return images
- tc, h, w = images.size()
- t = tc // 3
- images = images.view(t, 3, h, w)
- images = images.permute(1, 0, 2, 3)
- return images
-
-
-class Stack(object):
-
- def __init__(self, roll=False):
- self.roll = roll
-
- def __call__(self, img_group):
- if img_group[0].mode == 'L':
- return np.concatenate([np.expand_dims(x, 2)
- for x in img_group], axis=2)
- elif img_group[0].mode == 'RGB':
- if self.roll:
- return np.concatenate([np.array(x)[:, :, ::-1]
- for x in img_group], axis=2)
- else:
- #print(np.concatenate(img_group, axis=2).shape)
- # print(img_group[0].shape)
- return np.concatenate(img_group, axis=2)
-
-
-class ToTorchFormatTensor(object):
- """ Converts a PIL.Image (RGB) or numpy.ndarray (H x W x C) in the range [0, 255]
- to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] """
-
- def __init__(self, div=True):
- self.div = div
-
- def __call__(self, pic):
- if isinstance(pic, np.ndarray):
- # handle numpy array
- img = torch.from_numpy(pic).permute(2, 0, 1).contiguous()
- else:
- # handle PIL Image
- img = torch.ByteTensor(
- torch.ByteStorage.from_buffer(
- pic.tobytes()))
- img = img.view(pic.size[1], pic.size[0], len(pic.mode))
- # put it from HWC to CHW format
- # yikes, this transpose takes 80% of the loading time/CPU
- img = img.transpose(0, 1).transpose(0, 2).contiguous()
- return img.float().div(255) if self.div else img.float()
-
-
-class IdentityTransform(object):
-
- def __call__(self, data):
- return data
-
-
-if __name__ == "__main__":
- trans = torchvision.transforms.Compose([
- GroupScale(256),
- GroupRandomCrop(224),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225]
- )]
- )
-
- im = Image.open('../tensorflow-model-zoo.torch/lena_299.png')
-
- color_group = [im] * 3
- rst = trans(color_group)
-
- gray_group = [im.convert('L')] * 9
- gray_rst = trans(gray_group)
-
- trans2 = torchvision.transforms.Compose([
- GroupRandomSizedCrop(256),
- Stack(),
- ToTorchFormatTensor(),
- GroupNormalize(
- mean=[.485, .456, .406],
- std=[.229, .224, .225])
- ])
- print(trans2(color_group))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/exllamav2_hf.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/exllamav2_hf.py
deleted file mode 100644
index 71cf513fc9e68d8456f3b6920667691768b54b44..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/exllamav2_hf.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import os
-from pathlib import Path
-from typing import Any, Dict, Optional, Union
-
-import torch
-from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config
-from torch.nn import CrossEntropyLoss
-from transformers import GenerationConfig, PretrainedConfig, PreTrainedModel
-from transformers.modeling_outputs import CausalLMOutputWithPast
-
-from modules import shared
-from modules.logging_colors import logger
-
-try:
- import flash_attn
-except ModuleNotFoundError:
- logger.warning(
- 'You are running ExLlamaV2 without flash-attention. This will cause the VRAM usage '
- 'to be a lot higher than it could be.\n'
- 'Try installing flash-attention following the instructions here: '
- 'https://github.com/Dao-AILab/flash-attention#installation-and-features'
- )
- pass
-
-
-class Exllamav2HF(PreTrainedModel):
- def __init__(self, config: ExLlamaV2Config):
- super().__init__(PretrainedConfig())
- self.ex_config = config
- self.ex_model = ExLlamaV2(config)
- split = None
- if shared.args.gpu_split:
- split = [float(alloc) for alloc in shared.args.gpu_split.split(",")]
-
- self.ex_model.load(split)
-
- self.generation_config = GenerationConfig()
-
- self.ex_cache = ExLlamaV2Cache(self.ex_model)
- self.past_seq = None
-
- if shared.args.cfg_cache:
- self.ex_cache_negative = ExLlamaV2Cache(self.ex_model)
- self.past_seq_negative = None
-
- def _validate_model_class(self):
- pass
-
- def _validate_model_kwargs(self, model_kwargs: Dict[str, Any]):
- pass
-
- def prepare_inputs_for_generation(self, input_ids, **kwargs):
- return {'input_ids': input_ids, **kwargs}
-
- @property
- def device(self) -> torch.device:
- return torch.device(0)
-
- def __call__(self, *args, **kwargs):
- use_cache = kwargs.get('use_cache', True)
- labels = kwargs.get('labels', None)
- past_key_values = kwargs.get('past_key_values', None)
-
- if len(args) > 0:
- if not shared.args.cfg_cache:
- logger.error("Please enable the cfg-cache option to use CFG with ExLlamav2_HF.")
- return
-
- input_ids = args[0]
- is_negative = True
- past_seq = self.past_seq_negative
- ex_cache = self.ex_cache_negative
- else:
- input_ids = kwargs['input_ids']
- is_negative = False
- past_seq = self.past_seq
- ex_cache = self.ex_cache
-
- seq = input_ids[0].tolist()
- if is_negative and past_key_values is not None:
- seq = past_key_values + seq
-
- seq_tensor = torch.tensor(seq)
- reset = True
-
- # Make the forward call
- if labels is None:
- if past_seq is not None:
- min_length = min(past_seq.shape[0], seq_tensor.shape[0])
- indices = torch.nonzero(~torch.eq(past_seq[:min_length], seq_tensor[:min_length]))
- if len(indices) > 0:
- longest_prefix = indices[0].item()
- else:
- longest_prefix = min_length
-
- if longest_prefix > 0:
- reset = False
- ex_cache.current_seq_len = longest_prefix
- if len(seq_tensor) - longest_prefix > 1:
- self.ex_model.forward(seq_tensor[longest_prefix:-1].view(1, -1), ex_cache, preprocess_only=True)
- elif len(seq_tensor) == longest_prefix:
- # Very tricky: if the prefix we are reusing *is* the input_ids, then we have to back up the cache pointer by one,
- # because we feed input_ids[-1] to forward() below, but that last token is already in the cache!
- ex_cache.current_seq_len -= 1
-
- if reset:
- ex_cache.current_seq_len = 0
- if len(seq_tensor) > 1:
- self.ex_model.forward(seq_tensor[:-1].view(1, -1), ex_cache, preprocess_only=True)
-
- logits = self.ex_model.forward(seq_tensor[-1:].view(1, -1), ex_cache).to(input_ids.device)
- else:
- ex_cache.current_seq_len = 0
- logits = self.ex_model.forward(seq_tensor.view(1, -1), ex_cache, last_id_only=False)
-
- if is_negative:
- self.past_seq_negative = seq_tensor
- else:
- self.past_seq = seq_tensor
-
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous()
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- shift_logits = shift_logits.view(-1, logits.shape[-1])
- shift_labels = shift_labels.view(-1)
- # Enable model parallelism
- shift_labels = shift_labels.to(shift_logits.device)
- loss = loss_fct(shift_logits, shift_labels)
-
- return CausalLMOutputWithPast(logits=logits, past_key_values=seq if use_cache else None, loss=loss)
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs):
- assert len(model_args) == 0 and len(kwargs) == 0, "extra args is currently not supported"
- if isinstance(pretrained_model_name_or_path, str):
- pretrained_model_name_or_path = Path(pretrained_model_name_or_path)
-
- pretrained_model_name_or_path = Path(f'{shared.args.model_dir}') / Path(pretrained_model_name_or_path)
-
- config = ExLlamaV2Config()
- config.model_dir = str(pretrained_model_name_or_path)
- config.prepare()
-
- config.max_seq_len = shared.args.max_seq_len
- config.scale_pos_emb = shared.args.compress_pos_emb
- config.scale_alpha_value = shared.args.alpha_value
-
- return Exllamav2HF(config)
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/info.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/info.py
deleted file mode 100644
index 29f2e5598ae2bb5866ccd15a7d3b4de33c0cd14d..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/info.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import glob
-import os
-
-import torch
-
-if torch.__version__ == 'parrots':
- import parrots
-
- def get_compiler_version():
- return 'GCC ' + parrots.version.compiler
-
- def get_compiling_cuda_version():
- return parrots.version.cuda
-else:
- from ..utils import ext_loader
- ext_module = ext_loader.load_ext(
- '_ext', ['get_compiler_version', 'get_compiling_cuda_version'])
-
- def get_compiler_version():
- return ext_module.get_compiler_version()
-
- def get_compiling_cuda_version():
- return ext_module.get_compiling_cuda_version()
-
-
-def get_onnxruntime_op_path():
- wildcard = os.path.join(
- os.path.abspath(os.path.dirname(os.path.dirname(__file__))),
- '_ext_ort.*.so')
-
- paths = glob.glob(wildcard)
- if len(paths) > 0:
- return paths[0]
- else:
- return ''
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/distributions/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/distributions/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ashish17/Ashish_Open_Chat_AI_17/app.py b/spaces/Ashish17/Ashish_Open_Chat_AI_17/app.py
deleted file mode 100644
index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000
--- a/spaces/Ashish17/Ashish_Open_Chat_AI_17/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """You are a helpful assistant to answer all user queries.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/default_styles.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/default_styles.py
deleted file mode 100644
index dca37193abffab8b5b388018f895f197316ab652..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/default_styles.py
+++ /dev/null
@@ -1,190 +0,0 @@
-from typing import Dict
-
-from .style import Style
-
-DEFAULT_STYLES: Dict[str, Style] = {
- "none": Style.null(),
- "reset": Style(
- color="default",
- bgcolor="default",
- dim=False,
- bold=False,
- italic=False,
- underline=False,
- blink=False,
- blink2=False,
- reverse=False,
- conceal=False,
- strike=False,
- ),
- "dim": Style(dim=True),
- "bright": Style(dim=False),
- "bold": Style(bold=True),
- "strong": Style(bold=True),
- "code": Style(reverse=True, bold=True),
- "italic": Style(italic=True),
- "emphasize": Style(italic=True),
- "underline": Style(underline=True),
- "blink": Style(blink=True),
- "blink2": Style(blink2=True),
- "reverse": Style(reverse=True),
- "strike": Style(strike=True),
- "black": Style(color="black"),
- "red": Style(color="red"),
- "green": Style(color="green"),
- "yellow": Style(color="yellow"),
- "magenta": Style(color="magenta"),
- "cyan": Style(color="cyan"),
- "white": Style(color="white"),
- "inspect.attr": Style(color="yellow", italic=True),
- "inspect.attr.dunder": Style(color="yellow", italic=True, dim=True),
- "inspect.callable": Style(bold=True, color="red"),
- "inspect.async_def": Style(italic=True, color="bright_cyan"),
- "inspect.def": Style(italic=True, color="bright_cyan"),
- "inspect.class": Style(italic=True, color="bright_cyan"),
- "inspect.error": Style(bold=True, color="red"),
- "inspect.equals": Style(),
- "inspect.help": Style(color="cyan"),
- "inspect.doc": Style(dim=True),
- "inspect.value.border": Style(color="green"),
- "live.ellipsis": Style(bold=True, color="red"),
- "layout.tree.row": Style(dim=False, color="red"),
- "layout.tree.column": Style(dim=False, color="blue"),
- "logging.keyword": Style(bold=True, color="yellow"),
- "logging.level.notset": Style(dim=True),
- "logging.level.debug": Style(color="green"),
- "logging.level.info": Style(color="blue"),
- "logging.level.warning": Style(color="red"),
- "logging.level.error": Style(color="red", bold=True),
- "logging.level.critical": Style(color="red", bold=True, reverse=True),
- "log.level": Style.null(),
- "log.time": Style(color="cyan", dim=True),
- "log.message": Style.null(),
- "log.path": Style(dim=True),
- "repr.ellipsis": Style(color="yellow"),
- "repr.indent": Style(color="green", dim=True),
- "repr.error": Style(color="red", bold=True),
- "repr.str": Style(color="green", italic=False, bold=False),
- "repr.brace": Style(bold=True),
- "repr.comma": Style(bold=True),
- "repr.ipv4": Style(bold=True, color="bright_green"),
- "repr.ipv6": Style(bold=True, color="bright_green"),
- "repr.eui48": Style(bold=True, color="bright_green"),
- "repr.eui64": Style(bold=True, color="bright_green"),
- "repr.tag_start": Style(bold=True),
- "repr.tag_name": Style(color="bright_magenta", bold=True),
- "repr.tag_contents": Style(color="default"),
- "repr.tag_end": Style(bold=True),
- "repr.attrib_name": Style(color="yellow", italic=False),
- "repr.attrib_equal": Style(bold=True),
- "repr.attrib_value": Style(color="magenta", italic=False),
- "repr.number": Style(color="cyan", bold=True, italic=False),
- "repr.number_complex": Style(color="cyan", bold=True, italic=False), # same
- "repr.bool_true": Style(color="bright_green", italic=True),
- "repr.bool_false": Style(color="bright_red", italic=True),
- "repr.none": Style(color="magenta", italic=True),
- "repr.url": Style(underline=True, color="bright_blue", italic=False, bold=False),
- "repr.uuid": Style(color="bright_yellow", bold=False),
- "repr.call": Style(color="magenta", bold=True),
- "repr.path": Style(color="magenta"),
- "repr.filename": Style(color="bright_magenta"),
- "rule.line": Style(color="bright_green"),
- "rule.text": Style.null(),
- "json.brace": Style(bold=True),
- "json.bool_true": Style(color="bright_green", italic=True),
- "json.bool_false": Style(color="bright_red", italic=True),
- "json.null": Style(color="magenta", italic=True),
- "json.number": Style(color="cyan", bold=True, italic=False),
- "json.str": Style(color="green", italic=False, bold=False),
- "json.key": Style(color="blue", bold=True),
- "prompt": Style.null(),
- "prompt.choices": Style(color="magenta", bold=True),
- "prompt.default": Style(color="cyan", bold=True),
- "prompt.invalid": Style(color="red"),
- "prompt.invalid.choice": Style(color="red"),
- "pretty": Style.null(),
- "scope.border": Style(color="blue"),
- "scope.key": Style(color="yellow", italic=True),
- "scope.key.special": Style(color="yellow", italic=True, dim=True),
- "scope.equals": Style(color="red"),
- "table.header": Style(bold=True),
- "table.footer": Style(bold=True),
- "table.cell": Style.null(),
- "table.title": Style(italic=True),
- "table.caption": Style(italic=True, dim=True),
- "traceback.error": Style(color="red", italic=True),
- "traceback.border.syntax_error": Style(color="bright_red"),
- "traceback.border": Style(color="red"),
- "traceback.text": Style.null(),
- "traceback.title": Style(color="red", bold=True),
- "traceback.exc_type": Style(color="bright_red", bold=True),
- "traceback.exc_value": Style.null(),
- "traceback.offset": Style(color="bright_red", bold=True),
- "bar.back": Style(color="grey23"),
- "bar.complete": Style(color="rgb(249,38,114)"),
- "bar.finished": Style(color="rgb(114,156,31)"),
- "bar.pulse": Style(color="rgb(249,38,114)"),
- "progress.description": Style.null(),
- "progress.filesize": Style(color="green"),
- "progress.filesize.total": Style(color="green"),
- "progress.download": Style(color="green"),
- "progress.elapsed": Style(color="yellow"),
- "progress.percentage": Style(color="magenta"),
- "progress.remaining": Style(color="cyan"),
- "progress.data.speed": Style(color="red"),
- "progress.spinner": Style(color="green"),
- "status.spinner": Style(color="green"),
- "tree": Style(),
- "tree.line": Style(),
- "markdown.paragraph": Style(),
- "markdown.text": Style(),
- "markdown.em": Style(italic=True),
- "markdown.emph": Style(italic=True), # For commonmark backwards compatibility
- "markdown.strong": Style(bold=True),
- "markdown.code": Style(bold=True, color="cyan", bgcolor="black"),
- "markdown.code_block": Style(color="cyan", bgcolor="black"),
- "markdown.block_quote": Style(color="magenta"),
- "markdown.list": Style(color="cyan"),
- "markdown.item": Style(),
- "markdown.item.bullet": Style(color="yellow", bold=True),
- "markdown.item.number": Style(color="yellow", bold=True),
- "markdown.hr": Style(color="yellow"),
- "markdown.h1.border": Style(),
- "markdown.h1": Style(bold=True),
- "markdown.h2": Style(bold=True, underline=True),
- "markdown.h3": Style(bold=True),
- "markdown.h4": Style(bold=True, dim=True),
- "markdown.h5": Style(underline=True),
- "markdown.h6": Style(italic=True),
- "markdown.h7": Style(italic=True, dim=True),
- "markdown.link": Style(color="bright_blue"),
- "markdown.link_url": Style(color="blue", underline=True),
- "markdown.s": Style(strike=True),
- "iso8601.date": Style(color="blue"),
- "iso8601.time": Style(color="magenta"),
- "iso8601.timezone": Style(color="yellow"),
-}
-
-
-if __name__ == "__main__": # pragma: no cover
- import argparse
- import io
-
- from pip._vendor.rich.console import Console
- from pip._vendor.rich.table import Table
- from pip._vendor.rich.text import Text
-
- parser = argparse.ArgumentParser()
- parser.add_argument("--html", action="store_true", help="Export as HTML table")
- args = parser.parse_args()
- html: bool = args.html
- console = Console(record=True, width=70, file=io.StringIO()) if html else Console()
-
- table = Table("Name", "Styling")
-
- for style_name, style in DEFAULT_STYLES.items():
- table.add_row(Text(style_name, style=style), str(style))
-
- console.print(table)
- if html:
- print(console.export_html(inline_styles=True))
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_visualizer.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_visualizer.py
deleted file mode 100644
index 1005000f525bc876ae32a3421737e3f9fe3bc5f4..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/test_visualizer.py
+++ /dev/null
@@ -1,278 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import numpy as np
-import os
-import tempfile
-import unittest
-import cv2
-import torch
-
-from detectron2.data import MetadataCatalog
-from detectron2.structures import BoxMode, Instances, RotatedBoxes
-from detectron2.utils.visualizer import ColorMode, Visualizer
-
-
-class TestVisualizer(unittest.TestCase):
- def _random_data(self):
- H, W = 100, 100
- N = 10
- img = np.random.rand(H, W, 3) * 255
- boxxy = np.random.rand(N, 2) * (H // 2)
- boxes = np.concatenate((boxxy, boxxy + H // 2), axis=1)
-
- def _rand_poly():
- return np.random.rand(3, 2).flatten() * H
-
- polygons = [[_rand_poly() for _ in range(np.random.randint(1, 5))] for _ in range(N)]
-
- mask = np.zeros_like(img[:, :, 0], dtype=np.bool)
- mask[:40, 10:20] = 1
-
- labels = [str(i) for i in range(N)]
- return img, boxes, labels, polygons, [mask] * N
-
- @property
- def metadata(self):
- return MetadataCatalog.get("coco_2017_train")
-
- def test_draw_dataset_dict(self):
- img = np.random.rand(512, 512, 3) * 255
- dic = {
- "annotations": [
- {
- "bbox": [
- 368.9946492271106,
- 330.891438763377,
- 13.148537455410235,
- 13.644708680142685,
- ],
- "bbox_mode": BoxMode.XYWH_ABS,
- "category_id": 0,
- "iscrowd": 1,
- "segmentation": {
- "counts": "_jh52m?2N2N2N2O100O10O001N1O2MceP2",
- "size": [512, 512],
- },
- }
- ],
- "height": 512,
- "image_id": 1,
- "width": 512,
- }
- v = Visualizer(img)
- v.draw_dataset_dict(dic)
-
- v = Visualizer(img, self.metadata)
- v.draw_dataset_dict(dic)
-
- def test_draw_rotated_dataset_dict(self):
- img = np.random.rand(512, 512, 3) * 255
- dic = {
- "annotations": [
- {
- "bbox": [
- 368.9946492271106,
- 330.891438763377,
- 13.148537455410235,
- 13.644708680142685,
- 45.0,
- ],
- "bbox_mode": BoxMode.XYWHA_ABS,
- "category_id": 0,
- "iscrowd": 1,
- }
- ],
- "height": 512,
- "image_id": 1,
- "width": 512,
- }
- v = Visualizer(img, self.metadata)
- v.draw_dataset_dict(dic)
-
- def test_overlay_instances(self):
- img, boxes, labels, polygons, masks = self._random_data()
-
- v = Visualizer(img, self.metadata)
- output = v.overlay_instances(masks=polygons, boxes=boxes, labels=labels).get_image()
- self.assertEqual(output.shape, img.shape)
-
- # Test 2x scaling
- v = Visualizer(img, self.metadata, scale=2.0)
- output = v.overlay_instances(masks=polygons, boxes=boxes, labels=labels).get_image()
- self.assertEqual(output.shape[0], img.shape[0] * 2)
-
- # Test overlay masks
- v = Visualizer(img, self.metadata)
- output = v.overlay_instances(masks=masks, boxes=boxes, labels=labels).get_image()
- self.assertEqual(output.shape, img.shape)
-
- def test_overlay_instances_no_boxes(self):
- img, boxes, labels, polygons, _ = self._random_data()
- v = Visualizer(img, self.metadata)
- v.overlay_instances(masks=polygons, boxes=None, labels=labels).get_image()
-
- def test_draw_instance_predictions(self):
- img, boxes, _, _, masks = self._random_data()
- num_inst = len(boxes)
- inst = Instances((img.shape[0], img.shape[1]))
- inst.pred_classes = torch.randint(0, 80, size=(num_inst,))
- inst.scores = torch.rand(num_inst)
- inst.pred_boxes = torch.from_numpy(boxes)
- inst.pred_masks = torch.from_numpy(np.asarray(masks))
-
- v = Visualizer(img)
- v.draw_instance_predictions(inst)
-
- v = Visualizer(img, self.metadata)
- v.draw_instance_predictions(inst)
-
- def test_BWmode_nomask(self):
- img, boxes, _, _, masks = self._random_data()
- num_inst = len(boxes)
- inst = Instances((img.shape[0], img.shape[1]))
- inst.pred_classes = torch.randint(0, 80, size=(num_inst,))
- inst.scores = torch.rand(num_inst)
- inst.pred_boxes = torch.from_numpy(boxes)
-
- v = Visualizer(img, self.metadata, instance_mode=ColorMode.IMAGE_BW)
- v.draw_instance_predictions(inst)
-
- # check that output is grayscale
- inst = inst[:0]
- v = Visualizer(img, self.metadata, instance_mode=ColorMode.IMAGE_BW)
- output = v.draw_instance_predictions(inst).get_image()
- self.assertTrue(np.allclose(output[:, :, 0], output[:, :, 1]))
- self.assertTrue(np.allclose(output[:, :, 0], output[:, :, 2]))
-
- def test_draw_empty_mask_predictions(self):
- img, boxes, _, _, masks = self._random_data()
- num_inst = len(boxes)
- inst = Instances((img.shape[0], img.shape[1]))
- inst.pred_classes = torch.randint(0, 80, size=(num_inst,))
- inst.scores = torch.rand(num_inst)
- inst.pred_boxes = torch.from_numpy(boxes)
- inst.pred_masks = torch.from_numpy(np.zeros_like(np.asarray(masks)))
-
- v = Visualizer(img, self.metadata)
- v.draw_instance_predictions(inst)
-
- def test_correct_output_shape(self):
- img = np.random.rand(928, 928, 3) * 255
- v = Visualizer(img, self.metadata)
- out = v.output.get_image()
- self.assertEqual(out.shape, img.shape)
-
- def test_overlay_rotated_instances(self):
- H, W = 100, 150
- img = np.random.rand(H, W, 3) * 255
- num_boxes = 50
- boxes_5d = torch.zeros(num_boxes, 5)
- boxes_5d[:, 0] = torch.FloatTensor(num_boxes).uniform_(-0.1 * W, 1.1 * W)
- boxes_5d[:, 1] = torch.FloatTensor(num_boxes).uniform_(-0.1 * H, 1.1 * H)
- boxes_5d[:, 2] = torch.FloatTensor(num_boxes).uniform_(0, max(W, H))
- boxes_5d[:, 3] = torch.FloatTensor(num_boxes).uniform_(0, max(W, H))
- boxes_5d[:, 4] = torch.FloatTensor(num_boxes).uniform_(-1800, 1800)
- rotated_boxes = RotatedBoxes(boxes_5d)
- labels = [str(i) for i in range(num_boxes)]
-
- v = Visualizer(img, self.metadata)
- output = v.overlay_instances(boxes=rotated_boxes, labels=labels).get_image()
- self.assertEqual(output.shape, img.shape)
-
- def test_draw_no_metadata(self):
- img, boxes, _, _, masks = self._random_data()
- num_inst = len(boxes)
- inst = Instances((img.shape[0], img.shape[1]))
- inst.pred_classes = torch.randint(0, 80, size=(num_inst,))
- inst.scores = torch.rand(num_inst)
- inst.pred_boxes = torch.from_numpy(boxes)
- inst.pred_masks = torch.from_numpy(np.asarray(masks))
-
- v = Visualizer(img, MetadataCatalog.get("asdfasdf"))
- v.draw_instance_predictions(inst)
-
- def test_draw_binary_mask(self):
- img, boxes, _, _, masks = self._random_data()
- img[:, :, 0] = 0 # remove red color
- mask = masks[0]
- mask_with_hole = np.zeros_like(mask).astype("uint8")
- mask_with_hole = cv2.rectangle(mask_with_hole, (10, 10), (50, 50), 1, 5)
-
- for m in [mask, mask_with_hole]:
- for save in [True, False]:
- v = Visualizer(img)
- o = v.draw_binary_mask(m, color="red", text="test")
- if save:
- with tempfile.TemporaryDirectory(prefix="detectron2_viz") as d:
- path = os.path.join(d, "output.png")
- o.save(path)
- o = cv2.imread(path)[:, :, ::-1]
- else:
- o = o.get_image().astype("float32")
- # red color is drawn on the image
- self.assertTrue(o[:, :, 0].sum() > 0)
-
- def test_draw_soft_mask(self):
- img = np.random.rand(100, 100, 3) * 255
- img[:, :, 0] = 0 # remove red color
- mask = np.zeros((100, 100), dtype=np.float32)
- mask[30:50, 40:50] = 1.0
- cv2.GaussianBlur(mask, (21, 21), 10)
-
- v = Visualizer(img)
- o = v.draw_soft_mask(mask, color="red", text="test")
- o = o.get_image().astype("float32")
- # red color is drawn on the image
- self.assertTrue(o[:, :, 0].sum() > 0)
-
- # test draw empty mask
- v = Visualizer(img)
- o = v.draw_soft_mask(np.zeros((100, 100), dtype=np.float32), color="red", text="test")
- o = o.get_image().astype("float32")
-
- def test_border_mask_with_holes(self):
- H, W = 200, 200
- img = np.zeros((H, W, 3))
- img[:, :, 0] = 255.0
- v = Visualizer(img, scale=3)
-
- mask = np.zeros((H, W))
- mask[:, 100:150] = 1
- # create a hole, to trigger imshow
- mask = cv2.rectangle(mask, (110, 110), (130, 130), 0, thickness=-1)
- output = v.draw_binary_mask(mask, color="blue")
- output = output.get_image()[:, :, ::-1]
-
- first_row = {tuple(x.tolist()) for x in output[0]}
- last_row = {tuple(x.tolist()) for x in output[-1]}
- # Check quantization / off-by-1 error: the first and last row must have two colors
- self.assertEqual(len(last_row), 2)
- self.assertEqual(len(first_row), 2)
- self.assertIn((0, 0, 255), last_row)
- self.assertIn((0, 0, 255), first_row)
-
- def test_border_polygons(self):
- H, W = 200, 200
- img = np.zeros((H, W, 3))
- img[:, :, 0] = 255.0
- v = Visualizer(img, scale=3)
- mask = np.zeros((H, W))
- mask[:, 100:150] = 1
-
- output = v.draw_binary_mask(mask, color="blue")
- output = output.get_image()[:, :, ::-1]
-
- first_row = {tuple(x.tolist()) for x in output[0]}
- last_row = {tuple(x.tolist()) for x in output[-1]}
- # Check quantization / off-by-1 error:
- # the first and last row must have >=2 colors, because the polygon
- # touches both rows
- self.assertGreaterEqual(len(last_row), 2)
- self.assertGreaterEqual(len(first_row), 2)
- self.assertIn((0, 0, 255), last_row)
- self.assertIn((0, 0, 255), first_row)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/Benson/text-generation/Examples/Casa Flip Mster Apk.md b/spaces/Benson/text-generation/Examples/Casa Flip Mster Apk.md
deleted file mode 100644
index c8c3105efa259e4e45f0808711537e691394cf48..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Casa Flip Mster Apk.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
How to play DIY Makeup Makyaj Oyunu APK on your Android device
-
Do you love makeup and want to create your own products? Do you want to have fun and express your creativity with different colors and ingredients? Do you want to learn how to make natural, eco-friendly cosmetics at home? If you answered yes to any of these questions, you should try DIY Makeup Makyaj Oyunu APK.
-
DIY Makeup Makyaj Oyunu APK is a simulation game that lets you mix crayons, coconut oil, glycerin, honey, and other kitchen items to make stunning lipstick, beautiful eye art, mascara, face masks, and more. You can customize your look with different shades, shapes, styles, and accessories, and share your creations with other players online.
In this article, we will show you how to download and install DIY Makeup Makyaj Oyunu APK on your Android device, how to play the game and create your own makeup products, and what the benefits of playing it are.
-
How to download and install DIY Makeup Makyaj Oyunu APK on your Android device
-
To play DIY Makeup Makyaj Oyunu APK, you need to download and install the APK file on your Android device. APK stands for Android Package Kit, a file format that contains an app's code, resources, and metadata. Installing an APK file is also known as sideloading, which means installing an app from a source other than the official Google Play Store.
-
Before downloading and installing DIY Makeup Makyaj Oyunu APK, make sure your device meets the following requirements:
-
-
Your device must run Android 4.4 or higher.
-
Your device must have at least 100 MB of free storage space.
-
Your device must allow the installation of apps from unknown sources. To enable this option, go to Settings > Security > Unknown sources and switch it on.
-
Once your device meets these requirements, follow these steps to download and install the app:
-
Go to the official DIY Makeup Makyaj Oyunu APK website and click the Download button.
-
Wait for the download to finish, then open the APK file from your device's file manager or notification bar.
-
Tap Install and follow the on-screen instructions to complete the installation.
-
Launch the app from the app drawer or the home screen and enjoy playing DIY Makeup Makyaj Oyunu APK.
-
-
How to play DIY Makeup Makyaj Oyunu APK and create your own makeup products
-
DIY Makeup Makyaj Oyunu APK is a game that lets you unleash your creativity and imagination by making your own makeup products. You can choose from different categories, such as lipstick, mascara, eye art, and face masks, and use various ingredients and tools to create your own unique look.
-
To play DIY Makeup Makyaj Oyunu APK, follow these steps:
-
How to make lipstick
-
Select the category you want to create. For example, to make lipstick, tap the lipstick icon on the main screen.
-
Pick a crayon color to use as the base of your lipstick. You can also mix different crayons to create new colors.
-
Melt the crayon in a microwave or on a stove. Be careful not to overheat or burn it.
-
Add some coconut oil, glycerin, honey, or other ingredients to make your lipstick smooth and moisturizing. You can also add glitter, fragrance, or flavor to make it more fun and appealing.
-
Pour the mixture into a mold and freeze it for a few minutes until it solidifies.
-
Take your lipstick out of the mold and apply it to your lips. You can also use a brush or a sponge to blend it better.
-
Customize your look with different shapes, styles, and accessories. You can also change your hair, eyes, skin tone, and more to match your lipstick.
-
-
-
How to make mascara
-
If you want to make mascara, follow these steps:
-
Select the category you want to create. For example, to make mascara, tap the mascara icon on the main screen.
-
Pick a charcoal color to use as the base of your mascara. You can also mix different charcoals to create new colors.
-
Grind the charcoal with a mortar and pestle until it becomes a fine powder.
-
Add some coconut oil, glycerin, honey, or other ingredients to make your mascara smooth and moisturizing. You can also add glitter, fragrance, or flavor to make it more fun and appealing.
-
Pour the mixture into a container with a wand applicator. You can also reuse an old mascara tube that you have cleaned and disinfected.
-
Apply the mascara to your lashes with the wand applicator. You can also use an eyelash curler or a comb to shape your lashes better.
-
Customize your look with different shapes, styles, and accessories. You can also change your hair, eyes, skin tone, and more to match your mascara.
-
Share your creation with other players online by taking a selfie or a video. You can also rate and comment on other players' creations.
-
-
How to make eye art
-
If you want to make eye art, follow these steps:
-
Select the category you want to create. For example, to make eye art, tap the eye art icon on the main screen.
-
Pick a crayon color to use as the base of your eye art. You can also mix different crayons to create new colors.
-
Melt the crayon in a microwave or on a stove. Be careful not to overheat or burn it.
-
Add some coconut oil, glycerin, honey, or other ingredients to make your eye art smooth and moisturizing. You can also add glitter, fragrance, or flavor to make it more fun and appealing.
-
Pour the mixture into a mold and freeze it for a few minutes until it solidifies.
-
Take your eye art out of the mold and apply it to your eyelids. You can also use a brush or a sponge to blend it better.
-
Customize your look with different shapes, styles, and accessories. You can also change your hair, eyes, skin tone, and more to match your eye art.
-
Share your creation with other players online by taking a selfie or a video. You can also rate and comment on other players' creations.
-
-
How to make a face mask
-
If you want to make a face mask, follow these steps:
-
Select the category you want to create. For example, to make a face mask, tap the face mask icon on the main screen.
-
Pick a fruit or a vegetable to use as the base of your face mask. You can also mix different fruits or vegetables to create new combinations.
-
Peel and cut the fruit or vegetable into small pieces. You can also use a blender or a food processor to purée it.
-
Add some yogurt, honey, oatmeal, or other ingredients to make your face mask smooth and nourishing. You can also add essential oils, herbs, or spices to make it more aromatic and relaxing.
-
Pour the mixture into a container and refrigerate it for a few minutes until it cools.
-
Apply the face mask to your face and neck with your fingers or a spatula. Avoid the eye and mouth areas.
-
Relax and let the mask work its magic for 15 to 20 minutes.
-
Rinse the face mask off with warm water and pat your skin dry with a towel.
-
Enjoy your soft, glowing skin. You can also share your creation with other players online by taking a selfie or a video, and rate and comment on other players' creations.
-
-
What are the benefits of playing DIY Makeup Makyaj Oyunu APK?
-
Playing DIY Makeup Makyaj Oyunu APK is not only fun and creative; it also has many benefits for you and for the environment. Here are some of them:
-
It is fun and creative
-
Playing DIY Makeup Makyaj Oyunu APK lets you express your creativity and imagination by making your own makeup products. You can experiment with different colors, ingredients, tools, and styles to create your own unique look, and have fun sharing your creations with other players online and reading their feedback. Playing this game can also help you relax and reduce stress by focusing on something enjoyable and rewarding.
-
It is safe, natural, and eco-friendly
-
Playing DIY Makeup Makyaj Oyunu APK lets you make your own makeup products from natural, eco-friendly ingredients you can find in your kitchen. You don't have to worry about harmful chemicals, preservatives, or additives that could cause allergic reactions or damage your skin, nor about plastic packaging or waste that could pollute the environment. Playing this game can also help you save money and resources by making your own makeup products at home.
-
It is educational, inspiring, and empowering
-
Playing DIY Makeup Makyaj Oyunu APK lets you learn new skills and knowledge by making your own makeup products. You can learn about the properties and benefits of different ingredients, how they interact with each other, how they affect your skin, and the techniques and tools professional makeup artists use. Playing this game can inspire you to explore more possibilities for your makeup products and empower you to take control of your beauty and health by making your own choices and decisions.
-
Conclusion
-
DIY Makeup Makyaj Oyunu APK is a game that lets you have fun and express your creativity with different colors and ingredients. You can make your own lipstick, mascara, eye art, face masks, and more, customize your look with different shapes, styles, and accessories, and share your creations with other players online to see their feedback.
-
Playing DIY Makeup Makyaj Oyunu APK is not only fun and creative but also safe, natural, and eco-friendly: you can make your own makeup products from natural, eco-friendly kitchen ingredients, with no harmful chemicals, preservatives, or additives that could cause allergic reactions or damage your skin, and no plastic packaging or waste to pollute the environment.
-
Playing DIY Makeup Makyaj Oyunu APK is also educational, inspiring, and empowering: you learn about the properties of different ingredients, how they interact, how they affect your skin, and the techniques and tools professional makeup artists use, which can inspire you to explore more options and to take control of your beauty and health through your own choices and decisions.
-
So what are you waiting for? Download DIY Makeup Makyaj Oyunu APK today and start making your own makeup products. Have fun and be creative!
-
Frequently asked questions
-
Here are some frequently asked questions about DIY Makeup Makyaj Oyunu APK:
-
Q: Is DIY Makeup Makyaj Oyunu APK free to play?
-
A: Yes, DIY Makeup Makyaj Oyunu APK is free to play. However, it may contain ads and in-app purchases that require real money.
-
Q: Is DIY Makeup Makyaj Oyunu APK safe to play?
-
-
Q: Is DIY Makeup Makyaj Oyunu APK suitable for children?
-
A: Yes, DIY Makeup Makyaj Oyunu APK is suitable for children. It is a simulation game with no violence, nudity, or inappropriate language, and it is educational and inspiring for children who love makeup and creativity.
-
Q: How can I contact the developer of DIY Makeup Makyaj Oyunu APK?
-
A: You can contact the developer of DIY Makeup Makyaj Oyunu APK by sending an email to diy.makeup.makyaj@gmail.com or by visiting their Facebook page at https://ww.facebook.com/diy.make.makupj/.
-
Q: How can I support the developer of DIY Makeup Makyaj Oyunu APK?
-
A: You can support the developer of DIY Makeup Makyaj Oyunu APK by rating and reviewing the app on the Google Play Store or other platforms, sharing the app with your friends and family, making in-app purchases if you like the app, or donating to the developer via PayPal or other methods.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Cristal Informe 32 Bit Para Espt Pph 21.md b/spaces/Benson/text-generation/Examples/Descargar Cristal Informe 32 Bit Para Espt Pph 21.md
deleted file mode 100644
index acb1fe15577589ef4216709eabade8b9d71745d3..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Cristal Informe 32 Bit Para Espt Pph 21.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
Download Crystal Report 32 Bit for ESPT PPh 21
-
If you use ESPT PPh 21, a software application created by Direktorat Jenderal Pajak (DJP) to simplify creating and filing SPT PPh 21 returns, you may need to download and install Crystal Report 32 Bit, a supporting application required to run ESPT PPh 21. In this article we explain what Crystal Report and ESPT PPh 21 are, why you need to download Crystal Report 32 Bit, and how to do it step by step.
-
What are Crystal Report and ESPT PPh 21?
-
Crystal Report
-
Crystal Report is a reporting tool used to design reports for both web and desktop environments. It is developed by SAP and integrates with Microsoft Visual Studio. It lets you build interactive, dynamic reports from various data sources such as databases, XML files, and web services, and customize the layout, format, and appearance of your reports with charts, tables, images, and more.
-
ESPT PPh 21
-
ESPT PPh 21 is an acronym for Elektronik Surat Pemberitahuan Pajak Penghasilan Pasal 21/26, the electronic notification letter for Income Tax Article 21/26. It is an application created by DJP to simplify creating and filing SPT PPh 21/26 returns. SPT PPh 21/26 is a tax return form that reports tax withheld on income paid to employees or other recipients. ESPT PPh 21 can be used by individual taxpayers, corporate taxpayers, treasurers, and withholding agents.
-
Why do you need to download Crystal Report 32 Bit?
-
If you try to run ESPT PPh 21 with a Crystal Report Runtime that does not match your operating system, you may see error messages such as "The type initializer for 'CrystalDecisions.CrystalReports.Engine.ReportDocument' threw an exception" or "Load report failed". These messages indicate a problem loading or displaying the reports generated by ESPT PPh 21. To fix it, download and install the Crystal Report Runtime that matches your operating system.
-
How to download Crystal Report 32 Bit
-
Step 1: Go to the official Direktorat Jenderal Pajak (DJP) website
-
The official DJP website is https://www.pajak.go.id. This is where you can find information and services related to taxation in Indonesia, and where you can download various tax-related applications and software, such as ESPT PPh 21 and the Crystal Report Runtime.
-
Step 2: Click Products A-Z and select C > CRYSTAL REPORTS > CRYSTAL REPORTS 2020
-
On the home page of the DJP website you will see a menu bar with options such as Home, About Us, Services, and Products A-Z. Click Products A-Z and you will see a list of products and software available for download. Scroll down and find C > CRYSTAL REPORTS > CRYSTAL REPORTS 2020, click it, and you will be taken to a page with the details and features of Crystal Reports 2020.
-
Step 3: Select Installation and Upgrade > WINDOWS and download the file CRuntime_32bit_13_0_7.zip
-
-
How to install Crystal Report 32 Bit
-
Step 1: Unzip the downloaded file and double-click the .msi file
-
After downloading CRuntime_32bit_13_0_7.zip, unzip it with a tool such as WinRAR or 7-Zip: right-click the file and select Extract Here or Extract to CRuntime_32bit_13_0_7. You will see a folder with the same name as the file; open it and you will find a file named CRuntime_32bit_13_0_7.msi. This is the installer that puts Crystal Report Runtime 32 Bit on your computer. Double-click it and a window appears saying Welcome to the SAP Crystal Reports Runtime Engine for .NET Framework 4 Setup Wizard.
-
-
Step 2: Follow the setup wizard and accept the terms and conditions
-
The setup wizard guides you through the installation of Crystal Report Runtime 32 Bit. Click Next to move through the steps. When a window asks you to accept the license agreement, read the terms carefully, tick the box that says I accept the license agreement if you agree, and click Next. The next window shows the destination folder where Crystal Report Runtime 32 Bit will be installed; change it if you wish or keep the default, then click Next. On the Ready to Install screen, click Install to start the installation.
-
Step 3: Restart your computer and run the ESPT PPh 21 application
-
-
Conclusion
-
In this article we explained what Crystal Report and ESPT PPh 21 are, why you need to download Crystal Report 32 Bit, and how to do it step by step. We hope this article has been helpful and has solved your problem running ESPT PPh 21 on your computer. If you have any questions or comments, feel free to contact us or leave a comment below.
-
Frequently asked questions
-
-
What is SPT PPh 21/26?
-
SPT PPh 21/26 is a tax return form that reports tax withheld on income paid to employees or other recipients. It is filed by the employer or the withholding agent with the tax office every month or every year, depending on the type of income.
-
What is the difference between Crystal Report 32 Bit and 64 Bit?
-
Crystal Report 32 Bit and 64 Bit are different builds of the Crystal Report Runtime for different operating systems. Crystal Report 32 Bit is intended for 32-bit operating systems such as Windows XP, Windows Vista, or Windows 7, while Crystal Report 64 Bit is intended for 64-bit operating systems such as Windows 8, Windows 10, or Windows Server. Download and install the version that matches your operating system to avoid compatibility problems.
-
How can I check whether my operating system is 32-bit or 64-bit?
-
You can check the system type by following these steps:
-
Click the Start button and type System Information in the search box.
-
Click System Information in the list of results.
-
Look for System Type under System Summary.
-
If it says x86-based PC, your operating system is 32-bit; if it says x64-based PC, it is 64-bit. A short Python sketch is given after this answer as an alternative, programmatic check.
-
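If you prefer a programmatic check, the short Python sketch below (an editorial example, not part of the original guide) reports the machine architecture and the bitness of the running Python interpreter using only the standard library:
-
import platform
bits, _ = platform.architecture()  # bitness of this Python build, e.g. '64bit' or '32bit'
machine = platform.machine()       # CPU architecture, e.g. 'AMD64' or 'x86'
print("Machine:", machine, "- Python build:", bits)
-
On a 64-bit Windows installation platform.machine() usually reports 'AMD64', while 'x86' indicates a 32-bit system; pick the matching Crystal Report Runtime accordingly.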
-
-
Can I use ESPT PPh 21 without the Crystal Report Runtime?
-
-
Where can I find more information about Crystal Report and ESPT PPh 21?
-
You can find more information about Crystal Report and ESPT PPh 21 from the following sources:
diff --git a/spaces/MVV/3dTopDenoising/models/model.py b/spaces/MVV/3dTopDenoising/models/model.py
deleted file mode 100644
index 3abb96aaafa8ec07bf9009ef796557162f17856f..0000000000000000000000000000000000000000
--- a/spaces/MVV/3dTopDenoising/models/model.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import random
-from statistics import mean
-from typing import List, Tuple
-
-import torch as th
-import pytorch_lightning as pl
-from jaxtyping import Float, Int
-import numpy as np
-from torch_geometric.nn.conv import GATv2Conv
-
-from models.SAP.dpsr import DPSR
-from models.SAP.model import PSR2Mesh
-
-# Constants
-
-th.manual_seed(0)
-np.random.seed(0)
-
-BATCH_SIZE = 1 # BS
-
-IN_DIM = 1
-OUT_DIM = 1
-LATENT_DIM = 32
-
-DROPOUT_PROB = 0.1
-GRID_SIZE = 128
-LR = 1e-3  # assumed learning rate; LR is used in configure_optimizers but was not defined in this file
-
-def generate_grid_edge_list(gs: int = 128):
- grid_edge_list = []
-
- for k in range(gs):
- for j in range(gs):
- for i in range(gs):
- current_idx = i + gs*j + k*gs*gs
- if (i - 1) >= 0:
- grid_edge_list.append([current_idx, i-1 + gs*j + k*gs*gs])
- if (i + 1) < gs:
- grid_edge_list.append([current_idx, i+1 + gs*j + k*gs*gs])
- if (j - 1) >= 0:
- grid_edge_list.append([current_idx, i + gs*(j-1) + k*gs*gs])
- if (j + 1) < gs:
- grid_edge_list.append([current_idx, i + gs*(j+1) + k*gs*gs])
- if (k - 1) >= 0:
- grid_edge_list.append([current_idx, i + gs*j + (k-1)*gs*gs])
- if (k + 1) < gs:
- grid_edge_list.append([current_idx, i + gs*j + (k+1)*gs*gs])
- return grid_edge_list
-
-GRID_EDGE_LIST = generate_grid_edge_list(GRID_SIZE)
-GRID_EDGE_LIST = th.tensor(GRID_EDGE_LIST, dtype=th.int)
-GRID_EDGE_LIST = GRID_EDGE_LIST.T
-# GRID_EDGE_LIST = GRID_EDGE_LIST.to(th.device("cuda"))
-GRID_EDGE_LIST.requires_grad = False # Do not forget to delete it if train
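-# generate_grid_edge_list builds the directed 6-neighbourhood of a gs^3 voxel
-# grid: every voxel is linked to each in-bounds axis neighbour, so e.g.
-# generate_grid_edge_list(2) yields 8 voxels * 3 neighbours = 24 directed edges.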
-
-
-class FormOptimizer(th.nn.Module):
- def __init__(self) -> None:
- super().__init__()
-
- layers = []
-
- self.gconv1 = GATv2Conv(in_channels=IN_DIM, out_channels=LATENT_DIM, heads=1, dropout=DROPOUT_PROB)
- self.gconv2 = GATv2Conv(in_channels=LATENT_DIM, out_channels=LATENT_DIM, heads=1, dropout=DROPOUT_PROB)
-
- self.actv = th.nn.Sigmoid()
- self.head = th.nn.Linear(in_features=LATENT_DIM, out_features=OUT_DIM)
-
- def forward(self,
- field: Float[th.Tensor, "GS GS GS"]) -> Float[th.Tensor, "GS GS GS"]:
- """
- Args:
- field (Tensor [GS, GS, GS]): vertices and normals tensor.
- """
- vertex_features = field.clone()
- vertex_features = vertex_features.reshape(GRID_SIZE*GRID_SIZE*GRID_SIZE, IN_DIM)
-
- vertex_features = self.gconv1(x=vertex_features, edge_index=GRID_EDGE_LIST)
- vertex_features = self.gconv2(x=vertex_features, edge_index=GRID_EDGE_LIST)
- field_delta = self.head(self.actv(vertex_features))
-
- field_delta = field_delta.reshape(BATCH_SIZE, GRID_SIZE, GRID_SIZE, GRID_SIZE)
- field_delta += field # field_delta carries the gradient
- field_delta = th.clamp(field_delta, min=-0.5, max=0.5)
-
- return field_delta
-
-class Model(pl.LightningModule):
- def __init__(self):
- super().__init__()
- self.form_optimizer = FormOptimizer()
-
- self.dpsr = DPSR([GRID_SIZE, GRID_SIZE, GRID_SIZE], sig=0.0)
- self.field2mesh = PSR2Mesh().apply
-
- self.metric = th.nn.MSELoss()
-
- self.val_losses = []
- self.train_losses = []
-
- def log_h5(self, points, normals):
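- # Note: assumes self.log_points_file and self.log_normals_file (open h5py
- # files) plus the integer counter self.h5_frame are set up elsewhere before
- # this helper is called; they are not initialised in __init__.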
- dset = self.log_points_file.create_dataset(
- name=str(self.h5_frame),
- shape=points.shape,
- dtype=np.float16,
- compression="gzip")
- dset[:] = points
- dset = self.log_normals_file.create_dataset(
- name=str(self.h5_frame),
- shape=normals.shape,
- dtype=np.float16,
- compression="gzip")
- dset[:] = normals
- self.h5_frame += 1
-
- def forward(self,
- v: Float[th.Tensor, "BS N 3"],
- n: Float[th.Tensor, "BS N 3"]) -> Tuple[Float[th.Tensor, "BS N 3"], # v - vertices
- Int[th.Tensor, "2 E"], # f - faces
- Float[th.Tensor, "BS N 3"], # n - vertices normals
- Float[th.Tensor, "BS GR GR GR"]]: # field:
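- # Pipeline sketch: DPSR (imported from models.SAP, the Shape-As-Points code)
- # turns the oriented point cloud (v, n) into an indicator field, the graph
- # network refines that field, and PSR2Mesh extracts vertices/faces/normals
- # from it.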
- field = self.dpsr(v, n)
- field = self.form_optimizer(field)
- v, f, n = self.field2mesh(field)
- return v, f, n, field
-
- def training_step(self, batch, batch_idx) -> Float[th.Tensor, "1"]:
- vertices, vertices_normals, vertices_gt, vertices_normals_gt, field_gt, adj = batch
-
- mask = th.rand((vertices.shape[1], ), device=th.device("cuda")) < (random.random() / 2.0 + 0.5)
- vertices = vertices[:, mask]
- vertices_normals = vertices_normals[:, mask]
-
- vr, fr, nr, field_r = self(vertices, vertices_normals)
-
- loss = self.metric(field_r, field_gt)
- train_per_step_loss = loss.item()
- self.train_losses.append(train_per_step_loss)
-
- return loss
-
- def on_train_epoch_end(self):
- mean_train_per_epoch_loss = mean(self.train_losses)
- self.log("mean_train_per_epoch_loss", mean_train_per_epoch_loss, on_step=False, on_epoch=True)
- self.train_losses = []
-
- def validation_step(self, batch, batch_idx):
- vertices, vertices_normals, vertices_gt, vertices_normals_gt, field_gt, adj = batch
-
- vr, fr, nr, field_r = self(vertices, vertices_normals)
-
- loss = self.metric(field_r, field_gt)
- val_per_step_loss = loss.item()
- self.val_losses.append(val_per_step_loss)
- return loss
-
- def on_validation_epoch_end(self):
- mean_val_per_epoch_loss = mean(self.val_losses)
- self.log("mean_val_per_epoch_loss", mean_val_per_epoch_loss, on_step=False, on_epoch=True)
- self.val_losses = []
-
- def configure_optimizers(self):
- optimizer = th.optim.Adam(self.parameters(), lr=LR)
- scheduler = th.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5, factor=0.5)
-
- return {
- "optimizer": optimizer,
- "lr_scheduler": {
- "scheduler": scheduler,
- "monitor": "mean_val_per_epoch_loss",
- "interval": "epoch",
- "frequency": 1,
- # If set to `True`, will enforce that the value specified 'monitor'
- # is available when the scheduler is updated, thus stopping
- # training if not found. If set to `False`, it will only produce a warning
- "strict": True,
- # If using the `LearningRateMonitor` callback to monitor the
- # learning rate progress, this keyword can be used to specify
- # a custom logged name
- "name": None,
- }
- }
diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/signal.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/signal.py
deleted file mode 100644
index 04ac4a9c246b133f113fa34741c0cc19ae118988..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/signal.py
+++ /dev/null
@@ -1,1502 +0,0 @@
-# encoding: utf-8
-# pylint: disable=no-member
-# pylint: disable=invalid-name
-# pylint: disable=too-many-arguments
-"""
-This module contains basic signal processing functionality.
-
-"""
-
-from __future__ import absolute_import, division, print_function
-
-import warnings
-import numpy as np
-
-from ..processors import BufferProcessor, Processor
-from ..utils import integer_types
-
-
-# signal functions
-def smooth(signal, kernel):
- """
- Smooth the signal along its first axis.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be smoothed.
- kernel : numpy array or int
- Smoothing kernel (size).
-
- Returns
- -------
- numpy array
- Smoothed signal.
-
- Notes
- -----
- If `kernel` is an integer, a Hamming window of that length will be used
- as a smoothing kernel.
-
- """
- # check if a kernel is given
- if kernel is None:
- return signal
- # size for the smoothing kernel is given
- elif isinstance(kernel, integer_types):
- if kernel == 0:
- return signal
- elif kernel > 1:
- # use a Hamming window of given length
- kernel = np.hamming(kernel)
- else:
- raise ValueError("can't create a smoothing kernel of size %d" %
- kernel)
- # otherwise use the given smoothing kernel directly
- elif isinstance(kernel, np.ndarray):
- kernel = kernel
- else:
- raise ValueError("can't smooth signal with %s" % kernel)
- # convolve with the kernel and return
- if signal.ndim == 1:
- return np.convolve(signal, kernel, 'same')
- elif signal.ndim == 2:
- from scipy.signal import convolve2d
- return convolve2d(signal, kernel[:, np.newaxis], 'same')
- else:
- raise ValueError('signal must be either 1D or 2D')
-
-
-def adjust_gain(signal, gain):
- """
- Adjust the gain of the signal.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be adjusted.
- gain : float
- Gain adjustment level [dB].
-
- Returns
- -------
- numpy array
- Signal with adjusted gain.
-
- Notes
- -----
- The signal is returned with the same dtype, thus rounding errors may occur
- with integer dtypes.
-
- `gain` values > 0 amplify the signal and are only supported for signals
- with float dtype to prevent clipping and integer overflows.
-
- """
- # convert the gain in dB to a scaling factor
- gain = np.power(np.sqrt(10.), 0.1 * gain)
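- # (np.power(np.sqrt(10.), 0.1 * gain) is equivalent to 10 ** (gain / 20.),
- # the standard amplitude/dB conversion)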
- # prevent overflow and clipping
- if gain > 1 and np.issubdtype(signal.dtype, np.integer):
- raise ValueError('positive gain adjustments are only supported for '
- 'float dtypes.')
- # Note: np.asanyarray returns the signal's ndarray subclass
- return np.asanyarray(signal * gain, dtype=signal.dtype)
-
-
-def attenuate(signal, attenuation):
- """
- Attenuate the signal.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be attenuated.
- attenuation : float
- Attenuation level [dB].
-
- Returns
- -------
- numpy array
- Attenuated signal (same dtype as `signal`).
-
- Notes
- -----
- The signal is returned with the same dtype, thus rounding errors may occur
- with integer dtypes.
-
- """
- # return the signal unaltered if no attenuation is given
- if attenuation == 0:
- return signal
- return adjust_gain(signal, -attenuation)
-
-
-def normalize(signal):
- """
- Normalize the signal to have maximum amplitude.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be normalized.
-
- Returns
- -------
- numpy array
- Normalized signal.
-
- Notes
- -----
- Signals with float dtypes cover the range [-1, +1], signals with integer
- dtypes will cover the maximally possible range, e.g. [-32768, 32767] for
- np.int16.
-
- The signal is returned with the same dtype, thus rounding errors may occur
- with integer dtypes.
-
- """
- # scaling factor to be applied
- scaling = float(np.max(np.abs(signal)))
- if np.issubdtype(signal.dtype, np.integer):
- if signal.dtype in (np.int16, np.int32):
- scaling /= np.iinfo(signal.dtype).max
- else:
- raise ValueError('only float and np.int16/32 dtypes supported, '
- 'not %s.' % signal.dtype)
- # Note: np.asanyarray returns the signal's ndarray subclass
- return np.asanyarray(signal / scaling, dtype=signal.dtype)
-
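-# Illustrative example: normalizing an int16 signal whose largest magnitude is
-# 1000 rescales it so that this peak becomes 32767 (np.iinfo(np.int16).max);
-# float signals are rescaled so that the peak magnitude becomes 1.0.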
-
-def remix(signal, num_channels, channel=None):
- """
- Remix the signal to have the desired number of channels.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be remixed.
- num_channels : int
- Number of channels.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
-
- Returns
- -------
- numpy array
- Remixed signal (same dtype as `signal`).
-
- Notes
- -----
- This function does not support arbitrary channel number conversions.
- Only down-mixing to and up-mixing from mono signals is supported.
-
- The signal is returned with the same dtype, thus rounding errors may occur
- with integer dtypes.
-
- If the signal should be down-mixed to mono and has an integer dtype, it
- will be converted to float internally and then back to the original dtype
- to prevent clipping of the signal. To avoid this double conversion,
- convert the dtype first.
-
- """
- if num_channels == signal.ndim or num_channels is None:
- # return as many channels as there are.
- return signal
- elif num_channels == 1 and signal.ndim > 1:
- if channel is None:
- # down-mix to mono
- # Note: to prevent clipping, the signal is converted to float first
- # and then converted back to the original dtype
- # TODO: add weighted mixing
- return np.mean(signal, axis=-1).astype(signal.dtype)
- else:
- # Use the requested channel verbatim
- return signal[:, channel]
- elif num_channels > 1 and signal.ndim == 1:
- # up-mix a mono signal simply by copying channels
- return np.tile(signal[:, np.newaxis], num_channels)
- else:
- # any other channel conversion is not supported
- raise NotImplementedError("Requested %d channels, but got %d channels "
- "and channel conversion is not implemented."
- % (num_channels, signal.shape[1]))
-
-
-def resample(signal, sample_rate, **kwargs):
- """
- Resample the signal.
-
- Parameters
- ----------
- signal : numpy array or Signal
- Signal to be resampled.
- sample_rate : int
- Sample rate of the signal.
- kwargs : dict, optional
- Keyword arguments passed to :func:`load_ffmpeg_file`.
-
- Returns
- -------
- numpy array or Signal
- Resampled signal.
-
- Notes
- -----
- This function uses ``ffmpeg`` to resample the signal.
-
- """
- from ..io.audio import load_ffmpeg_file
- # is the given signal a Signal?
- if not isinstance(signal, Signal):
- raise ValueError('only Signals can resampled, not %s' % type(signal))
- if signal.sample_rate == sample_rate:
- return signal
- # per default use the signal's dtype and num_channels
- dtype = kwargs.get('dtype', signal.dtype)
- num_channels = kwargs.get('num_channels', signal.num_channels)
- # resample the signal
- signal, sample_rate = load_ffmpeg_file(signal, sample_rate=sample_rate,
- num_channels=num_channels,
- dtype=dtype)
- # return it
- return Signal(signal, sample_rate=sample_rate)
-
-
-def rescale(signal, dtype=np.float32):
- """
- Rescale the signal to range [-1, 1] and return as float dtype.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be remixed.
- dtype : numpy dtype
- Data type of the signal.
-
- Returns
- -------
- numpy array
- Signal rescaled to range [-1, 1].
-
- """
- # allow only float dtypes
- if not np.issubdtype(dtype, np.floating):
- raise ValueError('only float dtypes are supported, not %s.' % dtype)
- # float signals don't need rescaling
- if np.issubdtype(signal.dtype, np.floating):
- return signal.astype(dtype)
- elif np.issubdtype(signal.dtype, np.integer):
- return signal.astype(dtype) / np.iinfo(signal.dtype).max
- else:
- raise ValueError('unsupported signal dtype: %s.' % signal.dtype)
-
-
-def trim(signal, where='fb'):
- """
- Trim leading and trailing zeros of the signal.
-
- Parameters
- ----------
- signal : numpy array
- Signal to be trimmed.
- where : str, optional
- A string with 'f' representing trim from front and 'b' to trim from
- back. Default is 'fb', trim zeros from both ends of the signal.
-
- Returns
- -------
- numpy array
- Trimmed signal.
-
- """
- # code borrowed from np.trim_zeros()
- first = 0
- where = where.upper()
- if 'F' in where:
- for i in signal:
- if np.sum(i) != 0.:
- break
- else:
- first += 1
- last = len(signal)
- if 'B' in where:
- for i in signal[::-1]:
- if np.sum(i) != 0.:
- break
- else:
- last -= 1
- return signal[first:last]
-
-
-def energy(signal):
- """
- Compute the energy of a (framed) signal.
-
- Parameters
- ----------
- signal : numpy array
- Signal.
-
- Returns
- -------
- energy : float
- Energy of the signal.
-
- Notes
- -----
- If `signal` is a `FramedSignal`, the energy is computed for each frame
- individually.
-
- """
- # compute the energy for every frame of the signal
- if isinstance(signal, FramedSignal):
- return np.array([energy(frame) for frame in signal])
- # make sure the signal is a numpy array
- if not isinstance(signal, np.ndarray):
- raise TypeError("Invalid type for signal, must be a numpy array.")
- # take the abs if the signal is complex
- if np.iscomplex(signal).any():
- signal = np.abs(signal)
- # Note: type conversion needed because of integer overflows
- if signal.dtype != np.float64:
- signal = signal.astype(np.float64)
- # return energy
- return np.dot(signal.flatten(), signal.flatten())
-
-
-def root_mean_square(signal):
- """
- Compute the root mean square of a (framed) signal. This can be used as a
- measurement of power.
-
- Parameters
- ----------
- signal : numpy array
- Signal.
-
- Returns
- -------
- rms : float
- Root mean square of the signal.
-
- Notes
- -----
- If `signal` is a `FramedSignal`, the root mean square is computed for each
- frame individually.
-
- """
- # compute the root mean square for every frame of the signal
- if isinstance(signal, FramedSignal):
- return np.array([root_mean_square(frame) for frame in signal])
- return np.sqrt(energy(signal) / signal.size)
-
-
-def sound_pressure_level(signal, p_ref=None):
- """
- Compute the sound pressure level of a (framed) signal.
-
- Parameters
- ----------
- signal : numpy array
- Signal.
- p_ref : float, optional
- Reference sound pressure level; if 'None', take the max amplitude
- value for the data-type, if the data-type is float, assume amplitudes
- are between -1 and +1.
-
- Returns
- -------
- spl : float
- Sound pressure level of the signal [dB].
-
- Notes
- -----
- From http://en.wikipedia.org/wiki/Sound_pressure: Sound pressure level
- (SPL) or sound level is a logarithmic measure of the effective sound
- pressure of a sound relative to a reference value. It is measured in
- decibels (dB) above a standard reference level.
-
- If `signal` is a `FramedSignal`, the sound pressure level is computed for
- each frame individually.
-
- """
- # compute the sound pressure level for every frame of the signal
- if isinstance(signal, FramedSignal):
- return np.array([sound_pressure_level(frame) for frame in signal])
- # compute the RMS
- rms = root_mean_square(signal)
- # find a reasonable default reference value if None is given
- if p_ref is None:
- if np.issubdtype(signal.dtype, np.integer):
- p_ref = float(np.iinfo(signal.dtype).max)
- else:
- p_ref = 1.0
- # normal SPL computation. ignore warnings when taking the log of 0,
- # then replace the resulting -inf values with the smallest finite number
- with np.errstate(divide='ignore'):
- return np.nan_to_num(20.0 * np.log10(rms / p_ref))
-
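-# Illustrative example: for float signals p_ref defaults to 1.0, so a
-# full-scale sine wave (RMS = 1/sqrt(2)) yields a sound pressure level of
-# roughly 20 * log10(0.707) = -3 dB.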
-
-# functions to load / write audio files
-class LoadAudioFileError(Exception):
- """
- Deprecated as of version 0.16. Please use
- madmom.io.audio.LoadAudioFileError instead. Will be removed in version
- 0.18.
-
- """
- # pylint: disable=super-init-not-called
-
- def __init__(self, value=None):
- warnings.warn(LoadAudioFileError.__doc__)
- if value is None:
- value = 'Could not load audio file.'
- self.value = value
-
-
-def load_wave_file(*args, **kwargs):
- """
- Deprecated as of version 0.16. Please use madmom.io.audio.load_wave_file
- instead. Will be removed in version 0.18.
-
- """
- warnings.warn('Deprecated as of version 0.16. Please use madmom.io.audio.'
- 'load_wave_file instead. Will be removed in version 0.18.')
- from ..io.audio import load_wave_file
- return load_wave_file(*args, **kwargs)
-
-
-def write_wave_file(*args, **kwargs):
- """
- Deprecated as of version 0.16. Please use madmom.io.audio.write_wave_file
- instead. Will be removed in version 0.18.
-
- """
- warnings.warn('Deprecated as of version 0.16. Please use madmom.io.audio.'
- 'write_wave_file instead. Will be removed in version 0.18.')
- from ..io.audio import write_wave_file
- return write_wave_file(*args, **kwargs)
-
-
-# function for automatically determining how to open audio files
-def load_audio_file(*args, **kwargs):
- """
- Deprecated as of version 0.16. Please use madmom.io.audio.load_audio_file
- instead. Will be removed in version 0.18.
-
- """
- warnings.warn('Deprecated as of version 0.16. Please use madmom.io.audio.'
- 'load_audio_file instead. Will be removed in version 0.18.')
- from ..io.audio import load_audio_file
- return load_audio_file(*args, **kwargs)
-
-
-# signal classes
-SAMPLE_RATE = None
-NUM_CHANNELS = None
-CHANNEL = None
-START = None
-STOP = None
-NORM = False
-GAIN = 0.
-DTYPE = None
-
-
-class Signal(np.ndarray):
- """
- The :class:`Signal` class represents a signal as a (memory-mapped) numpy
- array and enhances it with a number of attributes.
-
- Parameters
- ----------
- data : numpy array, str or file handle
- Signal data or file name or file handle.
- sample_rate : int, optional
- Desired sample rate of the signal [Hz], or 'None' to return the
- signal in its original rate.
- num_channels : int, optional
- Reduce or expand the signal to `num_channels` channels, or 'None'
- to return the signal with its original channels.
- channel : int, optional
- When reducing a signal to `num_channels` of 1, use this channel,
- or 'None' to return the average across all channels.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- norm : bool, optional
- Normalize the signal to maximum range of the data type.
- gain : float, optional
- Adjust the gain of the signal [dB].
- dtype : numpy data type, optional
- The data is returned with the given dtype. If 'None', it is returned
- with its original dtype, otherwise the signal gets rescaled. Integer
- dtypes use the complete value range, float dtypes the range [-1, +1].
-
- Notes
- -----
- `sample_rate` or `num_channels` can be used to set the desired sample rate
- and number of channels if the audio is read from file. If set to 'None'
- the audio signal is used as is, i.e. the sample rate and number of channels
- are determined directly from the audio file.
-
- If the `data` is a numpy array, the `sample_rate` is set to the given value
- and `num_channels` is set to the number of columns of the array.
-
- The `gain` can be used to adjust the level of the signal.
-
- If both `norm` and `gain` are set, the signal is first normalized and then
- the gain is applied afterwards.
-
- If `norm` or `gain` is set, the selected part of the signal is loaded into
- memory completely, i.e. .wav files are not memory-mapped any more.
-
- Examples
- --------
- Load a mono audio file:
-
- >>> sig = Signal('tests/data/audio/sample.wav')
- >>> sig
- Signal([-2494, -2510, ..., 655, 639], dtype=int16)
- >>> sig.sample_rate
- 44100
-
- Load a stereo audio file, down-mix it to mono:
-
- >>> sig = Signal('tests/data/audio/stereo_sample.flac', num_channels=1)
- >>> sig
- Signal([ 36, 36, ..., 524, 495], dtype=int16)
- >>> sig.num_channels
- 1
-
- Load and re-sample an audio file:
-
- >>> sig = Signal('tests/data/audio/sample.wav', sample_rate=22050)
- >>> sig
- Signal([-2470, -2553, ..., 517, 677], dtype=int16)
- >>> sig.sample_rate
- 22050
-
- Load an audio file with `float32` data type (i.e. rescale it to [-1, 1]):
-
- >>> sig = Signal('tests/data/audio/sample.wav', dtype=np.float32)
- >>> sig
- Signal([-0.07611, -0.0766 , ..., 0.01999, 0.0195 ], dtype=float32)
- >>> sig.dtype
- dtype('float32')
-
- """
- # pylint: disable=super-on-old-class
- # pylint: disable=super-init-not-called
- # pylint: disable=attribute-defined-outside-init
-
- def __init__(self, data, sample_rate=SAMPLE_RATE,
- num_channels=NUM_CHANNELS, channel=CHANNEL, start=START,
- stop=STOP, norm=NORM, gain=GAIN, dtype=DTYPE, **kwargs):
- # this method is for documentation purposes only
- pass
-
- def __new__(cls, data, sample_rate=SAMPLE_RATE, num_channels=NUM_CHANNELS,
- channel=CHANNEL, start=START, stop=STOP, norm=NORM, gain=GAIN,
- dtype=DTYPE, **kwargs):
- from ..io.audio import load_audio_file
- # try to load an audio file if the data is not a numpy array
- if not isinstance(data, np.ndarray):
- data, sample_rate = load_audio_file(data, sample_rate=sample_rate,
- num_channels=num_channels,
- start=start, stop=stop,
- dtype=dtype)
- # cast as Signal if needed
- if not isinstance(data, Signal):
- data = np.asarray(data).view(cls)
- data.sample_rate = sample_rate
- # remix to desired number of channels
- if num_channels:
- data = remix(data, num_channels, channel)
- # normalize signal if needed
- if norm:
- data = normalize(data)
- # adjust the gain if needed
- if gain is not None and gain != 0:
- data = adjust_gain(data, gain)
- # resample if needed
- if sample_rate != data.sample_rate:
- data = resample(data, sample_rate)
- # save start and stop position
- if start is not None:
- # FIXME: start and stop settings are not checked
- data.start = start
- data.stop = start + float(len(data)) / sample_rate
- # return the object
- return data
-
- def __array_finalize__(self, obj):
- if obj is None:
- return
- # set default values here, also needed for views of the Signal
- self.sample_rate = getattr(obj, 'sample_rate', None)
- self.start = getattr(obj, 'start', None)
- self.stop = getattr(obj, 'stop', None)
-
- def __reduce__(self):
- # Get the parent's __reduce__ tuple
- state = super(Signal, self).__reduce__()
- # Create our own tuple to pass to __setstate__, but append the
- # __dict__ rather than individual members
- new_state = state[2] + (self.__dict__,)
- # Return a tuple that replaces the parent's __setstate__ tuple with
- # our own
- return state[0], state[1], new_state
-
- def __setstate__(self, state):
- # Update the internal dict from state
- self.__dict__.update(state[-1])
- # Call the parent's __setstate__ with the other tuple elements
- super(Signal, self).__setstate__(state[:-1])
-
- @property
- def num_samples(self):
- """Number of samples."""
- return len(self)
-
- @property
- def num_channels(self):
- """Number of channels."""
- # mono file
- if self.ndim == 1:
- return 1
- # multi channel file
- return np.shape(self)[1]
-
- @property
- def length(self):
- """Length of signal in seconds."""
- # n/a if the signal has no sample rate
- if self.sample_rate is None:
- return None
- return float(self.num_samples) / self.sample_rate
-
- def write(self, filename):
- """
- Write the signal to disk as a .wav file.
-
- Parameters
- ----------
- filename : str
- Name of the file.
-
- Returns
- -------
- filename : str
- Name of the written file.
-
- """
- return write_wave_file(self, filename)
-
- def energy(self):
- """Energy of signal."""
- return energy(self)
-
- def root_mean_square(self):
- """Root mean square of signal."""
- return root_mean_square(self)
-
- rms = root_mean_square
-
- def sound_pressure_level(self):
- """Sound pressure level of signal."""
- return sound_pressure_level(self)
-
- spl = sound_pressure_level
-
-
-class SignalProcessor(Processor):
- """
- The :class:`SignalProcessor` class is a basic signal processor.
-
- Parameters
- ----------
- sample_rate : int, optional
- Sample rate of the signal [Hz]; if set the signal will be re-sampled
- to that sample rate; if 'None' the sample rate of the audio file will
- be used.
- num_channels : int, optional
- Number of channels of the signal; if set, the signal will be reduced
- to that number of channels; if 'None' as many channels as present in
- the audio file are returned.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- norm : bool, optional
- Normalize the signal to the range [-1, +1].
- gain : float, optional
- Adjust the gain of the signal [dB].
- dtype : numpy data type, optional
- The data is returned with the given dtype. If 'None', it is returned
- with its original dtype, otherwise the signal gets rescaled. Integer
- dtypes use the complete value range, float dtypes the range [-1, +1].
-
- Examples
- --------
- Processor for loading the first two seconds of an audio file, re-sampling
- it to 22.05 kHz and down-mixing it to mono:
-
- >>> proc = SignalProcessor(sample_rate=22050, num_channels=1, stop=2)
- >>> sig = proc('tests/data/audio/sample.wav')
- >>> sig
- Signal([-2470, -2553, ..., -173, -265], dtype=int16)
- >>> sig.sample_rate
- 22050
- >>> sig.num_channels
- 1
- >>> sig.length
- 2.0
-
- """
-
- def __init__(self, sample_rate=SAMPLE_RATE, num_channels=NUM_CHANNELS,
- start=START, stop=STOP, norm=NORM, gain=GAIN, dtype=DTYPE,
- **kwargs):
- # pylint: disable=unused-argument
- self.sample_rate = sample_rate
- self.num_channels = num_channels
- self.start = start
- self.stop = stop
- self.norm = norm
- self.gain = gain
- self.dtype = dtype
-
- def process(self, data, **kwargs):
- """
- Processes the given audio file.
-
- Parameters
- ----------
- data : numpy array, str or file handle
- Data to be processed.
- kwargs : dict, optional
- Keyword arguments passed to :class:`Signal`.
-
- Returns
- -------
- signal : :class:`Signal` instance
- :class:`Signal` instance.
-
- """
- # pylint: disable=unused-argument
- # update arguments passed to FramedSignal
- args = dict(sample_rate=self.sample_rate,
- num_channels=self.num_channels, start=self.start,
- stop=self.stop, norm=self.norm, gain=self.gain,
- dtype=self.dtype)
- args.update(kwargs)
- # instantiate a Signal and return it
- return Signal(data, **args)
-
- @staticmethod
- def add_arguments(parser, sample_rate=None, mono=None, start=None,
- stop=None, norm=None, gain=None):
- """
- Add signal processing related arguments to an existing parser.
-
- Parameters
- ----------
- parser : argparse parser instance
- Existing argparse parser object.
- sample_rate : int, optional
- Re-sample the signal to this sample rate [Hz].
- mono : bool, optional
- Down-mix the signal to mono.
- start : float, optional
- Start position [seconds].
- stop : float, optional
- Stop position [seconds].
- norm : bool, optional
- Normalize the signal to the range [-1, +1].
- gain : float, optional
- Adjust the gain of the signal [dB].
-
- Returns
- -------
- argparse argument group
- Signal processing argument parser group.
-
- Notes
- -----
- Parameters are included in the group only if they are not 'None'. To
- include `start` and `stop` arguments with a default value of 'None',
- i.e. do not set any start or stop time, they can be set to 'True'.
-
- """
- # add signal processing options to the existing parser
- g = parser.add_argument_group('signal processing arguments')
- if sample_rate is not None:
- g.add_argument('--sample_rate', action='store', type=int,
- default=sample_rate, help='re-sample the signal to '
- 'this sample rate [Hz]')
- if mono is not None:
- g.add_argument('--mono', dest='num_channels', action='store_const',
- const=1, help='down-mix the signal to mono')
- if start is not None:
- g.add_argument('--start', action='store', type=float,
- help='start position of the signal [seconds]')
- if stop is not None:
- g.add_argument('--stop', action='store', type=float,
- help='stop position of the signal [seconds]')
- if norm is not None:
- g.add_argument('--norm', action='store_true', default=norm,
- help='normalize the signal [default=%(default)s]')
- if gain is not None:
- g.add_argument('--gain', action='store', type=float, default=gain,
- help='adjust the gain of the signal '
- '[dB, default=%(default).1f]')
- # return the argument group so it can be modified if needed
- return g
-
-
-# functions for splitting a signal into frames
-def signal_frame(signal, index, frame_size, hop_size, origin=0, pad=0):
- """
- This function returns frame at `index` of the `signal`.
-
- Parameters
- ----------
- signal : numpy array
- Signal.
- index : int
- Index of the frame to return.
- frame_size : int
- Size of each frame in samples.
- hop_size : float
- Hop size in samples between adjacent frames.
- origin : int
- Location of the window center relative to the signal position.
- pad : int, float or str, optional
- Pad parts of the frame not covered by the signal with this value.
- The literal 'repeat' can be used to indicate that the first/last value
- should be repeated.
-
- Returns
- -------
- frame : numpy array
- Requested frame of the signal.
-
- Notes
- -----
- The reference sample of the first frame (index == 0) refers to the first
- sample of the `signal`, and each following frame is placed `hop_size`
- samples after the previous one.
-
- The window is always centered around this reference sample. Its location
- relative to the reference sample can be set with the `origin` parameter.
- Arbitrary integer values can be given:
-
- - zero centers the window on its reference sample
- - negative values shift the window to the right
- - positive values shift the window to the left
-
- An `origin` of half the size of the `frame_size` results in windows located
- to the left of the reference sample, i.e. the first frame starts at the
- first sample of the signal.
-
- The part of the frame which is not covered by the signal is padded with
- zeros.
-
- This function is totally independent of the length of the signal. Thus,
- contrary to common indexing, the index '-1' refers NOT to the last frame
- of the signal, but instead the frame left of the first frame is returned.
-
- """
- # cast variables to int
- frame_size = int(frame_size)
- # length of the signal
- num_samples = len(signal)
- # seek to the correct position in the audio signal
- ref_sample = int(index * hop_size)
- # position the window
- start = ref_sample - frame_size // 2 - int(origin)
- stop = start + frame_size
- # return the requested portion of the signal
- # Note: use NumPy's advanced indexing (i.e. trailing comma) in order to
- # avoid a memory leak (issue #321). This returns a copy of the data,
- # however, returning a simple copy of the relevant portion of the
- # signal also leaks memory
- # Update: removing this hack again, since it seems that it is not needed
- # any more with recent NumPy versions
- if start >= 0 and stop <= num_samples:
- # normal read operation, return appropriate section
- return signal[start:stop]
-
- # part of the frame falls outside the signal, padding needed
- # Note: np.pad(signal[from: to], (pad_left, pad_right), mode='constant')
- # always returns a ndarray, not the subclass (and is slower);
- # usually np.zeros_like(signal[:frame_size]) is exactly what we want
- # (i.e. zeros of frame_size length and the same type/class as the
- # signal and not just the dtype), but since we have no guarantee that
- # the signal is that long, we have to use the np.repeat workaround
- frame = np.repeat(signal[:1], frame_size, axis=0)
-
- # determine how many samples need to be padded from left/right
- left, right = 0, 0
- if start < 0:
- left = min(stop, 0) - start
- # repeat beginning of signal
- frame[:left] = np.repeat(signal[:1], left, axis=0)
- if pad != 'repeat':
- frame[:left] = pad
- start = 0
- if stop > num_samples:
- right = stop - max(start, num_samples)
- # repeat end of signal
- frame[-right:] = np.repeat(signal[-1:], right, axis=0)
- if pad != 'repeat':
- frame[-right:] = pad
- stop = num_samples
-
- # position signal inside frame
- frame[left:frame_size - right] = signal[min(start, num_samples):
- max(stop, 0)]
- # return the frame
- return frame
-
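-# Illustrative example: with signal = np.arange(10), frame_size=4, hop_size=2
-# and origin=0, signal_frame(signal, 0, 4, 2) centres the window on sample 0
-# and zero-pads the two positions left of the signal, giving
-# array([0, 0, 0, 1]); signal_frame(signal, 1, 4, 2) returns array([0, 1, 2, 3]).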
-
-FRAME_SIZE = 2048
-HOP_SIZE = 441.
-FPS = None
-ORIGIN = 0
-END_OF_SIGNAL = 'normal'
-NUM_FRAMES = None
-
-
-# classes for splitting a signal into frames
-class FramedSignal(object):
- """
- The :class:`FramedSignal` splits a :class:`Signal` into frames and makes it
- iterable and indexable.
-
- Parameters
- ----------
- signal : :class:`Signal` instance
- Signal to be split into frames.
- frame_size : int, optional
- Size of one frame [samples].
- hop_size : float, optional
- Progress `hop_size` samples between adjacent frames.
- fps : float, optional
- Use given frames per second; if set, this computes and overwrites the
- given `hop_size` value.
- origin : int, optional
- Location of the window relative to the reference sample of a frame.
- end : int or str, optional
- End of signal handling (see notes below).
- num_frames : int, optional
- Number of frames to return.
- kwargs : dict, optional
- If no :class:`Signal` instance was given, one is instantiated with
- these additional keyword arguments.
-
- Notes
- -----
- The :class:`FramedSignal` class is implemented as an iterator. It splits
- the given `signal` automatically into frames of `frame_size` length with
- `hop_size` samples (can be float, normal rounding applies) between the
- frames. The reference sample of the first frame refers to the first sample
- of the `signal`.
-
- The location of the window relative to the reference sample of a frame can
- be set with the `origin` parameter (with the same behaviour as used by
- ``scipy.ndimage`` filters). Arbitrary integer values can be given:
-
- - zero centers the window on its reference sample,
- - negative values shift the window to the right,
- - positive values shift the window to the left.
-
- Additionally, it can have the following literal values:
-
- - 'center', 'offline': the window is centered on its reference sample,
- - 'left', 'past', 'online': the window is located to the left of its
- reference sample (including the reference sample),
- - 'right', 'future', 'stream': the window is located to the right of its
- reference sample.
-
- The `end` parameter is used to handle the end of signal behaviour and
- can have these values:
-
- - 'normal': stop as soon as the whole signal got covered by at least one
- frame (i.e. pad maximally one frame),
- - 'extend': frames are returned as long as part of the frame overlaps
- with the signal to cover the whole signal.
-
- Alternatively, `num_frames` can be used to retrieve a fixed number of
- frames.
-
- In order to be able to stack multiple frames obtained with different frame
- sizes, the number of frames to be returned must be independent from the set
- `frame_size`. It is not guaranteed that every sample of the signal is
- returned in a frame unless the `origin` is either 'right' or 'future'.
-
- If used in online real-time mode the parameters `origin` and `num_frames`
- should be set to 'stream' and 1, respectively.
-
- Examples
- --------
- To chop a :class:`Signal` (or anything a :class:`Signal` can be
- instantiated from) into overlapping frames of size 2048 with adjacent
- frames being 441 samples apart:
-
- >>> sig = Signal('tests/data/audio/sample.wav')
- >>> sig
- Signal([-2494, -2510, ..., 655, 639], dtype=int16)
- >>> frames = FramedSignal(sig, frame_size=2048, hop_size=441)
- >>> frames # doctest: +ELLIPSIS
- <madmom.audio.signal.FramedSignal object at 0x...>
- >>> frames[0]
- Signal([ 0, 0, ..., -4666, -4589], dtype=int16)
- >>> frames[10]
- Signal([-6156, -5645, ..., -253, 671], dtype=int16)
- >>> frames.fps
- 100.0
-
- Instead of passing a :class:`Signal` instance as the first argument,
- anything a :class:`Signal` can be instantiated from (e.g. a file name) can
- be used. We can also set the frames per second (`fps`) instead, they get
- converted to `hop_size` based on the `sample_rate` of the signal:
-
- >>> frames = FramedSignal('tests/data/audio/sample.wav', fps=100)
- >>> frames # doctest: +ELLIPSIS
- <madmom.audio.signal.FramedSignal object at 0x...>
- >>> frames[0]
- Signal([ 0, 0, ..., -4666, -4589], dtype=int16)
- >>> frames.frame_size, frames.hop_size
- (2048, 441.0)
-
- When trying to access an out of range frame, an IndexError is raised. Thus
- the FramedSignal can be used the same way as a numpy array or any other
- iterable.
-
- >>> frames = FramedSignal('tests/data/audio/sample.wav')
- >>> frames.num_frames
- 281
- >>> frames[281]
- Traceback (most recent call last):
- IndexError: end of signal reached
- >>> frames.shape
- (281, 2048)
-
- Slices are FramedSignals itself:
-
- >>> frames[:4] # doctest: +ELLIPSIS
- <madmom.audio.signal.FramedSignal object at 0x...>
-
- To obtain a numpy array from a FramedSignal, simply use np.array() on the
- full FramedSignal or a slice of it. Please note, that this requires a full
- memory copy.
-
- >>> np.array(frames[2:4])
- array([[ 0, 0, ..., -5316, -5405],
- [ 2215, 2281, ..., 561, 653]], dtype=int16)
-
- """
-
- def __init__(self, signal, frame_size=FRAME_SIZE, hop_size=HOP_SIZE,
- fps=FPS, origin=ORIGIN, end=END_OF_SIGNAL,
- num_frames=NUM_FRAMES, **kwargs):
-
- # signal handling
- if not isinstance(signal, Signal):
- # try to instantiate a Signal
- signal = Signal(signal, **kwargs)
-
- # save the signal
- self.signal = signal
-
- # arguments for splitting the signal into frames
- if frame_size:
- self.frame_size = int(frame_size)
- if hop_size:
- self.hop_size = float(hop_size)
- # use fps instead of hop_size
- if fps:
- # overwrite the hop_size
- self.hop_size = self.signal.sample_rate / float(fps)
-
- # translate literal window location values to numeric origin
- if origin in ('center', 'offline'):
- # window centered around the origin
- origin = 0
- elif origin in ('left', 'past', 'online'):
- # origin is the right edge of the frame, i.e. window to the left
- # Note: used when simulating online mode, where only past
- # information of the audio signal can be used
- origin = (frame_size - 1) / 2
- elif origin in ('right', 'future', 'stream'):
- # origin is the left edge of the frame, i.e. window to the right
- # Note: used when operating on live audio streams where we want
- # to retrieve a single frame. Instead of using 'online', we
- # "fake" the origin in order to retrieve the complete frame
- # provided by FramedSignalProcessor. This is a workaround to
- # be able to use the same processing chain in different modes
- origin = -(frame_size / 2)
- self.origin = int(origin)
-
- # number of frames determination
- if num_frames is None:
- if end == 'extend':
- # return frames as long as a frame covers any signal
- num_frames = np.floor(len(self.signal) /
- float(self.hop_size) + 1)
- elif end == 'normal':
- # return frames as long as the origin sample covers the signal
- num_frames = np.ceil(len(self.signal) / float(self.hop_size))
- else:
- raise ValueError("end of signal handling '%s' unknown" %
- end)
- self.num_frames = int(num_frames)
-
- # make the object indexable / iterable
- def __getitem__(self, index):
- """
- This makes the :class:`FramedSignal` class indexable and/or iterable.
-
- The signal is split into frames (of length `frame_size`) automatically.
- Two frames are located `hop_size` samples apart. If `hop_size` is a
- float, normal rounding applies.
-
- """
- # a single index is given
- if isinstance(index, integer_types):
- # negative indices
- if index < 0:
- index += self.num_frames
- # return the frame at the given index
- if index < self.num_frames:
- return signal_frame(self.signal, index,
- frame_size=self.frame_size,
- hop_size=self.hop_size, origin=self.origin)
- # otherwise raise an error to indicate the end of signal
- raise IndexError("end of signal reached")
- # a slice is given
- elif isinstance(index, slice):
- # determine the frames to return (limited to the number of frames)
- start, stop, step = index.indices(self.num_frames)
- # allow only normal steps
- if step != 1:
- raise ValueError('only slices with a step size of 1 supported')
- # determine the number of frames
- num_frames = stop - start
- # determine the new origin, i.e. start position
- origin = self.origin - self.hop_size * start
- # return a new FramedSignal instance covering the requested frames
- return FramedSignal(self.signal, frame_size=self.frame_size,
- hop_size=self.hop_size, origin=origin,
- num_frames=num_frames)
- # other index types are invalid
- else:
- raise TypeError("frame indices must be slices or integers")
-
- # len() returns the number of frames, consistent with __getitem__()
- def __len__(self):
- return self.num_frames
-
- @property
- def frame_rate(self):
- """Frame rate (same as fps)."""
- # n/a if the signal has no sample rate
- if self.signal.sample_rate is None:
- return None
- return float(self.signal.sample_rate) / self.hop_size
-
- @property
- def fps(self):
- """Frames per second."""
- return self.frame_rate
-
- @property
- def overlap_factor(self):
- """Overlapping factor of two adjacent frames."""
- return 1.0 - self.hop_size / self.frame_size
-
- @property
- def shape(self):
- """
- Shape of the FramedSignal (num_frames, frame_size[, num_channels]).
-
- """
- shape = self.num_frames, self.frame_size
- if self.signal.num_channels != 1:
- shape += (self.signal.num_channels, )
- return shape
-
- @property
- def ndim(self):
- """Dimensionality of the FramedSignal."""
- return len(self.shape)
-
- def energy(self):
- """Energy of the individual frames."""
- return energy(self)
-
- def root_mean_square(self):
- """Root mean square of the individual frames."""
- return root_mean_square(self)
-
- rms = root_mean_square
-
- def sound_pressure_level(self):
- """Sound pressure level of the individual frames."""
- return sound_pressure_level(self)
-
- spl = sound_pressure_level
-
-
-class FramedSignalProcessor(Processor):
- """
- Slice a Signal into frames.
-
- Parameters
- ----------
- frame_size : int, optional
- Size of one frame [samples].
- hop_size : float, optional
- Progress `hop_size` samples between adjacent frames.
- fps : float, optional
- Use given frames per second; if set, this computes and overwrites the
- given `hop_size` value.
- origin : int, optional
- Location of the window relative to the reference sample of a frame.
- end : int or str, optional
- End of signal handling (see :class:`FramedSignal`).
- num_frames : int, optional
- Number of frames to return.
-
- Notes
- -----
- When operating on live audio signals, `origin` must be set to 'stream' in
- order to always retrieve the last `frame_size` samples.
-
- Examples
- --------
- Processor for chopping a :class:`Signal` (or anything a :class:`Signal` can
- be instantiated from) into overlapping frames of size 2048, and a frame
- rate of 100 frames per second:
-
- >>> proc = FramedSignalProcessor(frame_size=2048, fps=100)
- >>> frames = proc('tests/data/audio/sample.wav')
- >>> frames # doctest: +ELLIPSIS
- <madmom.audio.signal.FramedSignal object at 0x...>
- >>> frames[0]
- Signal([ 0, 0, ..., -4666, -4589], dtype=int16)
- >>> frames[10]
- Signal([-6156, -5645, ..., -253, 671], dtype=int16)
- >>> frames.hop_size
- 441.0
-
- """
-
- def __init__(self, frame_size=FRAME_SIZE, hop_size=HOP_SIZE, fps=FPS,
- origin=ORIGIN, end=END_OF_SIGNAL, num_frames=NUM_FRAMES,
- **kwargs):
- # pylint: disable=unused-argument
- self.frame_size = frame_size
- self.hop_size = hop_size
- self.fps = fps # do not convert here, pass it to FramedSignal
- self.origin = origin
- self.end = end
- self.num_frames = num_frames
-
- def process(self, data, **kwargs):
- """
- Slice the signal into (overlapping) frames.
-
- Parameters
- ----------
- data : :class:`Signal` instance
- Signal to be sliced into frames.
- kwargs : dict, optional
- Keyword arguments passed to :class:`FramedSignal`.
-
- Returns
- -------
- frames : :class:`FramedSignal` instance
- FramedSignal instance
-
- """
- # update arguments passed to FramedSignal
- args = dict(frame_size=self.frame_size, hop_size=self.hop_size,
- fps=self.fps, origin=self.origin, end=self.end,
- num_frames=self.num_frames)
- args.update(kwargs)
- # always use the last `frame_size` samples if we operate on a live
- # audio stream, otherwise we get the wrong portion of the signal
- if self.origin == 'stream':
- data = data[-self.frame_size:]
- # instantiate a FramedSignal from the data and return it
- return FramedSignal(data, **args)
-
- @staticmethod
- def add_arguments(parser, frame_size=FRAME_SIZE, fps=FPS,
- online=None):
- """
- Add signal framing related arguments to an existing parser.
-
- Parameters
- ----------
- parser : argparse parser instance
- Existing argparse parser object.
- frame_size : int, optional
- Size of one frame in samples.
- fps : float, optional
- Frames per second.
- online : bool, optional
- Online mode (use only past signal information, i.e. align the
- window to the left of the reference sample).
-
- Returns
- -------
- argparse argument group
- Signal framing argument parser group.
-
- Notes
- -----
- Parameters are included in the group only if they are not 'None'.
-
- """
- # add signal framing options to the existing parser
- g = parser.add_argument_group('signal framing arguments')
- # depending on the type of frame_size, use different options
- if isinstance(frame_size, integer_types):
- g.add_argument('--frame_size', action='store', type=int,
- default=frame_size,
- help='frame size [samples, default=%(default)i]')
- elif isinstance(frame_size, list):
- # Note: this option can be used to stack multiple spectrograms
- # with different frame sizes
- from ..utils import OverrideDefaultListAction
- g.add_argument('--frame_size', type=int, default=frame_size,
- action=OverrideDefaultListAction, sep=',',
- help='(comma separated list of) frame size(s) to '
- 'use [samples, default=%(default)s]')
- if fps is not None:
- g.add_argument('--fps', action='store', type=float, default=fps,
- help='frames per second [default=%(default).1f]')
- if online is False:
- g.add_argument('--online', dest='origin', action='store_const',
- const='online', default='offline',
- help='operate in online mode [default=offline]')
- elif online is True:
- g.add_argument('--offline', dest='origin', action='store_const',
- const='offline', default='online',
- help='operate in offline mode [default=online]')
- # return the argument group so it can be modified if needed
- return g
-
-
-# class for online processing
-class Stream(object):
- """
- A Stream handles live (i.e. online, real-time) audio input via PyAudio.
-
- Parameters
- ----------
- sample_rate : int
- Sample rate of the signal.
- num_channels : int, optional
- Number of channels.
- dtype : numpy dtype, optional
- Data type for the signal.
- frame_size : int, optional
- Size of one frame [samples].
- hop_size : int, optional
- Progress `hop_size` samples between adjacent frames.
- fps : float, optional
- Use given frames per second; if set, this computes and overwrites the
- given `hop_size` value (the resulting `hop_size` must be an integer).
- stream_input_device : int, optional
- PyAudio device index of the desired input device.
- queue_size : int
- Size of the FIFO (first in first out) queue. If the queue is full and
- new audio samples arrive, the oldest item in the queue will be dropped.
-
- Notes
- -----
- Stream is implemented as an iterable which blocks until enough new data is
- available.
-
- """
-
- def __init__(self, sample_rate=SAMPLE_RATE, num_channels=NUM_CHANNELS,
- dtype=np.float32, frame_size=FRAME_SIZE, hop_size=HOP_SIZE,
- fps=FPS, stream_input_device=None, **kwargs):
- # import PyAudio here and not at the module level
- import pyaudio
- # set attributes
- self.sample_rate = sample_rate
- self.num_channels = 1 if num_channels is None else num_channels
- self.dtype = dtype
- if frame_size:
- self.frame_size = int(frame_size)
- if fps:
- # use fps instead of hop_size
- hop_size = self.sample_rate / float(fps)
- if int(hop_size) != hop_size:
- raise ValueError(
- 'only integer `hop_size` supported, not %s' % hop_size)
- self.hop_size = int(hop_size)
- self.stream_input_device = stream_input_device
- # init PyAudio
- self.pa = pyaudio.PyAudio()
- # init a stream to read audio samples from
- self.stream = self.pa.open(rate=self.sample_rate,
- channels=self.num_channels,
- format=pyaudio.paFloat32, input=True,
- frames_per_buffer=self.hop_size,
- input_device_index=self.stream_input_device,
- start=True)
- # create a buffer
- self.buffer = BufferProcessor(self.frame_size)
- # frame index counter
- self.frame_idx = 0
- # PyAudio flags
- self.paComplete = pyaudio.paComplete
- self.paContinue = pyaudio.paContinue
-
- def __iter__(self):
- return self
-
- def __next__(self):
- # get the desired number of samples (block until all are present)
- data = self.stream.read(self.hop_size, exception_on_overflow=False)
- # convert it to a numpy array
- data = np.frombuffer(data, dtype='float32').astype(self.dtype)
- # buffer the data (i.e. append hop_size samples and rotate)
- data = self.buffer(data)
- # wrap the last frame_size samples as a Signal
- # TODO: check float / int hop size; theoretically a float hop size
- # can be accomplished by making the buffer N samples bigger and
- # take the correct portion of the buffer
- start = (self.frame_idx * float(self.hop_size) / self.sample_rate)
- signal = Signal(data[-self.frame_size:], sample_rate=self.sample_rate,
- dtype=self.dtype, num_channels=self.num_channels,
- start=start)
- # increment the frame index
- self.frame_idx += 1
- return signal
-
- next = __next__
-
- def is_running(self):
- return self.stream.is_active()
-
- def close(self):
- self.stream.close()
- # TODO: is this the correct place to terminate PyAudio?
- self.pa.terminate()
-
- @property
- def shape(self):
- """Shape of the Stream (None, frame_size[, num_channels])."""
- shape = None, self.frame_size
- if self.num_channels != 1:
- shape += (self.num_channels,)
- return shape
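A minimal online-processing sketch built from the classes above; it assumes `Signal` and `BufferProcessor` from the same module are importable and that `root_mean_square` behaves as referenced by `FramedSignal.rms`:

```python
# Online mode: origin='stream' and num_frames=1, as recommended in the docstring.
stream = Stream(sample_rate=44100, frame_size=2048, fps=100)    # hop_size = 441 samples
framer = FramedSignalProcessor(frame_size=2048, origin='stream', num_frames=1)
try:
    for signal in stream:               # blocks until 441 new samples have arrived
        frame = framer.process(signal)  # FramedSignal holding the last 2048 samples
        print(frame.rms())              # e.g. root mean square of the current frame
except KeyboardInterrupt:
    stream.close()
```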
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/readme.md b/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/readme.md
deleted file mode 100644
index ca6f7d0d011eeefd7f18ff601cff5507c80b3fc7..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/CppDataProcess/readme.md
+++ /dev/null
@@ -1,8 +0,0 @@
-## F0Preprocess
-Please go to https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder to download the PyWorld source code, compile it into a static library, link it into your project, and then include this header file.
-
-## Slicer
-A simple slicer.
-
----
-~~The parts above were lifted directly from the MoeSS codebase and can serve as a drop-in replacement for the built-in preprocessing.~~
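For reference, the Python wrapper linked above exposes the same F0 pipeline directly; a minimal sketch (the input file name is hypothetical):

```python
import numpy as np
import pyworld as pw
import soundfile as sf

x, fs = sf.read('input.wav')                       # hypothetical input file
x = np.ascontiguousarray(x, dtype=np.float64)      # pyworld expects float64
f0, t = pw.dio(x, fs)                              # raw F0 estimation
f0 = pw.stonemask(x, f0, t, fs)                    # F0 refinement
```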
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/run.sh b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/run.sh
deleted file mode 100644
index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/exp/upernet_global_small/run.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-#!/usr/bin/env bash
-
-work_path=$(dirname $0)
-PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \
-python -m torch.distributed.launch --nproc_per_node=8 \
- tools/train.py ${work_path}/config.py \
- --launcher pytorch \
- --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \
- --work-dir ${work_path}/ckpt \
- 2>&1 | tee -a ${work_path}/log.txt
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/bbox.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/bbox.py
deleted file mode 100644
index 0c4d58b6c91f652933974f519acd3403a833e906..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/bbox.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps'])
-
-
-def bbox_overlaps(bboxes1, bboxes2, mode='iou', aligned=False, offset=0):
- """Calculate overlap between two set of bboxes.
-
- If ``aligned`` is ``False``, then calculate the ious between each bbox
- of bboxes1 and bboxes2, otherwise the ious between each aligned pair of
- bboxes1 and bboxes2.
-
- Args:
- bboxes1 (Tensor): shape (m, 4) in <x1, y1, x2, y2> format or empty.
- bboxes2 (Tensor): shape (n, 4) in <x1, y1, x2, y2> format or empty.
- If aligned is ``True``, then m and n must be equal.
- mode (str): "iou" (intersection over union) or iof (intersection over
- foreground).
-
- Returns:
- ious(Tensor): shape (m, n) if aligned == False else shape (m, 1)
-
- Example:
- >>> bboxes1 = torch.FloatTensor([
- >>> [0, 0, 10, 10],
- >>> [10, 10, 20, 20],
- >>> [32, 32, 38, 42],
- >>> ])
- >>> bboxes2 = torch.FloatTensor([
- >>> [0, 0, 10, 20],
- >>> [0, 10, 10, 19],
- >>> [10, 10, 20, 20],
- >>> ])
- >>> bbox_overlaps(bboxes1, bboxes2)
- tensor([[0.5000, 0.0000, 0.0000],
- [0.0000, 0.0000, 1.0000],
- [0.0000, 0.0000, 0.0000]])
-
- Example:
- >>> empty = torch.FloatTensor([])
- >>> nonempty = torch.FloatTensor([
- >>> [0, 0, 10, 9],
- >>> ])
- >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1)
- >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0)
- >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0)
- """
-
- mode_dict = {'iou': 0, 'iof': 1}
- assert mode in mode_dict.keys()
- mode_flag = mode_dict[mode]
- # Either the boxes are empty or the length of boxes' last dimension is 4
- assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0)
- assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0)
- assert offset == 1 or offset == 0
-
- rows = bboxes1.size(0)
- cols = bboxes2.size(0)
- if aligned:
- assert rows == cols
-
- if rows * cols == 0:
- return bboxes1.new(rows, 1) if aligned else bboxes1.new(rows, cols)
-
- if aligned:
- ious = bboxes1.new_zeros(rows)
- else:
- ious = bboxes1.new_zeros((rows, cols))
- ext_module.bbox_overlaps(
- bboxes1, bboxes2, ious, mode=mode_flag, aligned=aligned, offset=offset)
- return ious
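For the non-aligned 'iou' mode, the extension op above computes the same values as this pure-PyTorch sketch; it is shown only to illustrate the math, not as the CUDA implementation:

```python
import torch

def bbox_iou_reference(bboxes1, bboxes2, offset=0):
    # Pairwise IoU between (m, 4) and (n, 4) boxes in <x1, y1, x2, y2> format.
    area1 = (bboxes1[:, 2] - bboxes1[:, 0] + offset) * (bboxes1[:, 3] - bboxes1[:, 1] + offset)
    area2 = (bboxes2[:, 2] - bboxes2[:, 0] + offset) * (bboxes2[:, 3] - bboxes2[:, 1] + offset)
    lt = torch.max(bboxes1[:, None, :2], bboxes2[None, :, :2])   # (m, n, 2) top-left
    rb = torch.min(bboxes1[:, None, 2:], bboxes2[None, :, 2:])   # (m, n, 2) bottom-right
    wh = (rb - lt + offset).clamp(min=0)                          # intersection width/height
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2[None, :] - inter)
```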
diff --git a/spaces/MirageML/sjc/ncsn/normalization.py b/spaces/MirageML/sjc/ncsn/normalization.py
deleted file mode 100644
index 77f0dd4d2667f7868ce3352ab3ed1c1fcd525d34..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/ncsn/normalization.py
+++ /dev/null
@@ -1,208 +0,0 @@
-import torch
-import torch.nn as nn
-
-
-def get_normalization(config, conditional=True):
- norm = config.model.normalization
- if conditional:
- if norm == 'NoneNorm':
- return ConditionalNoneNorm2d
- elif norm == 'InstanceNorm++':
- return ConditionalInstanceNorm2dPlus
- elif norm == 'InstanceNorm':
- return ConditionalInstanceNorm2d
- elif norm == 'BatchNorm':
- return ConditionalBatchNorm2d
- elif norm == 'VarianceNorm':
- return ConditionalVarianceNorm2d
- else:
- raise NotImplementedError("{} does not exist!".format(norm))
- else:
- if norm == 'BatchNorm':
- return nn.BatchNorm2d
- elif norm == 'InstanceNorm':
- return nn.InstanceNorm2d
- elif norm == 'InstanceNorm++':
- return InstanceNorm2dPlus
- elif norm == 'VarianceNorm':
- return VarianceNorm2d
- elif norm == 'NoneNorm':
- return NoneNorm2d
- elif norm is None:
- return None
- else:
- raise NotImplementedError("{} does not exist!".format(norm))
-
-class ConditionalBatchNorm2d(nn.Module):
- def __init__(self, num_features, num_classes, bias=True):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.bn = nn.BatchNorm2d(num_features, affine=False)
- if self.bias:
- self.embed = nn.Embedding(num_classes, num_features * 2)
- self.embed.weight.data[:, :num_features].uniform_() # Initialise scale at N(1, 0.02)
- self.embed.weight.data[:, num_features:].zero_() # Initialise bias at 0
- else:
- self.embed = nn.Embedding(num_classes, num_features)
- self.embed.weight.data.uniform_()
-
- def forward(self, x, y):
- out = self.bn(x)
- if self.bias:
- gamma, beta = self.embed(y).chunk(2, dim=1)
- out = gamma.view(-1, self.num_features, 1, 1) * out + beta.view(-1, self.num_features, 1, 1)
- else:
- gamma = self.embed(y)
- out = gamma.view(-1, self.num_features, 1, 1) * out
- return out
-
-
-class ConditionalInstanceNorm2d(nn.Module):
- def __init__(self, num_features, num_classes, bias=True):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.instance_norm = nn.InstanceNorm2d(num_features, affine=False, track_running_stats=False)
- if bias:
- self.embed = nn.Embedding(num_classes, num_features * 2)
- self.embed.weight.data[:, :num_features].uniform_() # Initialise scale at N(1, 0.02)
- self.embed.weight.data[:, num_features:].zero_() # Initialise bias at 0
- else:
- self.embed = nn.Embedding(num_classes, num_features)
- self.embed.weight.data.uniform_()
-
- def forward(self, x, y):
- h = self.instance_norm(x)
- if self.bias:
- gamma, beta = self.embed(y).chunk(2, dim=-1)
- out = gamma.view(-1, self.num_features, 1, 1) * h + beta.view(-1, self.num_features, 1, 1)
- else:
- gamma = self.embed(y)
- out = gamma.view(-1, self.num_features, 1, 1) * h
- return out
-
-
-class ConditionalVarianceNorm2d(nn.Module):
- def __init__(self, num_features, num_classes, bias=False):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.embed = nn.Embedding(num_classes, num_features)
- self.embed.weight.data.normal_(1, 0.02)
-
- def forward(self, x, y):
- vars = torch.var(x, dim=(2, 3), keepdim=True)
- h = x / torch.sqrt(vars + 1e-5)
-
- gamma = self.embed(y)
- out = gamma.view(-1, self.num_features, 1, 1) * h
- return out
-
-
-class VarianceNorm2d(nn.Module):
- def __init__(self, num_features, bias=False):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.alpha = nn.Parameter(torch.zeros(num_features))
- self.alpha.data.normal_(1, 0.02)
-
- def forward(self, x):
- vars = torch.var(x, dim=(2, 3), keepdim=True)
- h = x / torch.sqrt(vars + 1e-5)
-
- out = self.alpha.view(-1, self.num_features, 1, 1) * h
- return out
-
-
-class ConditionalNoneNorm2d(nn.Module):
- def __init__(self, num_features, num_classes, bias=True):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- if bias:
- self.embed = nn.Embedding(num_classes, num_features * 2)
- self.embed.weight.data[:, :num_features].uniform_() # Initialise scale at N(1, 0.02)
- self.embed.weight.data[:, num_features:].zero_() # Initialise bias at 0
- else:
- self.embed = nn.Embedding(num_classes, num_features)
- self.embed.weight.data.uniform_()
-
- def forward(self, x, y):
- if self.bias:
- gamma, beta = self.embed(y).chunk(2, dim=-1)
- out = gamma.view(-1, self.num_features, 1, 1) * x + beta.view(-1, self.num_features, 1, 1)
- else:
- gamma = self.embed(y)
- out = gamma.view(-1, self.num_features, 1, 1) * x
- return out
-
-
-class NoneNorm2d(nn.Module):
- def __init__(self, num_features, bias=True):
- super().__init__()
-
- def forward(self, x):
- return x
-
-
-class InstanceNorm2dPlus(nn.Module):
- def __init__(self, num_features, bias=True):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.instance_norm = nn.InstanceNorm2d(num_features, affine=False, track_running_stats=False)
- self.alpha = nn.Parameter(torch.zeros(num_features))
- self.gamma = nn.Parameter(torch.zeros(num_features))
- self.alpha.data.normal_(1, 0.02)
- self.gamma.data.normal_(1, 0.02)
- if bias:
- self.beta = nn.Parameter(torch.zeros(num_features))
-
- def forward(self, x):
- means = torch.mean(x, dim=(2, 3))
- m = torch.mean(means, dim=-1, keepdim=True)
- v = torch.var(means, dim=-1, keepdim=True)
- means = (means - m) / (torch.sqrt(v + 1e-5))
- h = self.instance_norm(x)
-
- if self.bias:
- h = h + means[..., None, None] * self.alpha[..., None, None]
- out = self.gamma.view(-1, self.num_features, 1, 1) * h + self.beta.view(-1, self.num_features, 1, 1)
- else:
- h = h + means[..., None, None] * self.alpha[..., None, None]
- out = self.gamma.view(-1, self.num_features, 1, 1) * h
- return out
-
-
-class ConditionalInstanceNorm2dPlus(nn.Module):
- def __init__(self, num_features, num_classes, bias=True):
- super().__init__()
- self.num_features = num_features
- self.bias = bias
- self.instance_norm = nn.InstanceNorm2d(num_features, affine=False, track_running_stats=False)
- if bias:
- self.embed = nn.Embedding(num_classes, num_features * 3)
- self.embed.weight.data[:, :2 * num_features].normal_(1, 0.02) # Initialise scale at N(1, 0.02)
- self.embed.weight.data[:, 2 * num_features:].zero_() # Initialise bias at 0
- else:
- self.embed = nn.Embedding(num_classes, 2 * num_features)
- self.embed.weight.data.normal_(1, 0.02)
-
- def forward(self, x, y):
- means = torch.mean(x, dim=(2, 3))
- m = torch.mean(means, dim=-1, keepdim=True)
- v = torch.var(means, dim=-1, keepdim=True)
- means = (means - m) / (torch.sqrt(v + 1e-5))
- h = self.instance_norm(x)
-
- if self.bias:
- gamma, alpha, beta = self.embed(y).chunk(3, dim=-1)
- h = h + means[..., None, None] * alpha[..., None, None]
- out = gamma.view(-1, self.num_features, 1, 1) * h + beta.view(-1, self.num_features, 1, 1)
- else:
- gamma, alpha = self.embed(y).chunk(2, dim=-1)
- h = h + means[..., None, None] * alpha[..., None, None]
- out = gamma.view(-1, self.num_features, 1, 1) * h
- return out
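A short usage sketch of the last class defined above, assuming the module is importable; batch size, channel count and class count are arbitrary:

```python
import torch

# Hypothetical shapes: 4 images, 8 feature maps, 10 conditioning classes.
norm = ConditionalInstanceNorm2dPlus(num_features=8, num_classes=10)
x = torch.randn(4, 8, 32, 32)      # (batch, channels, height, width)
y = torch.randint(0, 10, (4,))     # per-sample class / noise-level index
out = norm(x, y)                   # same shape as x: (4, 8, 32, 32)
```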
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adam_step_5e.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adam_step_5e.py
deleted file mode 100644
index 73aad763608c78fa5c818ddc557b12f9f34056c8..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/_base_/schedules/schedule_adam_step_5e.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# optimizer
-optim_wrapper = dict(type='OptimWrapper', optimizer=dict(type='Adam', lr=1e-3))
-train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=5, val_interval=1)
-val_cfg = dict(type='ValLoop')
-test_cfg = dict(type='TestLoop')
-# learning policy
-param_scheduler = [
- dict(type='MultiStepLR', milestones=[3, 4], end=5),
-]
diff --git a/spaces/Mountchicken/MAERec-Gradio/tools/visualizations/vis_scheduler.py b/spaces/Mountchicken/MAERec-Gradio/tools/visualizations/vis_scheduler.py
deleted file mode 100644
index 4a2d4a3c7e75f9cba0b82456ec009acef214f5fc..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/tools/visualizations/vis_scheduler.py
+++ /dev/null
@@ -1,286 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import json
-import os.path as osp
-import re
-from pathlib import Path
-from unittest.mock import MagicMock
-
-import matplotlib.pyplot as plt
-import rich
-import torch.nn as nn
-from mmengine.config import Config, DictAction
-from mmengine.hooks import Hook
-from mmengine.model import BaseModel
-from mmengine.registry import init_default_scope
-from mmengine.runner import Runner
-from mmengine.visualization import Visualizer
-from rich.progress import BarColumn, MofNCompleteColumn, Progress, TextColumn
-
-from mmocr.registry import DATASETS
-
-
-class SimpleModel(BaseModel):
- """simple model that do nothing in train_step."""
-
- def __init__(self):
- super(SimpleModel, self).__init__()
- self.data_preprocessor = nn.Identity()
- self.conv = nn.Conv2d(1, 1, 1)
-
- def forward(self, inputs, data_samples, mode='tensor'):
- pass
-
- def train_step(self, data, optim_wrapper):
- pass
-
-
-class ParamRecordHook(Hook):
-
- def __init__(self, by_epoch):
- super().__init__()
- self.by_epoch = by_epoch
- self.lr_list = []
- self.momentum_list = []
- self.wd_list = []
- self.task_id = 0
- self.progress = Progress(BarColumn(), MofNCompleteColumn(),
- TextColumn('{task.description}'))
-
- def before_train(self, runner):
- if self.by_epoch:
- total = runner.train_loop.max_epochs
- self.task_id = self.progress.add_task(
- 'epochs', start=True, total=total)
- else:
- total = runner.train_loop.max_iters
- self.task_id = self.progress.add_task(
- 'iters', start=True, total=total)
- self.progress.start()
-
- def after_train_epoch(self, runner):
- if self.by_epoch:
- self.progress.update(self.task_id, advance=1)
-
- def after_train_iter(self, runner, batch_idx, data_batch, outputs):
- if not self.by_epoch:
- self.progress.update(self.task_id, advance=1)
- self.lr_list.append(runner.optim_wrapper.get_lr()['lr'][0])
- self.momentum_list.append(
- runner.optim_wrapper.get_momentum()['momentum'][0])
- self.wd_list.append(
- runner.optim_wrapper.param_groups[0]['weight_decay'])
-
- def after_train(self, runner):
- self.progress.stop()
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Visualize a Dataset Pipeline')
- parser.add_argument('config', help='config file path')
- parser.add_argument(
- '-p',
- '--parameter',
- type=str,
- default='lr',
- choices=['lr', 'momentum', 'wd'],
- help='The parameter whose change curve to visualize; choose from '
- '"lr", "wd" and "momentum". Defaults to "lr".')
- parser.add_argument(
- '-d',
- '--dataset-size',
- type=int,
- help='The size of the dataset. If specified, `build_dataset` will '
- 'be skipped and use this size as the dataset size.')
- parser.add_argument(
- '-n',
- '--ngpus',
- type=int,
- default=1,
- help='The number of GPUs used in training.')
- parser.add_argument(
- '-s',
- '--save-path',
- type=Path,
- help='The learning rate curve plot save path')
- parser.add_argument(
- '--log-level',
- default='WARNING',
- help='The log level of the handler and logger. Defaults to '
- 'WARNING.')
- parser.add_argument('--title', type=str, help='title of figure')
- parser.add_argument(
- '--style', type=str, default='whitegrid', help='style of plt')
- parser.add_argument('--not-show', default=False, action='store_true')
- parser.add_argument(
- '--window-size',
- default='12*7',
- help='Size of the window to display images, in format of "$W*$H".')
- parser.add_argument(
- '--cfg-options',
- nargs='+',
- action=DictAction,
- help='override some settings in the used config, the key-value pair '
- 'in xxx=yyy format will be merged into config file. If the value to '
- 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
- 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
- 'Note that the quotation marks are necessary and that no white space '
- 'is allowed.')
- args = parser.parse_args()
- if args.window_size != '':
- assert re.match(r'\d+\*\d+', args.window_size), \
- "'window-size' must be in format 'W*H'."
-
- return args
-
-
-def plot_curve(lr_list, args, param_name, iters_per_epoch, by_epoch=True):
- """Plot learning rate vs iter graph."""
- try:
- import seaborn as sns
- sns.set_style(args.style)
- except ImportError:
- pass
-
- wind_w, wind_h = args.window_size.split('*')
- wind_w, wind_h = int(wind_w), int(wind_h)
- plt.figure(figsize=(wind_w, wind_h))
-
- ax: plt.Axes = plt.subplot()
- ax.plot(lr_list, linewidth=1)
-
- if by_epoch:
- ax.xaxis.tick_top()
- ax.set_xlabel('Iters')
- ax.xaxis.set_label_position('top')
- sec_ax = ax.secondary_xaxis(
- 'bottom',
- functions=(lambda x: x / iters_per_epoch,
- lambda y: y * iters_per_epoch))
- sec_ax.set_xlabel('Epochs')
- else:
- plt.xlabel('Iters')
- plt.ylabel(param_name)
-
- if args.title is None:
- plt.title(f'{osp.basename(args.config)} {param_name} curve')
- else:
- plt.title(args.title)
-
-
-def simulate_train(data_loader, cfg, by_epoch):
- model = SimpleModel()
- param_record_hook = ParamRecordHook(by_epoch=by_epoch)
- default_hooks = dict(
- param_scheduler=cfg.default_hooks['param_scheduler'],
- runtime_info=None,
- timer=None,
- logger=None,
- checkpoint=None,
- sampler_seed=None,
- param_record=param_record_hook)
-
- runner = Runner(
- model=model,
- work_dir=cfg.work_dir,
- train_dataloader=data_loader,
- train_cfg=cfg.train_cfg,
- log_level=cfg.log_level,
- optim_wrapper=cfg.optim_wrapper,
- param_scheduler=cfg.param_scheduler,
- default_scope=cfg.default_scope,
- default_hooks=default_hooks,
- visualizer=MagicMock(spec=Visualizer),
- custom_hooks=cfg.get('custom_hooks', None))
-
- runner.train()
-
- param_dict = dict(
- lr=param_record_hook.lr_list,
- momentum=param_record_hook.momentum_list,
- wd=param_record_hook.wd_list)
-
- return param_dict
-
-
-def build_dataset(cfg):
- return DATASETS.build(cfg)
-
-
-def main():
- args = parse_args()
- cfg = Config.fromfile(args.config)
-
- init_default_scope(cfg.get('default_scope', 'mmocr'))
-
- if args.cfg_options is not None:
- cfg.merge_from_dict(args.cfg_options)
- if cfg.get('work_dir', None) is None:
- # use config filename as default work_dir if cfg.work_dir is None
- cfg.work_dir = osp.join('./work_dirs',
- osp.splitext(osp.basename(args.config))[0])
-
- cfg.log_level = args.log_level
-
- # make sure save_root exists
- if args.save_path and not args.save_path.parent.exists():
- raise FileNotFoundError(
- f'The save path is {args.save_path}, and directory '
- f"'{args.save_path.parent}' do not exist.")
-
- # init logger
- print('Param_scheduler :')
- rich.print_json(json.dumps(cfg.param_scheduler))
-
- # prepare data loader
- batch_size = cfg.train_dataloader.batch_size * args.ngpus
-
- if 'by_epoch' in cfg.train_cfg:
- by_epoch = cfg.train_cfg.get('by_epoch')
- elif 'type' in cfg.train_cfg:
- by_epoch = cfg.train_cfg.get('type') == 'EpochBasedTrainLoop'
- else:
- raise ValueError('please set `train_cfg`.')
-
- if args.dataset_size is None and by_epoch:
- dataset_size = len(build_dataset(cfg.train_dataloader.dataset))
- else:
- dataset_size = args.dataset_size or batch_size
-
- class FakeDataloader(list):
- dataset = MagicMock(metainfo=None)
-
- data_loader = FakeDataloader(range(dataset_size // batch_size))
- dataset_info = (
- f'\nDataset infos:'
- f'\n - Dataset size: {dataset_size}'
- f'\n - Batch size per GPU: {cfg.train_dataloader.batch_size}'
- f'\n - Number of GPUs: {args.ngpus}'
- f'\n - Total batch size: {batch_size}')
- if by_epoch:
- dataset_info += f'\n - Iterations per epoch: {len(data_loader)}'
- rich.print(dataset_info + '\n')
-
- # simulation training process
- param_dict = simulate_train(data_loader, cfg, by_epoch)
- param_list = param_dict[args.parameter]
-
- if args.parameter == 'lr':
- param_name = 'Learning Rate'
- elif args.parameter == 'momentum':
- param_name = 'Momentum'
- else:
- param_name = 'Weight Decay'
- plot_curve(param_list, args, param_name, len(data_loader), by_epoch)
-
- if args.save_path:
- plt.savefig(args.save_path)
- print(f'\nThe {param_name} graph is saved at {args.save_path}')
-
- if not args.not_show:
- plt.show()
-
-
-if __name__ == '__main__':
- main()
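The script above simulates a training run only to record the scheduler output per iteration; conceptually it reduces to a sketch like the following, with a stand-in model and an arbitrary `MultiStepLR` schedule:

```python
import matplotlib.pyplot as plt
import torch

# Record the learning rate after every optimizer/scheduler step.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[300, 400], gamma=0.1)
lr_list = []
for _ in range(500):
    optimizer.step()
    scheduler.step()
    lr_list.append(optimizer.param_groups[0]['lr'])

plt.plot(lr_list)
plt.xlabel('Iters')
plt.ylabel('Learning Rate')
plt.show()
```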
diff --git a/spaces/NCTCMumbai/NCTC/models/official/core/input_reader.py b/spaces/NCTCMumbai/NCTC/models/official/core/input_reader.py
deleted file mode 100644
index 52f6e84e4bd02d4178586556ca191912de18fc18..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/core/input_reader.py
+++ /dev/null
@@ -1,223 +0,0 @@
-# Lint as: python3
-# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""A common dataset reader."""
-
-from typing import Any, Callable, List, Optional
-
-import tensorflow as tf
-import tensorflow_datasets as tfds
-
-from official.modeling.hyperparams import config_definitions as cfg
-
-
-class InputReader:
- """Input reader that returns a tf.data.Dataset instance."""
-
- def __init__(self,
- params: cfg.DataConfig,
- shards: Optional[List[str]] = None,
- dataset_fn=tf.data.TFRecordDataset,
- decoder_fn: Optional[Callable[..., Any]] = None,
- parser_fn: Optional[Callable[..., Any]] = None,
- dataset_transform_fn: Optional[Callable[[tf.data.Dataset],
- tf.data.Dataset]] = None,
- postprocess_fn: Optional[Callable[..., Any]] = None):
- """Initializes an InputReader instance.
-
- Args:
- params: A config_definitions.DataConfig object.
- shards: A list of files to be read. If given, read from these files.
- Otherwise, read from params.input_path.
- dataset_fn: A `tf.data.Dataset` that consumes the input files. For
- example, it can be `tf.data.TFRecordDataset`.
- decoder_fn: An optional `callable` that takes the serialized data string
- and decodes them into the raw tensor dictionary.
- parser_fn: An optional `callable` that takes the decoded raw tensors dict
- and parses them into a dictionary of tensors that can be consumed by the
- model. It will be executed after decoder_fn.
- dataset_transform_fn: An optional `callable` that takes a
- `tf.data.Dataset` object and returns a `tf.data.Dataset`. It will be
- executed after parser_fn.
- postprocess_fn: An optional `callable` that processes batched tensors. It
- will be executed after batching.
- """
- if params.input_path and params.tfds_name:
- raise ValueError('At most one of `input_path` and `tfds_name` can be '
- 'specified, but got %s and %s.' % (
- params.input_path, params.tfds_name))
- self._shards = shards
- self._tfds_builder = None
- if self._shards:
- self._num_files = len(self._shards)
- elif not params.tfds_name:
- self._input_patterns = params.input_path.strip().split(',')
- self._num_files = 0
- for input_pattern in self._input_patterns:
- input_pattern = input_pattern.strip()
- if not input_pattern:
- continue
- matched_files = tf.io.gfile.glob(input_pattern)
- if not matched_files:
- raise ValueError('%s does not match any files.' % input_pattern)
- else:
- self._num_files += len(matched_files)
- if self._num_files == 0:
- raise ValueError('%s does not match any files.' % params.input_path)
- else:
- if not params.tfds_split:
- raise ValueError(
- '`tfds_name` is %s, but `tfds_split` is not specified.' %
- params.tfds_name)
- self._tfds_builder = tfds.builder(
- params.tfds_name, data_dir=params.tfds_data_dir)
-
- self._global_batch_size = params.global_batch_size
- self._is_training = params.is_training
- self._drop_remainder = params.drop_remainder
- self._shuffle_buffer_size = params.shuffle_buffer_size
- self._cache = params.cache
- self._cycle_length = params.cycle_length
- self._block_length = params.block_length
- self._sharding = params.sharding
- self._examples_consume = params.examples_consume
- self._tfds_split = params.tfds_split
- self._tfds_download = params.tfds_download
- self._tfds_as_supervised = params.tfds_as_supervised
- self._tfds_skip_decoding_feature = params.tfds_skip_decoding_feature
-
- self._dataset_fn = dataset_fn
- self._decoder_fn = decoder_fn
- self._parser_fn = parser_fn
- self._dataset_transform_fn = dataset_transform_fn
- self._postprocess_fn = postprocess_fn
-
- def _read_sharded_files(
- self,
- input_context: Optional[tf.distribute.InputContext] = None):
- """Reads a dataset from sharded files."""
- # Read from `self._shards` if it is provided.
- if self._shards:
- dataset = tf.data.Dataset.from_tensor_slices(self._shards)
- else:
- dataset = tf.data.Dataset.list_files(
- self._input_patterns, shuffle=self._is_training)
- if self._sharding and input_context and (
- input_context.num_input_pipelines > 1):
- dataset = dataset.shard(input_context.num_input_pipelines,
- input_context.input_pipeline_id)
- if self._is_training:
- dataset = dataset.repeat()
-
- dataset = dataset.interleave(
- map_func=self._dataset_fn,
- cycle_length=self._cycle_length,
- block_length=self._block_length,
- num_parallel_calls=tf.data.experimental.AUTOTUNE)
- return dataset
-
- def _read_single_file(
- self,
- input_context: Optional[tf.distribute.InputContext] = None):
- """Reads a dataset from a single file."""
- # Read from `self._shards` if it is provided.
- dataset = self._dataset_fn(self._shards or self._input_patterns)
-
- # When `input_file` is a path to a single file, disable auto sharding
- # so that same input file is sent to all workers.
- options = tf.data.Options()
- options.experimental_distribute.auto_shard_policy = (
- tf.data.experimental.AutoShardPolicy.OFF)
- dataset = dataset.with_options(options)
- if self._sharding and input_context and (
- input_context.num_input_pipelines > 1):
- dataset = dataset.shard(input_context.num_input_pipelines,
- input_context.input_pipeline_id)
- if self._is_training:
- dataset = dataset.repeat()
- return dataset
-
- def _read_tfds(
- self,
- input_context: Optional[tf.distribute.InputContext] = None
- ) -> tf.data.Dataset:
- """Reads a dataset from tfds."""
- if self._tfds_download:
- self._tfds_builder.download_and_prepare()
-
- read_config = tfds.ReadConfig(
- interleave_cycle_length=self._cycle_length,
- interleave_block_length=self._block_length,
- input_context=input_context)
- decoders = {}
- if self._tfds_skip_decoding_feature:
- for skip_feature in self._tfds_skip_decoding_feature.split(','):
- decoders[skip_feature.strip()] = tfds.decode.SkipDecoding()
- dataset = self._tfds_builder.as_dataset(
- split=self._tfds_split,
- shuffle_files=self._is_training,
- as_supervised=self._tfds_as_supervised,
- decoders=decoders,
- read_config=read_config)
- return dataset
-
- @property
- def tfds_info(self) -> tfds.core.DatasetInfo:
- """Returns TFDS dataset info, if available."""
- if self._tfds_builder:
- return self._tfds_builder.info
- else:
- raise ValueError('tfds_info is not available, because the dataset '
- 'is not loaded from tfds.')
-
- def read(
- self,
- input_context: Optional[tf.distribute.InputContext] = None
- ) -> tf.data.Dataset:
- """Generates a tf.data.Dataset object."""
- if self._tfds_builder:
- dataset = self._read_tfds(input_context)
- elif self._num_files > 1:
- dataset = self._read_sharded_files(input_context)
- else:
- assert self._num_files == 1
- dataset = self._read_single_file(input_context)
-
- if self._cache:
- dataset = dataset.cache()
-
- if self._is_training:
- dataset = dataset.shuffle(self._shuffle_buffer_size)
-
- if self._examples_consume > 0:
- dataset = dataset.take(self._examples_consume)
-
- def maybe_map_fn(dataset, fn):
- return dataset if fn is None else dataset.map(
- fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
-
- dataset = maybe_map_fn(dataset, self._decoder_fn)
- dataset = maybe_map_fn(dataset, self._parser_fn)
-
- if self._dataset_transform_fn is not None:
- dataset = self._dataset_transform_fn(dataset)
-
- per_replica_batch_size = input_context.get_per_replica_batch_size(
- self._global_batch_size) if input_context else self._global_batch_size
-
- dataset = dataset.batch(
- per_replica_batch_size, drop_remainder=self._drop_remainder)
- dataset = maybe_map_fn(dataset, self._postprocess_fn)
- return dataset.prefetch(tf.data.experimental.AUTOTUNE)
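A hedged usage sketch of the reader above; `params` stands for a `cfg.DataConfig`-like object (with `input_path`, `global_batch_size`, `is_training`, ...) and the TFRecord feature spec is purely hypothetical:

```python
import tensorflow as tf

# `params` is assumed to be a cfg.DataConfig-like object configured elsewhere,
# e.g. with input_path='data/*.tfrecord' and global_batch_size=64.
def decoder_fn(record):
    features = {'x': tf.io.FixedLenFeature([28 * 28], tf.float32),
                'y': tf.io.FixedLenFeature([], tf.int64)}
    return tf.io.parse_single_example(record, features)

def parser_fn(decoded):
    return decoded['x'], decoded['y']

reader = InputReader(params,
                     dataset_fn=tf.data.TFRecordDataset,
                     decoder_fn=decoder_fn,
                     parser_fn=parser_fn)
dataset = reader.read()   # batched tf.data.Dataset of (x, y) pairs
```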
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/README.md b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/README.md
deleted file mode 100644
index 42f299a3f2308f63f5339bd3f639bef0607f5e97..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/README.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Layers
-
-Layers are the fundamental building blocks for NLP models. They can be used to
-assemble new layers, networks, or models.
-
-* [DenseEinsum](dense_einsum.py) implements a feedforward network using
- tf.einsum. This layer contains the einsum op, the associated weight, and the
- logic required to generate the einsum expression for the given
- initialization parameters.
-
-* [MultiHeadAttention](attention.py) implements an optionally masked attention
- between query, key, value tensors as described in
- ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762). If
- `from_tensor` and `to_tensor` are the same, then this is self-attention.
-
-* [CachedAttention](attention.py) implements an attention layer with cache
- used for auto-regressive decoding.
-
-* [MultiChannelAttention](multi_channel_attention.py) implements a variant of
- multi-head attention which can be used to merge multiple streams for
- cross-attentions.
-
-* [TalkingHeadsAttention](talking_heads_attention.py) implements the talking
- heads attention, as described in
- ["Talking-Heads Attention"](https://arxiv.org/abs/2003.02436).
-
-* [Transformer](transformer.py) implements an optionally masked transformer as
- described in
- ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762).
-
-* [TransformerDecoderLayer](transformer.py) TransformerDecoderLayer is made up
- of self multi-head attention, cross multi-head attention and
- a feedforward network.
-
-* [ReZeroTransformer](rezero_transformer.py) implements Transformer with
- ReZero described in
- ["ReZero is All You Need: Fast Convergence at Large Depth"](https://arxiv.org/abs/2003.04887).
-
-* [OnDeviceEmbedding](on_device_embedding.py) implements efficient embedding
- lookups designed for TPU-based models.
-
-* [PositionalEmbedding](position_embedding.py) creates a positional embedding
- as described in ["BERT: Pre-training of Deep Bidirectional Transformers for
- Language Understanding"](https://arxiv.org/abs/1810.04805).
-
-* [SelfAttentionMask](self_attention_mask.py) creates a 3D attention mask from
- a 2D tensor mask.
-
-* [MaskedSoftmax](masked_softmax.py) implements a softmax with an optional
- masking input. If no mask is provided to this layer, it performs a standard
- softmax; however, if a mask tensor is applied (which should be 1 in
- positions where the data should be allowed through, and 0 where the data
- should be masked), the output will have masked positions set to
- approximately zero.
-
-* [`MaskedLM`](masked_lm.py) implements a masked language model. It assumes
- the embedding table variable is passed to it.
-
-* [ClassificationHead](cls_head.py) A pooling head over a sequence of
- embeddings, commonly used by classification tasks.
-
-* [GatedFeedforward](gated_feedforward.py) implements the gated linear layer
- feedforward as described in
- ["GLU Variants Improve Transformer"](https://arxiv.org/abs/2002.05202).
diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py
deleted file mode 100644
index 32a4c9c9b72a15b1a4e1ad0cc83308fb9f465426..0000000000000000000000000000000000000000
--- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/setup.py
+++ /dev/null
@@ -1,118 +0,0 @@
-#!/usr/bin/env python
-
-from setuptools import find_packages, setup
-
-import os
-import subprocess
-import time
-
-version_file = "realesrgan/version.py"
-
-
-def readme():
- with open("README.md", encoding="utf-8") as f:
- content = f.read()
- return content
-
-
-def get_git_hash():
- def _minimal_ext_cmd(cmd):
- # construct minimal environment
- env = {}
- for k in ["SYSTEMROOT", "PATH", "HOME"]:
- v = os.environ.get(k)
- if v is not None:
- env[k] = v
- # LANGUAGE is used on win32
- env["LANGUAGE"] = "C"
- env["LANG"] = "C"
- env["LC_ALL"] = "C"
- out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
- return out
-
- try:
- out = _minimal_ext_cmd(["git", "rev-parse", "HEAD"])
- sha = out.strip().decode("ascii")
- except OSError:
- sha = "unknown"
-
- return sha
-
-
-def get_hash():
- if os.path.exists(".git"):
- sha = get_git_hash()[:7]
- else:
- sha = "unknown"
-
- return sha
-
-
-def write_version_py():
- content = """# GENERATED VERSION FILE
-# TIME: {}
-__version__ = '{}'
-__gitsha__ = '{}'
-version_info = ({})
-"""
- sha = get_hash()
- with open("VERSION", "r") as f:
- SHORT_VERSION = f.read().strip()
- VERSION_INFO = ", ".join(
- [x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split(".")]
- )
-
- version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO)
- with open(version_file, "w") as f:
- f.write(version_file_str)
-
-
-def get_version():
- with open(version_file, "r") as f:
- exec(compile(f.read(), version_file, "exec"))
- return locals()["__version__"]
-
-
-def get_requirements(filename="requirements.txt"):
- here = os.path.dirname(os.path.realpath(__file__))
- with open(os.path.join(here, filename), "r") as f:
- requires = [line.replace("\n", "") for line in f.readlines()]
- return requires
-
-
-if __name__ == "__main__":
- write_version_py()
- setup(
- name="realesrgan",
- version=get_version(),
- description="Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration",
- long_description=readme(),
- long_description_content_type="text/markdown",
- author="Xintao Wang",
- author_email="xintao.wang@outlook.com",
- keywords="computer vision, pytorch, image restoration, super-resolution, esrgan, real-esrgan",
- url="https://github.com/xinntao/Real-ESRGAN",
- include_package_data=True,
- packages=find_packages(
- exclude=(
- "options",
- "datasets",
- "experiments",
- "results",
- "tb_logger",
- "wandb",
- )
- ),
- classifiers=[
- "Development Status :: 4 - Beta",
- "License :: OSI Approved :: Apache Software License",
- "Operating System :: OS Independent",
- "Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
- ],
- license="BSD-3-Clause License",
- setup_requires=["cython", "numpy"],
- install_requires=get_requirements(),
- zip_safe=False,
- )
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py
deleted file mode 100644
index 7faae73119321af0b34fe8e26499a2ef5577291a..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-
-for file in os.listdir(os.path.dirname(__file__)):
- if file.endswith(".py") and not file.startswith("_"):
- criterion_name = file[: file.find(".py")]
- importlib.import_module(
- "examples.speech_text_joint_to_text.criterions." + criterion_name
- )
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/adaptive_loss.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/adaptive_loss.py
deleted file mode 100644
index 6209ceaedb6d8120ad820c11b55c13596447933c..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/adaptive_loss.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass
-
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.constants import DDP_BACKEND_CHOICES
-from omegaconf import II
-
-
-@dataclass
-class AdaptiveLossConfig(FairseqDataclass):
- sentence_avg: bool = II("optimization.sentence_avg")
- ddp_backend: DDP_BACKEND_CHOICES = II("distributed_training.ddp_backend")
-
-
-@register_criterion("adaptive_loss", dataclass=AdaptiveLossConfig)
-class AdaptiveLoss(FairseqCriterion):
- """This is an implementation of the loss function accompanying the adaptive softmax approximation for
- graphical processing units (GPU), described in the paper "Efficient softmax approximation for GPUs"
- (http://arxiv.org/abs/1609.04309)."""
-
- def __init__(self, task, sentence_avg):
- super().__init__(task)
- self.sentence_avg = sentence_avg
-
- @classmethod
- def build_criterion(cls, cfg: AdaptiveLossConfig, task):
- if cfg.ddp_backend in {"c10d", "pytorch_ddp"}:
- raise Exception(
- "AdaptiveLoss is not compatible with the PyTorch "
- "version of DistributedDataParallel. Please use "
- "`--ddp-backend=legacy_ddp` instead."
- )
- return cls(task, cfg.sentence_avg)
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
-
- assert (
- hasattr(model.decoder, "adaptive_softmax")
- and model.decoder.adaptive_softmax is not None
- )
- adaptive_softmax = model.decoder.adaptive_softmax
-
- net_output = model(**sample["net_input"])
- orig_target = model.get_targets(sample, net_output)
-
- nsentences = orig_target.size(0)
- orig_target = orig_target.view(-1)
-
- bsz = orig_target.size(0)
-
- logits, target = adaptive_softmax(net_output[0], orig_target)
- assert len(target) == len(logits)
-
- loss = net_output[0].new(1 if reduce else bsz).zero_()
-
- for i in range(len(target)):
- if target[i] is not None:
- assert target[i].min() >= 0 and target[i].max() <= logits[i].size(1)
- loss += F.cross_entropy(
- logits[i],
- target[i],
- ignore_index=self.padding_idx,
- reduction="sum" if reduce else "none",
- )
-
- orig = utils.strip_pad(orig_target, self.padding_idx)
- ntokens = orig.numel()
- sample_size = sample["target"].size(0) if self.sentence_avg else ntokens
- logging_output = {
- "loss": loss.data,
- "ntokens": ntokens,
- "nsentences": nsentences,
- "sample_size": sample_size,
- }
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
- ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
- else:
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
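The same approximation is available directly in PyTorch as `torch.nn.AdaptiveLogSoftmaxWithLoss`; this is not what fairseq uses here, but it illustrates the idea. Dimensions and cutoffs below are arbitrary:

```python
import torch
import torch.nn as nn

asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=512, n_classes=10000,
                                    cutoffs=[100, 1000, 5000])
hidden = torch.randn(32, 512)                 # decoder output features
targets = torch.randint(0, 10000, (32,))      # target token ids
out = asm(hidden, targets)
print(out.output.shape, out.loss)             # per-sample target log-probs, mean NLL
```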
diff --git a/spaces/ORI-Muchim/MarinTTS/app.py b/spaces/ORI-Muchim/MarinTTS/app.py
deleted file mode 100644
index 69d607ab985afd5944ad0413768072d79806e525..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/MarinTTS/app.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import json
-import os
-import re
-
-import librosa
-import numpy as np
-import torch
-from torch import no_grad, LongTensor
-import commons
-import utils
-import gradio as gr
-from models import SynthesizerTrn
-from text import text_to_sequence, _clean_text
-from mel_processing import spectrogram_torch
-
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-
-def get_text(text, hps, is_phoneme):
- text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm
-
-
-def create_tts_fn(model, hps, speaker_ids):
- def tts_fn(text, speaker, speed, is_phoneme):
- if limitation:
- text_len = len(text)
- max_len = 500
- if is_phoneme:
- max_len *= 3
- else:
- if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners":
- text_len = len(re.sub("(\[ZH\]|\[JA\])", "", text))
- if text_len > max_len:
- return "Error: Text is too long", None
-
- speaker_id = speaker_ids[speaker]
- stn_tst = get_text(text, hps, is_phoneme)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0)
- x_tst_lengths = LongTensor([stn_tst.size(0)])
- sid = LongTensor([speaker_id])
- audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8,
- length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy()
- del stn_tst, x_tst, x_tst_lengths, sid
- return "Success", (hps.data.sampling_rate, audio)
-
- return tts_fn
-
-
-
-
-
-def create_to_phoneme_fn(hps):
- def to_phoneme_fn(text):
- return _clean_text(text, hps.data.text_cleaners) if text != "" else ""
-
- return to_phoneme_fn
-
-
-css = """
- #advanced-btn {
- color: white;
- border-color: black;
- background: black;
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 24px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
-"""
-
-if __name__ == '__main__':
- models_tts = []
- name = 'MarinTTS'
- lang = '日本語 (Japanese)'
- example = 'こんにちは。私は北川まりんです。'
- config_path = f"saved_model/config.json"
- model_path = f"saved_model/model.pth"
- cover_path = f"saved_model/cover.png"
- hps = utils.get_hparams_from_file(config_path)
- model = SynthesizerTrn(
- len(hps.symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model)
- utils.load_checkpoint(model_path, model, None)
- model.eval()
- speaker_ids = [0]
- speakers = [name]
-
- t = 'vits'
- models_tts.append((name, cover_path, speakers, lang, example,
- hps.symbols, create_tts_fn(model, hps, speaker_ids),
- create_to_phoneme_fn(hps)))
-
- app = gr.Blocks(css=css)
-
- with app:
- gr.Markdown("# My DressUp Darling MarinTTS Using Vits Model\n"
- "\n\n")
-
- for i, (name, cover_path, speakers, lang, example, symbols, tts_fn,
- to_phoneme_fn) in enumerate(models_tts):
-
- with gr.Column():
- gr.Markdown(f"## {name}\n\n"
- f"\n\n"
- f"lang: {lang}")
- tts_input1 = gr.TextArea(label="Text (500 words limitation)", value=example,
- elem_id=f"tts-input{i}")
- tts_input2 = gr.Dropdown(label="Speaker", choices=speakers,
- type="index", value=speakers[0])
- tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.1, maximum=2, step=0.1)
- with gr.Accordion(label="Advanced Options", open=False):
- phoneme_input = gr.Checkbox(value=False, label="Phoneme input")
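-                    # when checked, the input is treated as raw phoneme symbols and the text cleaners are skipped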
-                    to_phoneme_btn = gr.Button("Convert text to phoneme")
- phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1],
- samples=[[x] for x in symbols],
- elem_id=f"phoneme-list{i}")
- phoneme_list_json = gr.Json(value=symbols, visible=False)
- tts_submit = gr.Button("Generate", variant="primary")
- tts_output1 = gr.Textbox(label="Output Message")
- tts_output2 = gr.Audio(label="Output Audio")
- tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input],
- [tts_output1, tts_output2])
- to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1])
- phoneme_list.click(None, [phoneme_list, phoneme_list_json], [],
- _js=f"""
- (i,phonemes) => {{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let text_input = root.querySelector("#tts-input{i}").querySelector("textarea");
- let startPos = text_input.selectionStart;
- let endPos = text_input.selectionEnd;
- let oldTxt = text_input.value;
- let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos);
- text_input.value = result;
- let x = window.scrollX, y = window.scrollY;
- text_input.focus();
- text_input.selectionStart = startPos + phonemes[i].length;
- text_input.selectionEnd = startPos + phonemes[i].length;
- text_input.blur();
- window.scrollTo(x, y);
- return [];
- }}""")
-
- app.queue(concurrency_count=3).launch(show_api=False)
diff --git a/spaces/ORI-Muchim/MinamiTTS/text/cleaners.py b/spaces/ORI-Muchim/MinamiTTS/text/cleaners.py
deleted file mode 100644
index 57d924f38f3c58bc53ac23aab3f5c58da2bf26f6..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/MinamiTTS/text/cleaners.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import re
-
-def japanese_cleaners(text):
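-    # romanize Japanese text with accent marks and make sure it ends with a sentence terminator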
- from text.japanese import japanese_to_romaji_with_accent
- text = japanese_to_romaji_with_accent(text)
- if len(text) == 0 or re.match('[A-Za-z]', text[-1]):
- text += '.'
- return text
-
-
-def japanese_cleaners2(text):
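-    # like japanese_cleaners, but also normalizes punctuation/brackets and maps 'ts' to the affricate 'ʦ'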
- text = text.replace('・・・', '…').replace('・', ' ')
- text = japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') \
- .replace('(', '').replace(')', '') \
- .replace('[', '').replace(']', '') \
- .replace('*', ' ').replace('{', '').replace('}', '')
- return text
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/Detectron1-Comparisons/README.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/Detectron1-Comparisons/README.md
deleted file mode 100644
index 924fd00af642ddf1a4ff4c4f5947f676134eb7de..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/Detectron1-Comparisons/README.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-Detectron2 model zoo's experimental settings and a few implementation details are different from Detectron.
-
-The differences in implementation details are shared in
-[Compatibility with Other Libraries](../../docs/notes/compatibility.md).
-
-The differences in model zoo's experimental settings include:
-* Use scale augmentation during training. This improves AP with lower training cost.
-* Use L1 loss instead of smooth L1 loss for simplicity. This sometimes improves box AP but may
- affect other AP.
-* Use `POOLER_SAMPLING_RATIO=0` instead of 2. This does not significantly affect AP.
-* Use `ROIAlignV2`. This does not significantly affect AP.
-
-In this directory, we provide a few configs that __do not__ have the above changes.
-They mimic Detectron's behavior as closely as possible,
-and provide a fair comparison of accuracy and speed against Detectron.
-
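-As a rough sketch (not one of the config files shipped in this directory), the same kind of
-Detectron1-style overrides could be applied on top of a standard Detectron2 config in Python;
-the exact smooth-L1 beta values below are assumptions:
-
-```python
-from detectron2 import model_zoo
-from detectron2.config import get_cfg
-
-cfg = get_cfg()
-# start from a standard COCO Faster R-CNN baseline from the model zoo
-cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml"))
-
-cfg.INPUT.MIN_SIZE_TRAIN = (800,)                 # disable scale augmentation
-cfg.MODEL.RPN.SMOOTH_L1_BETA = 1.0 / 9            # nonzero beta -> smooth L1 instead of L1 (assumed value)
-cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 1.0       # (assumed value)
-cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 2  # Detectron used 2, not 0
-cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlign"   # the original ROIAlign, not ROIAlignV2
-```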
-
-
-
-