diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md
deleted file mode 100644
index dadfb534428cac36a8f22b7cca9613c354eeff01..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX A Complete Review and Comparison.md
+++ /dev/null
@@ -1,131 +0,0 @@
-
-
Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX
-
Are you looking for a way to get Adobe Photoshop CC 2020 for free with full presets and unlimited access? If yes, then you are in the right place. In this article, I will show you how to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows. I will also show you how to use the presets to enhance your photos and create stunning effects. But before that, let me explain what Adobe Photoshop CC 2020 is and why you need a crack for it.
-
Adobe Photoshop CC 2020 Crack Full Presets (Mac et Windows) MacOSX
Adobe Photoshop CC 2020 is the latest version of the most popular and powerful photo editing software in the world. It is used by millions of professionals and amateurs alike to create, edit, and manipulate images, graphics, and artworks. Adobe Photoshop CC 2020 offers a variety of tools, features, and functions that allow you to unleash your creativity and transform your photos into amazing works of art.
-
Why do you need a crack for Adobe Photoshop CC 2020?
-
Adobe Photoshop CC 2020 is not a free software. It requires a subscription plan that costs $20.99 per month or $239.88 per year. That's quite expensive for many people who want to use it for personal or educational purposes. Moreover, even if you pay for the subscription, you will not get access to all the presets that are available in Adobe Photoshop CC 2020. Presets are predefined settings that apply certain effects or adjustments to your photos with one click. They can save you a lot of time and effort and help you achieve professional results quickly and easily.
-
That's why many people look for a crack for Adobe Photoshop CC 2020. A crack is a modified version of the software that bypasses the activation process and allows you to use it without paying anything. A crack also gives you access to all the presets that are included in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.
-
How to download and install Adobe Photoshop CC 2020 cracked version for Mac and Windows
-Adobe Photoshop CC 2020 full presets free download with crack for MacOSX and Windows
-Best tips and tricks for using Adobe Photoshop CC 2020 crack on Mac and Windows
-Adobe Photoshop CC 2020 crack serial number and activation key for Mac and Windows
-Adobe Photoshop CC 2020 crack features and benefits for Mac and Windows users
-Adobe Photoshop CC 2020 crack system requirements and compatibility for Mac and Windows
-Adobe Photoshop CC 2020 crack problems and solutions for Mac and Windows
-Adobe Photoshop CC 2020 crack vs original version comparison for Mac and Windows
-Adobe Photoshop CC 2020 crack tutorials and guides for Mac and Windows beginners
-Adobe Photoshop CC 2020 crack reviews and testimonials from Mac and Windows users
-Adobe Photoshop CC 2020 full presets pack download link for Mac and Windows
-How to use Adobe Photoshop CC 2020 full presets on Mac and Windows
-Adobe Photoshop CC 2020 full presets examples and inspiration for Mac and Windows designers
-Adobe Photoshop CC 2020 full presets advantages and disadvantages for Mac and Windows
-Adobe Photoshop CC 2020 full presets customization and optimization for Mac and Windows
-Adobe Photoshop CC 2020 full presets compatibility and integration with other software for Mac and Windows
-Adobe Photoshop CC 2020 full presets updates and support for Mac and Windows
-Adobe Photoshop CC 2020 full presets alternatives and competitors for Mac and Windows
-Adobe Photoshop CC 2020 full presets FAQs and answers for Mac and Windows
-Adobe Photoshop CC 2020 full presets feedback and suggestions for Mac and Windows developers
-How to get Adobe Photoshop CC 2020 crack full presets for free legally for Mac and Windows
-How to uninstall Adobe Photoshop CC 2020 crack full presets from Mac and Windows
-How to fix Adobe Photoshop CC 2020 crack full presets errors and bugs on Mac and Windows
-How to backup and restore Adobe Photoshop CC 2020 crack full presets on Mac and Windows
-How to upgrade from Adobe Photoshop CC 2020 crack full presets to the latest version for Mac and Windows
-How to transfer Adobe Photoshop CC 2020 crack full presets from one device to another for Mac and Windows
-How to share Adobe Photoshop CC 2020 crack full presets with others for Mac and Windows
-How to create amazing graphics with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to edit photos professionally with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to make logos, banners, flyers, posters, brochures, etc. with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to enhance your online presence with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to improve your SEO ranking with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to increase your conversions with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to generate more leads with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to grow your business with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to save time and money with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to avoid legal issues with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to protect your privacy with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to secure your data with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to prevent malware infections with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to optimize your performance with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to boost your creativity with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to learn new skills with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to have fun with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to impress your clients with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to collaborate with others with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to access exclusive resources with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to join a community of users with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-How to get help from experts with Adobe Photoshop CC 2020 crack full presets for Mac and Windows
-
What are the features of Adobe Photoshop CC 2020?
-
Adobe Photoshop CC 2020 has many features that make it the best photo editing software in the market. Some of these features are:
-
-
Content-Aware Fill: This feature allows you to remove unwanted objects or areas from your photos and fill them with matching content from other parts of the image. You can also control how the fill is done by using different sampling options and output settings.
-
Select Subject: This feature allows you to select any subject in your photo with one click. You can then refine the selection using other tools or apply adjustments or effects to it.
-
Select and Mask: This feature allows you to create precise selections and masks for complex subjects such as hair, fur, feathers, etc. You can use various tools such as brushes, refine edge, smooth edge, etc. to fine-tune your selection or mask.
-
Object Selection Tool: This feature allows you to select multiple objects in your photo by drawing a rectangle or a lasso around them. The tool will automatically detect and select the objects within the area you draw.
-
Enhance Details: This feature allows you to improve the details and sharpness of your photos by using artificial intelligence. It works especially well for raw images taken with high-resolution cameras.
-
Smart Objects: These are layers that contain image data from raster or vector images such as PSD, TIFF, JPEG, PDF, etc. You can transform, warp, filter, or edit these layers without losing quality or resolution.
-
Smart Filters: These are filters that can be applied to smart objects or other layers non-destructively. You can adjust, reorder, hide, or delete these filters at any time without affecting the original image.
-
Adjustment Layers: These are layers that apply color or tonal adjustments to your image without changing the pixel values. You can modify, mask, or blend these layers with other layers using different modes.
-
Layer Styles: These are effects that can be applied to layers such as drop shadow, stroke, gradient overlay, etc. You can customize these effects using various parameters and options.
-
Liquify Filter: This filter allows you to distort or reshape any part of your image using different tools such as warp, pucker, bloat, twirl, etc. You can also use face-aware liquify to adjust facial features such as eyes, nose, mouth, etc.
-
Perspective Warp: This feature allows you to change the perspective of your image by warping it along multiple planes. You can also use it to correct distortion or alignment issues caused by camera angle or lens.
-
Puppet Warp: This feature allows you to bend or twist any part of your image by placing pins on it. You can then drag these pins to deform the image as if it were made of rubber.
-
Crop Tool: This tool allows you to crop your image by dragging its edges or corners. You can also use different aspect ratios or presets to crop your image according to your needs.
-
Straighten Tool: This tool allows you to straighten your image by drawing a line along any edge or horizon in it. The tool will automatically rotate and crop your image accordingly.
-
Ruler Tool: This tool allows you to measure distances or angles in your image by drawing a line between two points. You can also use it to align objects or layers in your image.
-
-
These are just some of the features of Adobe Photoshop CC 2020. There are many more features that you can explore and use in this software.
-
How to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows?
-
If you want to get Adobe Photoshop CC 2020 for free with full presets and unlimited access, then follow these steps:
-
Step 1: Download the crack file from the link below
-
The first thing you need to do is download the crack file from this link: https://bit.ly/AdobePhotoshopCC2020Crack. This link will take you to a Google Drive folder where you will find two files: one for Mac users and one for Windows users. Choose the file according to your operating system and click on download.
-
Step 2: Extract the file using WinRAR or 7-Zip
-
The next thing you need to do is extract the file using WinRAR or 7-Zip. These are free software that can unzip compressed files easily. You can download them from their official websites: https://www.win-rar.com/ (for WinRAR) or https://www.7-zip.org/ (for 7-Zip). After downloading them, install them on your computer and then right-click on the crack file you downloaded in step 1 and choose "Extract here" or "Extract files".
-
Step 3: Run the setup file and follow the instructions
-
After extracting the file, you will see a folder named "Adobe Photoshop CC 2020 Crack Full Presets". Open this folder and double-click on the setup file. This will launch the installation wizard that will guide you through the process. Follow the instructions on the screen and accept the terms and conditions. Choose the destination folder where you want to install the software and click on install.
-
Step 4: Copy and paste the crack file into the installation folder
-
Once the installation is complete, do not run the software yet. Go back to the folder where you extracted the crack file and open it. You will see another folder named "Crack". Open this folder and copy the file named "amtlib.dll". Then go to the installation folder where you installed Adobe Photoshop CC 2020. The default location is "C:\Program Files\Adobe\Adobe Photoshop CC 2020" for Windows users and "/Applications/Adobe Photoshop CC 2020" for Mac users. Paste the crack file into this folder and replace the original file.
-
Step 5: Enjoy Adobe Photoshop CC 2020 with full presets and unlimited access
-
Congratulations! You have successfully installed Adobe Photoshop CC 2020 crack full presets for Mac and Windows. Now you can run the software and enjoy all its features and functions without any limitations or restrictions. You can also access all the presets that are available in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.
-
How to use Adobe Photoshop CC 2020 crack full presets for Mac and Windows?
-
Now that you have Adobe Photoshop CC 2020 crack full presets for Mac and Windows, you might be wondering how to use them to enhance your photos and create stunning effects. Here are some tips and tricks that will help you use the presets effectively:
-
How to access the presets in Adobe Photoshop CC 2020?
-
To access the presets in Adobe Photoshop CC 2020, you need to open the Preset Manager. To do this, go to Edit > Presets > Preset Manager. This will open a window where you can see all the presets that are available in Adobe Photoshop CC 2020. You can also add, delete, rename, or organize your presets using this window.
-
The presets are divided into different categories such as Brushes, Patterns, Gradients, Styles, etc. You can choose any category from the drop-down menu at the top of the window. You can also use the search box to find a specific preset by typing its name or keyword.
-
How to apply the presets to your photos in Adobe Photoshop CC 2020?
-
To apply a preset to your photo in Adobe Photoshop CC 2020, you need to select it from the Preset Manager and then click on Load. This will load the preset into your current document. You can then use it as you normally would with any other tool or feature in Adobe Photoshop CC 2020.
-
For example, if you want to apply a brush preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the brush preset into your Brush Tool. You can then use it to paint on your photo with different colors, sizes, shapes, etc.
-
If you want to apply a pattern preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the pattern preset into your Pattern Stamp Tool. You can then use it to stamp on your photo with different modes, opacity, alignment, etc.
-
If you want to apply a gradient preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the gradient preset into your Gradient Tool. You can then use it to fill or stroke your photo with different colors, angles, styles, etc.
-
If you want to apply a style preset to your photo, you need to select it from the Preset Manager and then click on Load. This will load the style preset into your Layer Style dialog box. You can then apply it to any layer in your photo with different options such as blending mode, opacity, scale, etc.
-
How to create your own presets in Adobe Photoshop CC 2020?
-
If you want to create your own presets in Adobe Photoshop CC 2020, you need to follow these steps:
-
-
Create or edit your photo using any tool or feature in Adobe Photoshop CC 2020.
-
Select or create a new layer that contains your desired effect or adjustment.
-
Go to Edit > Presets > Preset Manager.
-
Choose the category that matches your effect or adjustment from the drop-down menu at the top of the window.
-
Click on Save Set and give a name to your preset.
-
Click on OK and close the Preset Manager window.
-
-
You have now created your own preset in Adobe Photoshop CC 2020. You can access it anytime from the Preset Manager and apply it to any photo you want.
-
How to share your presets with others in Adobe Photoshop CC 2020?
-
If you want to share your presets with others in Adobe Photoshop CC 2020, you need to follow these steps:
-
-
Go to Edit > Presets > Preset Manager.
-
Choose the category that contains your preset from the drop-down menu at the top of the window.
-
Select your preset from the list of presets.
-
Click on Save Set and choose a location where you want to save your preset file.
-
Click on OK and close the Preset Manager window.
-
-
You have now saved your preset as a file that can be shared with others. You can send this file via email, social media, cloud storage, etc. To load this file into another computer or device, simply copy it into its corresponding folder in Adobe Photoshop CC 2020's installation directory.
-
Conclusion
-
Summary of the main points
-
In this article, I have shown you how to download and install Adobe Photoshop CC 2020 crack full presets for Mac and Windows. I have also shown you how to use them to enhance your photos and create stunning effects. By using this crack version of Adobe Photoshop CC 2020, you can enjoy all its features and functions without paying anything or having any limitations or restrictions. You can also access all the presets that are available in Adobe Photoshop CC 2020, as well as some additional ones that are not available in the official version.
-
Call to action and recommendation
-
If you want to get Adobe Photoshop CC 2020 for free with full presets and unlimited access, then don't wait any longer. Click on this link https://bit.ly/AdobePhotoshopCC2020Crack
-and download the crack file now. Follow the instructions in this article and install the software on your computer or device. You will be amazed by what you can do with Adobe Photoshop CC 2020 crack full presets for Mac and Windows. However, I must warn you that using a crack version of any software is illegal and unethical. You may face legal consequences or security risks if you do so. Therefore, I recommend that you use this crack version only for educational or personal purposes and not for commercial or professional purposes. If you like Adobe Photoshop CC 2020 and want to support its developers, then please buy the original version from their official website https://www.adobe.com/products/photoshop.html. You will get regular updates, customer support, and peace of mind. Thank you for reading this article. I hope you found it useful and informative. If you have any questions or feedback, please leave them in the comments section below. I would love to hear from you.
0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md
deleted file mode 100644
index 0b7a8b14e993c625c84e0b33b59813521c8a821c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PowerPoint Ed Version and Enjoy Its Unique Features for Inclusive and Collaborative Classrooms.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
How to Download PowerPoint Ed Version for Free
-
PowerPoint Ed is a powerful and easy-to-use tool that allows you to create and edit stunning presentations with animations, transitions, and multimedia. PowerPoint Ed is compatible with Microsoft PowerPoint and can open and save files in various formats, including PPT, PPTX, PDF, and HTML.
Click on the "Download" button on the top right corner of the homepage.
-
Select your preferred language and operating system from the drop-down menus.
-
Click on the "Download Now" button and wait for the installation file to be downloaded.
-
Run the installation file and follow the instructions on the screen to complete the installation.
-
Launch PowerPoint Ed and enjoy creating amazing presentations!
-
-
PowerPoint Ed is a free software that does not require any registration or activation. However, if you want to access more features and templates, you can upgrade to the premium version for a small fee. You can also join the PowerPoint Ed community and share your feedback, suggestions, and ideas with other users.
-
Download PowerPoint Ed version today and unleash your creativity!
-
Why Choose PowerPoint Ed Version?
-
PowerPoint Ed version is not just a regular PowerPoint software. It is a special edition that is designed for education and learning purposes. PowerPoint Ed version has many features that make it stand out from other presentation tools. Here are some of the reasons why you should choose PowerPoint Ed version:
-
-
PowerPoint Ed version is free for students and educators at eligible institutions. You can sign up with your valid school email address and get access to Word, Excel, PowerPoint, OneNote, and Microsoft Teams, plus additional classroom tools.
-
PowerPoint Ed version supports inclusive classrooms with features such as **Presenter Coach**, **PowerPoint Live**, and **live captioning**. Presenter Coach helps you practice your presentation skills and gives you feedback on your pace, pitch, filler words, and more. PowerPoint Live lets you present online to your audience with interactive features such as polls, chat, and reactions. Live captioning allows you to display subtitles in different languages for your audience.
-
PowerPoint Ed version allows you to create flipped instruction with **PowerPoint Recorder**. You can record your voice, video, and annotations on your slides and share them with your students. You can also embed quizzes and surveys to check their understanding and engagement.
-
PowerPoint Ed version helps you create accessible content with Microsoft 365. You can use features such as **Accessibility Checker**, **Alt Text**, **Immersive Reader**, and **Translate** to make sure your presentations are readable and understandable for everyone.
-
PowerPoint Ed version lets you collaborate with your classmates and teachers in real time. You can co-author presentations, share comments and feedback, and sync your files across devices with OneDrive and Teams.
-
PowerPoint Ed version gives you access to the Office Premium Creative Content collection, which includes thousands of high-quality stock images, icons, videos, fonts, and templates that you can use in your presentations.
-
-
With PowerPoint Ed version, you can create powerful presentations that showcase your creativity and knowledge. Whether you are a student or a teacher, PowerPoint Ed version can help you achieve your learning goals.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md b/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md
deleted file mode 100644
index 38f380a8f13742b19550b163c64ba17845337e12..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Autocount Accounting Keygen [Extra Quality].md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-
-I haven't had any problems except when I went to California last year. I was going to stay for a month and then go back home. I asked the girl in the hotel desk if it was OK to leave the car there as I didn't want to pay for parking. She said she didn't know how it would be treated. She called the police and when they came and told me to move the car she said "how do you know it's your car." We had to move it down the street to a lot where I have to pay for parking. I am unsure of my rights as to where I am able to leave my car but I am told that if I were to leave it at the hotel I would have to pay for parking. I have been a loyal customer since 1972. It still runs and drives. I have tried to buy a new one but I can't find one that good.
-
-Last edited by blvader on Sat Feb 16, 2012 8:31 pm, edited 1 time in total.
-
-Best thing you can do is buy a new one - even if you pay full retail or even a little over that. A new car is a life-changing purchase. Of course, the happy news is that the auto brand is likely to be better than the last one.
-
-If you think about the condition of the car (and that you've had it for so long) you may be able to get the car owner to buy the car back from you (less than full retail, I know). But you won't get much for it even if you did.
-
-I'd check the N.H. State DMVs website, www.mvs.com (if you're in New Hampshire). There should be an up-to-date vehicle registration history on file. While I can't vouch for the accuracy of the data, the car you've had it for so long may not have had a lot of miles on it. In any event, DMV data is a good first look for such a thing.
-
-This is a really high-quality fix-it-yourself project, but I think it's so, so important to be thorough (especially when you're dealing with a brand new car!) that I'd love to see your work:
-
-Who is online
-
-Users browsing this forum: No registered users and 1 guest
-
-You cannot post new topics in this forumYou cannot reply to topics in this forumYou cannot edit your posts 4fefd39f24
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md
deleted file mode 100644
index 612883abb7a63919e0e32b3e5acfdfeacf6366c4..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 4 Movie and Stream Online - HD 4K Quality Available.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
How to Download 4 Movie for Free in 2023
-
If you are a movie lover, you might have heard of the term "4 movie". It is a new video format that offers higher resolution, better compression, and more features than the standard MP4 format. It is also compatible with most devices and platforms, making it ideal for watching movies on the go.
-
But how can you download 4 movie for free in 2023? Is it legal and safe to do so? What are the best sites to find and download your favorite movies in this format? In this article, we will answer these questions and more. We will also show you how to use ByClick Downloader, a powerful tool that can help you download any movie from any site with just one click.
One of the main advantages of downloading movies in the new format is that they offer better quality than MP4 movies. The resolution of a typical MP4 movie is usually around HD (1280 x720 pixels), while a typical movie can go up to UHD (3840 x2160 pixels). This means that you can enjoy more details, sharper images, and smoother motion when watching movies.
-
Another benefit of downloading movies in this format is that they have better audio quality. The format supports Dolby Atmos, a surround sound technology that creates a realistic and immersive sound experience. You can hear sounds coming from different directions, such as above, below, behind, or in front of you.
-
Offline access and convenience
-
Another reason why you might want to download movies in this format is that you can watch them offline anytime and anywhere. You don't need an internet connection or a streaming service subscription to enjoy your favorite movies. You can save them on your computer, smartphone, tablet, or external hard drive and watch them whenever you want.
-
Downloading movies also gives you more control over your viewing experience. You can pause, rewind, fast-forward, or skip scenes without any buffering or interruptions. You can also adjust the brightness, volume, subtitles, or language settings according to your preferences.
-
No subscription fees or ads
-
A third benefit of downloading movies in this format is that you don't have to pay any subscription fees or watch any ads. Unlike streaming services that charge you monthly or yearly fees to access their content library, downloading movies allows you to watch them for free. You can also avoid annoying ads that interrupt your viewing pleasure or collect your personal data.
-
Downloading movies also gives you more freedom and choice over what you want to watch. You don't have to rely on the availability or selection of streaming services
Risks of Downloading 4 Movie
-
Legal issues and copyright infringement
-
While downloading movies in this format may seem tempting, you should also be aware of the potential risks involved. One of the biggest risks is that you may be breaking the law and infringing on the rights of the movie creators and distributors. Downloading movies from unauthorized sources is considered piracy, which is illegal in most countries and can result in fines, lawsuits, or even jail time.
-
Therefore, you should always check the legality and legitimacy of the sites and sources that you use to download movies. You should also respect the intellectual property and creative efforts of the movie makers and support them by paying for their work or watching them on legal platforms.
-
download 4 movie in HD quality
-download 4 movie for free online
-download 4 movie with subtitles
-download 4 movie from YouTube
-download 4 movie using torrent
-download 4 movie on mobile
-download 4 movie in MP4 format
-download 4 movie full length
-download 4 movie without registration
-download 4 movie fast and easy
-download 4 movie latest release
-download 4 movie in Hindi dubbed
-download 4 movie with English audio
-download 4 movie from Netflix
-download 4 movie legally and safely
-download 4 movie in 1080p resolution
-download 4 movie with high speed
-download 4 movie from best site
-download 4 movie in dual audio
-download 4 movie with low data usage
-download 4 movie in different genres
-download 4 movie offline mode
-download 4 movie with one click
-download 4 movie from Amazon Prime Video
-download 4 movie no ads or pop-ups
-download 4 movie in Blu-ray quality
-download 4 movie with VPN service
-download 4 movie from Google Drive
-download 4 movie in original language
-download 4 movie with bonus features
-download 4 movie in small size
-download 4 movie on PC or laptop
-download 4 movie with direct link
-download 4 movie from Disney Plus
-download 4 movie no sign up required
-download 4 movie in ultra HD quality
-download 4 movie with multiple options
-download 4 movie from Hulu
-download 4 movie in any region or country
-download 4 movie with good sound quality
-download 4 movie in various formats
-download 4 movie on smart TV or streaming device
-download 4 movie with reliable source
-download 4 movie from HBO Max
-download 4 movie no virus or malware risk
-download 4 movie in HDR quality
-download 4 movie with user-friendly interface
-download 4 movie from Apple TV Plus
-download 4 movie no credit card needed
-
Malware and viruses
-
Another risk of downloading movies in this format is that you may expose your device and data to malware and viruses. Some of the sites and sources that offer free movie downloads may contain malicious software or links that can harm your computer, smartphone, tablet, or external hard drive. They can infect your device with spyware, ransomware, trojans, worms, or other types of malware that can steal your personal information, damage your files, or lock your device.
-
Therefore, you should always be careful and cautious when downloading movies from unknown or suspicious sites and sources. You should also use a reliable antivirus software and firewall to protect your device and data from malware and viruses. You should also scan your downloaded files before opening them or transferring them to other devices.
-
Data consumption and storage space
-
A third risk of downloading movies in this format is that you may consume a lot of data and storage space. Because movies in this format have higher resolution and quality than MP4 movies, they also have larger file sizes. A typical movie can range from 1 GB to 10 GB or more, depending on the length and quality of the movie. This means that you may need a fast and stable internet connection to download them without any issues or delays.
-
Downloading movies in this format also requires a lot of storage space on your device or external hard drive. If you download too many movies, you may run out of space or slow down your device performance. Therefore, you should always check the file size and storage capacity before downloading movies in this format. You should also delete or transfer the movies that you don't need or watch anymore to free up some space.
-
Best Free Movie Download Sites for 4 Movie in 2023
-
YouTube
-
One of the best and easiest ways to download movies in this format for free is to use YouTube. YouTube is the most popular video-sharing platform in the world, where you can find millions of videos, including movies, trailers, clips, documentaries, and more. You can also find many channels and playlists that offer movies in this format for free.
-
To download movies from YouTube, you can use a simple online tool called YouTube Downloader. This tool allows you to download any video from YouTube in various formats, including 4 movie. All you have to do is copy and paste the URL of the video that you want to download into the tool's input box and click on the "Download" button. You can then choose the format and quality that you want and save the file on your device.
-
EZTV
-
Another option to download movies in this format for free is to use EZTV. EZTV is one of the most popular torrent sites for downloading TV shows and movies. You can find a wide range of genres, categories, languages, and qualities on this site, including 4 movie. You can also find the latest releases and updates on this site.
-
To download movies from EZTV, you need to use a torrent client software such as BitTorrent or uTorrent. This software allows you to download files from other users who are sharing them on the network. You also need to use a VPN service such as NordVPN or ExpressVPN to hide your IP address and encrypt your traffic. This way, you can avoid any legal issues or malware threats when downloading movies from torrent sites.
-
FZMovies
-
A third option to download movies in this format for free is to use FZMovies. FZMovies is a dedicated site for downloading Bollywood and Hollywood movies in various formats, including 4 movie. You can find a huge collection of movies on this site, ranging from old classics to new blockbusters. You can also search for movies by genre, year, actor, director, or keyword.
-
To download movies from FZMovies, you just need to visit the site and browse through its categories or use its search function. You can then click on the movie that you want to download and choose the format and quality that you want. You can then click on the download link and save the file on your device.
-
How to Download 4 Movie from Any Site with ByClick Downloader (Recommended)
-
What is ByClick Downloader and why you should use it
-
If you want to download movies in this format from any site with ease and convenience, you should use ByClick Downloader. ByClick Downloader is a powerful and versatile tool that can help you download any video from any site with just one click. You can download videos from YouTube, Facebook, Instagram, Twitter, Vimeo, Dailymotion, and more. You can also download videos in various formats, including 4 movie.
-
ByClick Downloader has many features and advantages that make it the best choice for downloading movies in this format. Some of them are:
-
-
It supports multiple downloads and batch downloads, so you can download several movies at once or create a download list.
-
It has a smart auto-detect feature that detects any video that you are watching on your browser and offers you to download it with one click.
-
It has a built-in video converter that allows you to convert any video to any format, including 4 movie.
-
It has a fast and reliable download speed that ensures that you get your movies in the shortest time possible.
-
It has a user-friendly and intuitive interface that makes it easy to use and navigate.
-
-
How to install and use ByClick Downloader
-
To install and use ByClick Downloader, you just need to follow these simple steps:
-
-
Visit the official website of ByClick Downloader and click on the "Download" button to get the setup file.
-
Run the setup file and follow the instructions to install the tool on your device.
-
Open the tool and choose the format and quality that you want for your downloads. You can also customize other settings such as download location, notifications, subtitles, etc.
-
Go to any site that has the movie that you want to download and play the video. You will see a pop-up window that offers you to download the video with one click. You can also copy and paste the URL of the video into the tool's input box and click on the "Download" button.
-
Wait for the download to finish and enjoy your movie.
-
-
Features and advantages of ByClick Downloader
-
By using ByClick Downloader, you can enjoy many features and advantages that make it the best tool for downloading movies in this format. Some of them are:
-
-
You can download movies in this format from any site with just one click, without any hassle or complication.
-
You can download movies in this format in high quality, up to UHD resolution.
-
You can download movies in this format with Dolby Atmos sound, for a realistic and immersive audio experience.
-
You can download movies in this format with subtitles, in any language that you want.
-
You can download movies in this format with metadata, such as title, genre, director, cast, etc.
-
You can download movies in this format with thumbnails, for a better preview and organization of your files.
-
-
Conclusion
-
In conclusion, downloading movies in this format is a great way to enjoy high-quality video and audio, offline access and convenience, and no subscription fees or ads. However, you should also be aware of the risks of downloading movies in this format, such as legal issues and copyright infringement, malware and viruses, and data consumption and storage space. You should always check the legality and legitimacy of the sites and sources that you use to download movies, and use a reliable antivirus software and firewall to protect your device and data from malware and viruses. You should also check the file size and storage capacity before downloading movies in this format, and delete or transfer the movies that you don't need or watch anymore to free up some space.
-
The best way to download movies in this format for free in 2023 is to use ByClick Downloader. This tool can help you download any movie from any site with just one click. You can also download movies in various formats, including 4 movie. By using ByClick Downloader, you can enjoy many features and advantages that make it the best tool for downloading movies in this format. You can download movies in high quality, up to UHD resolution. You can also download movies with Dolby Atmos sound, subtitles, metadata, thumbnails, and more. You can also convert any video to any format, including 4 movie. You can also enjoy fast and reliable download speed, multiple downloads and batch downloads, smart auto -detect feature, and a user-friendly and intuitive interface.
-
So, what are you waiting for? Download ByClick Downloader today and start downloading your favorite movies in this format for free in 2023. You will not regret it!
-
FAQs
-
What is the difference between MP4 and 4 movie?
-
MP4 is a standard video format that is widely used and supported by most devices and platforms. It has good quality and compression, but it also has some limitations and drawbacks. For example, it does not support high dynamic range (HDR), which enhances the contrast and color of the video. It also does not support Dolby Atmos, which creates a surround sound effect. It also has a fixed frame rate, which can cause stuttering or judder when playing videos with different frame rates.
-
4 movie is a new video format that is designed to overcome the limitations and drawbacks of MP4. It supports HDR, Dolby Atmos, and variable frame rate (VFR), which adapts to the frame rate of the video. It also has better quality and compression than MP4, which means that it can deliver higher resolution and quality with smaller file sizes.
-
How can I watch 4 movie on my TV or mobile device?
-
To watch movies in this format on your TV or mobile device, you need to make sure that your device supports this format. You can check the specifications or settings of your device to see if it supports this format. If your device does not support this format, you can use a video converter software such as ByClick Downloader to convert the movie to a compatible format such as MP4.
-
You also need to transfer the movie from your computer to your device using a USB cable, a wireless connection, or a cloud service. You can then use a media player app such as VLC or MX Player to play the movie on your device.
-
How can I convert other video formats to 4 movie?
-
To convert other video formats to this format, you can use a video converter software such as ByClick Downloader. This software can help you convert any video to any format, including 4 movie. You just need to add the video that you want to convert to the software, choose the format and quality that you want, and click on the "Convert" button. You can then save the converted file on your device or share it with others.
-
Is it safe to download movies from torrent sites?
-
Downloading movies from torrent sites is not recommended, as it may pose some risks and dangers. Some of these risks are:
-
-
You may be breaking the law and infringing on the rights of the movie creators and distributors, which can result in legal consequences such as fines, lawsuits, or jail time.
-
You may expose your device and data to malware and viruses, which can harm your device, steal your personal information, damage your files, or lock your device.
-
You may consume a lot of data and bandwidth, which can affect your internet speed and performance.
-
You may encounter fake or corrupted files, which can waste your time and space.
-
-
Therefore, you should avoid downloading movies from torrent sites and use legal and safe sources instead.
-
How can I avoid legal issues when downloading movies?
-
To avoid legal issues when downloading movies, you should follow these tips:
-
-
Always check the legality and legitimacy of the sites and sources that you use to download movies. You should only use sites that have proper licenses and permissions to distribute movies.
-
Always respect the intellectual property and creative efforts of the movie makers and support them by paying for their work or watching them on legal platforms.
-
Always use a VPN service such as NordVPN or ExpressVPN to hide your IP address and encrypt your traffic when downloading movies from questionable sources. This way, you can avoid being tracked or monitored by authorities or hackers.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md
deleted file mode 100644
index b4c156a66e74ed357ae6818a6610bb420d3bc590..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Chicken Gun New Update Mod Apk and Enjoy the Fun.md
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
Chicken Gun New Update Mod APK: Everything You Need to Know
-
If you are looking for a fun and addictive FPS game that will make you laugh out loud, then you should try Chicken Gun. This game lets you play as a chicken with a gun and fight against other chickens in various modes and maps. You can customize your chicken with different skins, hats, glasses, and weapons, and enjoy the hilarious physics and animations. In this article, we will tell you everything you need to know about the latest update of Chicken Gun and how to download the mod apk version that gives you unlimited money, gems, and more.
Chicken Gun is a multiplayer FPS game developed by ChaloApps. The game has a simple premise: you are a chicken with a gun, and you have to shoot other chickens in various modes and maps. You can play online with up to 10 players in team deathmatch, free for all, capture the flag, or zombie mode. You can also play offline with bots or with your friends on the same device.
-
Features of Chicken Gun
-
Chicken Gun has many features that make it a unique and entertaining game. Some of them are:
-
-
Over 50 different weapons to choose from, including pistols, rifles, shotguns, grenades, rockets, flamethrowers, and more.
-
Over 20 different skins to customize your chicken, such as pirate, ninja, clown, astronaut, cowboy, and more.
-
Over 10 different hats, glasses, and masks to accessorize your chicken, such as sunglasses, helmets, bandanas, goggles, and more.
-
Over 10 different maps to explore and fight in, such as farm, city, desert, snow, space, and more.
-
Funny physics and animations that make the game hilarious and chaotic.
-
Chat system to communicate with other players.
-
Leaderboards and achievements to compete with other players.
-
-
What is new in the latest update?
-
New maps, weapons, and skins
-
The latest update of Chicken Gun brings some new content to the game. There are two new maps: prison and airport. There are also two new weapons: crossbow and minigun. And there are four new skins: prisoner, pilot, cop, and soldier.
-
Improved graphics and performance
-
The latest update also improves the graphics and performance of the game. The game now supports HD resolution and has better lighting effects. The game also runs smoother and faster on most devices.
-
Bug fixes and balance changes
-
The latest update also fixes some bugs and glitches that were affecting the gameplay. For example, some weapons were not working properly or had incorrect stats. The update also balances some weapons and modes to make them more fair and fun.
-
Why should you download the mod apk?
-
Unlimited money and gems
-
The mod apk version of Chicken Gun gives you unlimited money and gems. You can use them to buy any weapon, skin, hat, or glass you want. You can also upgrade your weapons to make them more powerful and effective.
-
chicken gun mod apk latest version download
-chicken gun unlimited money and gems mod apk
-chicken gun 3.3.01 mod apk free download
-chicken gun fps shooter mod apk android
-chicken gun hack mod apk no root
-chicken gun online multiplayer mod apk
-chicken gun mod menu apk download
-chicken gun mod apk all weapons unlocked
-chicken gun mod apk offline mode
-chicken gun mod apk unlimited ammo and health
-chicken gun 2d pixel shooter mod apk
-chicken gun mod apk revdl
-chicken gun mod apk rexdl
-chicken gun mod apk happymod
-chicken gun mod apk an1
-chicken gun zombie mode mod apk
-chicken gun battle royale mod apk
-chicken gun sandbox mode mod apk
-chicken gun custom skins mod apk
-chicken gun pro pack mod apk
-chicken gun new maps and modes mod apk
-chicken gun 3d graphics mod apk
-chicken gun voice chat mod apk
-chicken gun ragdoll physics mod apk
-chicken gun funny moments mod apk
-chicken gun best guns and items mod apk
-chicken gun tips and tricks mod apk
-chicken gun cheats and hacks mod apk
-chicken gun gameplay and review mod apk
-chicken gun how to install mod apk
-
Unlock all items and modes
-
The mod apk version also unlocks all items and modes in the game. You can access any map or mode without having to level up or complete
any challenge. You can also use any item without having to wait for the cooldown or reload time.
-
No ads and root required
-
The mod apk version also removes all the annoying ads that pop up in the game. You can enjoy the game without any interruption or distraction. Moreover, the mod apk does not require root access to work. You can install it on any device without any risk or hassle.
-
How to download and install the mod apk?
-
Step 1: Download the mod apk file from a trusted source
-
The first step is to download the mod apk file from a trusted source. You can find many websites that offer the mod apk file, but be careful of fake or malicious ones. We recommend you to use this link to download the mod apk file safely and securely.
-
Step 2: Enable unknown sources on your device
-
The second step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.
-
Step 3: Install the mod apk and enjoy the game
-
The final step is to install the mod apk and enjoy the game. To do this, locate the mod apk file in your device storage, then tap on it and follow the instructions. Once the installation is done, you can open the game and start playing with unlimited money, gems, and more.
-
Conclusion
-
Chicken Gun is a hilarious and chaotic FPS game that will make you laugh out loud. You can play as a chicken with a gun and fight against other chickens in various modes and maps. You can customize your chicken with different skins, hats, glasses, and weapons, and enjoy the funny physics and animations. The latest update of Chicken Gun brings some new content and improvements to the game, such as new maps, weapons, skins, graphics, performance, bug fixes, and balance changes. If you want to have more fun and advantages in the game, you should download the mod apk version that gives you unlimited money, gems, and more. You can download the mod apk file from this link and install it on your device easily and safely. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments below.
-
FAQs
-
Q: Is Chicken Gun free to play?
-
A: Yes, Chicken Gun is free to play. You can download it from the Google Play Store or the App Store for free. However, some items and modes may require real money to unlock or use.
-
Q: Is Chicken Gun safe to play?
-
A: Yes, Chicken Gun is safe to play. The game does not contain any harmful or inappropriate content for children or adults. The game is rated 12+ on the Google Play Store and 9+ on the App Store for mild violence and crude humor.
-
Q: Is Chicken Gun offline or online?
-
A: Chicken Gun can be played both offline and online. You can play offline with bots or with your friends on the same device. You can also play online with up to 10 players in team deathmatch, free for all, capture the flag, or zombie mode.
-
Q: How can I contact the developers of Chicken Gun?
-
A: You can contact the developers of Chicken Gun by sending them an email at chaloapps@gmail.com or by following them on their social media accounts such as Facebook, Instagram, or YouTube. You can also leave a review or a rating on the Google Play Store or the App Store to share your feedback or suggestions.
-
Q: How can I support the developers of Chicken Gun?
-
A: You can support the developers of Chicken Gun by buying some items or modes in the game with real money. This will help them cover their costs and improve their game. You can also share their game with your friends and family and invite them to play with you.
- : https://www.apkdone.com/chicken-gun/ : https://www.facebook.com/ChaloApps : https://www.instagram.com/chaloapps/ : https://www.youtube.com/channel/UC9w8mpVvRdWRs1b8whPLnxg 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md b/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md
deleted file mode 100644
index b6a79414db30faa4a9c2b4b56fba8e6390ed31af..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Bullet Echo Mod APK v5.2.4 and Enjoy Unlimited Resources.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
-
-
-
-
Bullet Echo Mod APK 5.2.4: A Tactical Shooter Game with Unlimited Money
-
Introduction
-
If you are looking for a thrilling and challenging shooter game that tests your skills and tactics, then you should try Bullet Echo Mod APK 5.2.4. This is a modified version of the original Bullet Echo game developed by ZeptoLab, which is known for creating popular games like Cut the Rope and King of Thieves.
Bullet Echo Mod APK 5.2.4 is a tactical shooter game that puts you in a dark battlefield where you have to rely on your senses and strategy to survive and eliminate your enemies. You can choose from over 80 heroes with unique abilities and weapons, and team up with other players in real-time multiplayer matches.
-
Some of the features of Bullet Echo Mod APK 5.2.4 are:
Here are some of the benefits of Bullet Echo Mod APK 5.2.4:
-
bullet echo mod apk 5.2.4 unlimited money and coins
-bullet echo mod apk 5.2.4 latest version download
-bullet echo mod apk 5.2.4 free shopping and upgrades
-bullet echo mod apk 5.2.4 unlocked all characters and weapons
-bullet echo mod apk 5.2.4 no ads and no root
-bullet echo mod apk 5.2.4 hack cheats and tips
-bullet echo mod apk 5.2.4 gameplay and review
-bullet echo mod apk 5.2.4 online multiplayer mode
-bullet echo mod apk 5.2.4 best settings and graphics
-bullet echo mod apk 5.2.4 android and ios compatible
-bullet echo mod apk 5.2.4 new features and updates
-bullet echo mod apk 5.2.4 how to install and use
-bullet echo mod apk 5.2.4 download link and mirror
-bullet echo mod apk 5.2.4 safe and secure
-bullet echo mod apk 5.2.4 bugs and issues fixed
-bullet echo mod apk 5.2.4 high damage and speed
-bullet echo mod apk 5.2.4 offline and single player mode
-bullet echo mod apk 5.2.4 ranking and rewards system
-bullet echo mod apk 5.2.4 custom maps and modes
-bullet echo mod apk 5.2.4 voice chat and team play
-bullet echo mod apk 5.2.4 support and feedback
-bullet echo mod apk 5.2.4 original vs mod comparison
-bullet echo mod apk 5.2.4 fun and addictive gameplay
-bullet echo mod apk 5.2.4 realistic physics and sound effects
-bullet echo mod apk 5.2.4 different classes and skills
-bullet echo mod apk 5.2.4 strategy and tactics guide
-bullet echo mod apk 5.2.4 weapons and equipment list
-bullet echo mod apk 5.2.4 skins and customization options
-bullet echo mod apk 5.2.4 missions and challenges mode
-bullet echo mod apk 5.2.4 leaderboards and achievements
-bullet echo mod apk 5.2.4 codes and coupons
-bullet echo mod apk 5.2.4 trivia and facts
-bullet echo mod apk 5.2.4 fan art and wallpapers
-bullet echo mod apk 5.2.4 community and forums
-bullet echo mod apk 5.2.4 developer and publisher information
-bullet echo mod apk 5.2.4 rating and reviews
-bullet echo mod apk 5.2.4 similar games and apps
-bullet echo mod apk 5.2.4 frequently asked questions (FAQ)
-bullet echo mod apk 5.2.4 pros and cons
-bullet echo mod apk 5
-
-
You can enjoy unlimited money to buy and upgrade your weapons and equipment.
-
You can access the mod menu to enable or disable various features such as god mode, unlimited ammo, no recoil, and more.
-
You can play with other players from around the world and chat with them in the game.
-
You can experience high-quality graphics and sound effects that create an immersive atmosphere.
-
You can have fun with different game modes and challenges that test your skills and tactics.
-
-
So, if you are ready to join the action and become the best shooter in the dark, then you should download Bullet Echo Mod APK 5.2.4 right now. You will not regret it!
-
How to Download and Install Bullet Echo Mod APK 5.2.4 on Android Devices
-
Downloading and installing Bullet Echo Mod APK 5.2.4 on your Android device is very easy and simple. Just follow these steps:
-
-
Enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Download the Bullet Echo Mod APK 5.2.4 file from a trusted source. You can use the link below to get it.
-
Locate and install the Bullet Echo Mod APK 5.2.4 file on your device. You can use a file manager app to find it in your downloads folder.
-
Launch the game and enjoy unlimited money and mod menu.
-
-
That's it! You have successfully installed Bullet Echo Mod APK 5.2.4 on your Android device. Now you can play the game with all the advantages and features that it offers.
-
How to Play Bullet Echo Mod APK 5.2.4
-
Bullet Echo Mod APK 5.2.4 is a game that requires skill, strategy, and teamwork to win. Here are some tips on how to play it:
-
-
Choose your hero and team up with other players. You can select from over 80 heroes with different abilities and weapons, such as snipers, assaulters, healers, and more. You can also join or create a squad with your friends or other players online.
-
Use your skills and tactics to defeat your enemies. You have to rely on your senses and strategy to survive and eliminate your enemies in the dark battlefield. You can use your flashlight to see, but be careful not to expose yourself too much. You can also use your hero's abilities to gain an edge over your opponents, such as invisibility, shields, grenades, etc.
-
Collect loot and upgrade your weapons and equipment. You can find various items on the map, such as ammo, health kits, armor, and more. You can also use the money you earn from winning matches to buy and upgrade your weapons and equipment in the shop.
-
Compete in various modes and rank up on the leaderboard. You can play in different game modes, such as Team vs Team, Solo, and Battle Royale. Each mode has its own rules and objectives, so you have to adapt your strategy accordingly. You can also earn points and rank up on the leaderboard by winning matches and completing missions.
-
-
Tips and Tricks for Bullet Echo Mod APK 5.2.4
-
Bullet Echo Mod APK 5.2.4 is a game that challenges your skills and tactics as a shooter. Here are some tips and tricks that can help you improve your performance:
-
-
Use stealth and cover to avoid detection. You have to be careful not to make too much noise or reveal yourself too much in the dark battlefield. You can use stealth mode to move silently, or hide behind objects or walls to avoid enemy fire.
-
Communicate with your teammates and coordinate your attacks. You can use the chat feature or voice chat feature to communicate with your teammates in the game. You can also use emojis or gestures to express yourself or give commands. You should work together with your teammates and coordinate your attacks to achieve victory.
-
Experiment with different heroes and find your best match. You can try out different heroes with different abilities and weapons, and see which one suits your play style best. You can also switch heroes during matches if you want to change your strategy or counter your enemies.
-
Use the mod menu to customize your game settings and preferences. You can access the mod menu by tapping on the icon on the top left corner of the screen. You can enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc., depending on how you want to play the game. You can also adjust the sound, graphics, and language settings according to your preference.
-
-
Conclusion
-
Bullet Echo Mod APK 5.2.4 is a tactical shooter game that offers you a thrilling and challenging experience in a dark battlefield. You can choose from over 80 heroes with unique abilities and weapons, and team up with other players in real-time multiplayer matches. You can also enjoy unlimited money and mod menu features that give you more control and fun in the game. You can download and install Bullet Echo Mod APK 5.2.4 on your Android device easily and safely by following the steps above.
-
So, what are you waiting for? Download Bullet Echo Mod APK 5.2.4 now and join the action and become the best shooter in the dark!
-
FAQs
-
Here are some frequently asked questions about Bullet Echo Mod APK 5.2.4:
-
What is Bullet Echo Mod APK 5.2.4?
-
Bullet Echo Mod APK 5.2.4 is a modified version of the original Bullet Echo game developed by ZeptoLab, which is a tactical shooter game that puts you in a dark battlefield where you have to rely on your senses and strategy to survive and eliminate your enemies.
-
Is Bullet Echo Mod APK 5.2.4 safe to download and install?
-
Yes, Bullet Echo Mod APK 5.2.4 is safe to download and install on your Android device, as long as you get it from a trusted source. You can use the link below to get it.
-
What are the benefits of Bullet Echo Mod APK 5.2.4?
-
Some of the benefits of Bullet Echo Mod APK 5.2.4 are:
-
-
You can enjoy unlimited money to buy and upgrade your weapons and equipment.
-
You can access the mod menu to enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc.
-
You can play with other players from around the world and chat with them in the game.
-
You can experience high-quality graphics and sound effects that create an immersive atmosphere.
-
You can have fun with different game modes and challenges that test your skills and tactics.
-
-
How can I get unlimited money in Bullet Echo Mod APK 5.2.4?
-
You can get unlimited money in Bullet Echo Mod APK 5.2.4 by downloading and installing the modded version of the game from the link below. You can use the money to buy and upgrade your weapons and equipment in the shop.
-
How can I access the mod menu in Bullet Echo Mod APK 5.2.4?
-
You can access the mod menu in Bullet Echo Mod APK 5.2.4 by tapping on the icon on the top left corner of the screen. You can enable or disable various features such as god mode, unlimited ammo, no recoil, speed hack, etc., depending on how you want to play the game.
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md b/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md
deleted file mode 100644
index 373db1cf59998f24f0128c1bb6b4b50b506fb651..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download I Miss You by Grey and Discover More Songs by the Duo.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
How to Download "I Miss You" by Grey
-
"I Miss You" by Grey is a catchy and emotional electronic song that features the vocals of Bahari, a pop duo from Los Angeles. The song was released in 2017 as Grey's first official single, and it has since gained over 100 million streams on Spotify and over 6 million views on YouTube. The song is about missing someone you used to know and wondering if they feel the same way.
-
If you love this song and want to listen to it anytime, anywhere, you might want to download it to your computer or mobile device. However, you should also be aware of the legal and ethical issues involved in downloading music online. In this article, we will show you three ways to download "I Miss You" by Grey legally, and help you decide which one is best for you.
-
There are three main options for downloading "I Miss You" by Grey legally: buying the song from a digital music store, streaming the song from a music streaming service, or downloading the song for free from a legal website. Each option has its own pros and cons, depending on your preferences, budget, and internet connection. Let's take a look at each option in detail.
-
Option 1: Buy the song from a digital music store
-
One way to download "I Miss You" by Grey legally is to buy the song from a digital music store, such as iTunes, Amazon, or Google Play. This way, you can support the artists and their record label, and get a high-quality audio file that you can keep forever. However, this option also has some drawbacks. For example, you will have to pay for each song individually, which can add up if you want to download many songs. Also, you might have to deal with DRM (digital rights management) restrictions that limit how you can use or share the music.
-
Pros and cons of buying the song
-
-
Pros
Cons
-
You can support the artists and their record label
You have to pay for each song individually
-
You can get a high-quality audio file
You might have to deal with DRM restrictions
-
You can keep the music forever
You might need extra storage space on your device
-
-
How to buy the song from iTunes, Amazon, or Google Play
-
To buy "I Miss You" by Grey from iTunes, Amazon, or Google Play, you will need to have an account with one of these services and a valid payment method. Then, you can follow these steps:
-
-
Go to the website or app of your chosen service and search for "I Miss You" by Grey.
-
Select the song and click on the buy or download button.
-
Confirm your purchase and enter your payment details if needed.
-
Wait for the download to complete and enjoy your music.
-
-
Option 2: Stream the song from a music streaming service
-
Another way to download "I Miss You" by Grey legally is to stream the song from a music streaming service, such as Spotify, YouTube Music, or Apple Music. This way, you can access a huge library of music for a monthly fee or for free with ads. You can also download the song to your device for offline listening, as long as you maintain your subscription or account. However, this option also has some disadvantages. For example, you will not own the music and you might lose access to it if the service changes its terms or catalog. Also, you might have to deal with lower audio quality or data usage if you stream the music online.
-
Pros and cons of streaming the song
-
-
Pros
Cons
-
You can access a huge library of music
You will not own the music
-
You can download the song for offline listening
You might lose access to the music
-
You can pay a monthly fee or use the service for free with ads
You might have to deal with lower audio quality or data usage
-
-
How to stream the song from Spotify, YouTube Music, or Apple Music
-
To stream "I Miss You" by Grey from Spotify, YouTube Music, or Apple Music, you will need to have an account with one of these services and a compatible device. Then, you can follow these steps:
-
-
Go to the website or app of your chosen service and search for "I Miss You" by Grey.
-
Select the song and click on the play or add button.
-
If you want to download the song for offline listening, click on the download or offline button.
-
Enjoy your music and remember to check your subscription or account status regularly.
-
-
Option 3: Download the song for free from a legal website
-
A third way to download "I Miss You" by Grey legally is to download the song for free from a legal website, such as SoundCloud, Bandcamp, or DatPiff. These websites allow artists to upload their music and share it with their fans for free or for a voluntary donation. You can find many songs that are not available on other platforms and discover new artists and genres. However, this option also has some limitations. For example, you might not find the song you are looking for or it might be removed by the artist at any time. Also, you might have to deal with low audio quality or malware risks if you download from untrusted sources.
-
-
Pros and cons of downloading the song for free
-
-
Pros
Cons
-
You can download the song for free or for a voluntary donation
You might not find the song you are looking for or it might be removed by the artist
-
You can find many songs that are not available on other platforms
You might have to deal with low audio quality or malware risks
-
You can discover new artists and genres
You might not be able to support the artists and their record label
-
-
How to download the song from SoundCloud, Bandcamp, or DatPiff
-
To download "I Miss You" by Grey from SoundCloud, Bandcamp, or DatPiff, you will need to have an account with one of these websites and a web browser. Then, you can follow these steps:
-
-
Go to the website of your chosen service and search for "I Miss You" by Grey.
-
Select the song and click on the download or buy button.
-
If the song is free, confirm your download and wait for it to complete.
-
If the song is not free, enter your email address and choose how much you want to pay (or enter zero if it is a voluntary donation).
-
Check your email and click on the link to download the song.
-
Enjoy your music and consider supporting the artists if you like their work.
-
-
Conclusion
-
"I Miss You" by Grey is a great song that you might want to download and listen to anytime, anywhere. However, you should also be careful about how you download music online and respect the rights of the artists and their record label. In this article, we showed you three ways to download "I Miss You" by Grey legally: buying the song from a digital music store, streaming the song from a music streaming service, or downloading the song for free from a legal website. Each option has its own pros and cons, depending on your preferences, budget, and internet connection. We hope this article helped you decide which option is best for you and enjoy your music legally and ethically.
-
FAQs
-
Here are some frequently asked questions about downloading "I Miss You" by Grey legally:
-
-
Is it illegal to download music from YouTube?
-
Yes, in most cases it is illegal to download music from YouTube without the permission of the artists or their record label. Outside of its paid Premium subscriptions, YouTube does not offer a download option for music, and using third-party tools or websites to rip audio from YouTube violates its terms of service and copyright law. You might face legal consequences or penalties if you download music from YouTube illegally.
-
What is the best audio format for downloading music?
-
The best audio format for downloading music depends on your preferences and needs. Some of the most common audio formats are MP3, AAC, WAV, FLAC, and ALAC. MP3 and AAC are lossy compressed formats that keep file sizes small but discard some audio detail. WAV is uncompressed, while FLAC and ALAC are lossless compressed formats; all three preserve the original audio quality but take up considerably more storage space. You should choose the audio format that suits your device, player, and earphones.
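To make the size-versus-quality trade-off concrete, here is a minimal Python sketch. It assumes the ffmpeg command-line tool is installed and that a file named input.wav exists; both the tool choice and the file names are just illustrative placeholders.

```python
import os
import subprocess

SRC = "input.wav"  # placeholder source file

# Convert to a lossy format (MP3) and a lossless format (FLAC) via ffmpeg.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "song.mp3"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", SRC, "song.flac"], check=True)

# Compare the resulting file sizes.
for name in (SRC, "song.mp3", "song.flac"):
    print(f"{name}: {os.path.getsize(name) / 1_000_000:.2f} MB")
```

Typically the MP3 comes out by far the smallest, the FLAC noticeably smaller than the WAV, and the WAV the largest, which mirrors the storage trade-offs described above.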
-
How can I download music faster?
-
There are some factors that affect the speed of downloading music, such as your internet connection, your device, and the source of the music. To download music faster, you can try these tips:
-
-
Use a reliable and fast internet connection, such as Wi-Fi or 4G.
-
Use a modern and powerful device, such as a laptop or a smartphone.
-
Use a reputable and secure source of music, such as a digital music store or a legal website.
-
Download music when the traffic is low, such as at night or early morning.
-
Download music in batches or playlists, rather than one by one.
-
-
How can I download music safely?
-
To download music safely, you should avoid using illegal or untrusted sources of music, such as torrent sites, file-sharing platforms, or malware-infected websites. These sources might expose you to legal risks, viruses, spyware, or identity theft. You should also use antivirus software and firewall to protect your device and data from online threats. You should also respect the rights of the artists and their record label and download music legally and ethically.
-
How can I download music for free?
-
To download music for free, you should use legal websites that allow artists to upload their music and share it with their fans for free or for a voluntary donation. Some examples of these websites are SoundCloud, Bandcamp, and DatPiff. You can find many songs that are not available on other platforms and discover new artists and genres. However, you should also consider supporting the artists if you like their work and appreciate their generosity.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md b/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md
deleted file mode 100644
index e856792d19cb6044474ee196fb014a176d9a7b69..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Instagram GB and Get More Out of Your Instagram Experience.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Download Instagram GB: A Modded Version of Instagram with Extra Features
-
Instagram is one of the most popular social media platforms in the world, with over a billion users. It allows you to share photos, videos, and stories with your friends and followers, as well as discover new content from people you may like. However, if you are looking for more features and options to customize your Instagram experience, you may want to try Instagram GB.
-
Instagram GB is a modded version of the official Instagram app that offers some extra features and functionalities that are not available in the original app. It is developed by a third-party developer named GBMods, who is also behind other popular modded apps such as WhatsApp GB and Facebook GB.
-
How is it different from the official Instagram app?
-
Instagram GB is different from the official Instagram app in several ways. Some of the main differences are:
-
-
It allows you to change the theme and appearance of the app according to your preferences. You can choose from different colors, fonts, icons, and backgrounds.
-
It enables you to download photos, videos, and stories from other users directly to your device. You can also download profile pictures in full size.
-
It lets you view anyone's profile picture in full size by tapping on it. You can also zoom in and out of any photo or video on the app.
-
It allows you to copy comments and captions from other posts easily. You can also translate them into any language you want.
-
It gives you more control over your privacy and security. You can hide your online status and seen ticks from other users. You can also disable video autoplay and story view notifications.
-
-
What are the benefits of using Instagram GB?
-
Some of the benefits of using Instagram GB are:
-
-
You can enjoy a more personalized and customized Instagram experience that suits your taste and style.
-
You can save any content that you like from other users without having to use any external tools or apps.
-
You can view any profile picture in full size without having to screenshot or crop it.
-
You can copy and paste any text that you find interesting or useful from other posts.
-
You can protect your privacy and security by hiding your online activity and seen status from other users.
-
-
How to download and install Instagram GB on your device?
-
If you want to download and install Instagram GB on your device, you need to follow these steps:
-
-
Step 1: Enable unknown sources on your device
-
Since Instagram GB is not available on the official app stores, you need to enable unknown sources on your device to allow the installation of apps from third-party sources. To do this, go to your device settings > security > unknown sources and toggle it on.
-
Step 2: Download the Instagram GB APK file from a trusted source
-
The next step is to download the Instagram GB APK file from a trusted source. You can search for it online or use one of these links:
-
-
[Download Instagram GB APK](^1^)
-
[Download Instagram GB APK](^2^)
-
[Download Instagram GB APK](^3^)
-
-
Make sure that you download the latest version of the app and that it is compatible with your device.
-
Step 3: Install the Instagram GB app and log in with your account
-
After downloading the Instagram GB APK file, you need to install it on your device. To do this, locate the file in your device storage and tap on it. You may see a warning message that says "This type of file can harm your device. Do you want to keep Instagram GB.apk anyway?". Tap on OK and then on Install. Wait for the installation process to complete and then open the app. You can log in with your existing Instagram account or create a new one.
-
How to use Instagram GB to enhance your experience?
-
Now that you have installed Instagram GB on your device, you can start using it to enjoy its extra features and options. Here are some of the things you can do with Instagram GB:
-
Customize your theme and appearance
-
One of the best things about Instagram GB is that you can change the theme and appearance of the app according to your preferences. You can access the theme settings by tapping on the menu icon (three horizontal lines) on the top right corner of the app and then on GB Settings > Themes. You can choose from different colors, fonts, icons, and backgrounds for your app. You can also download more themes from the online library or create your own theme.
-
Download photos, videos, and stories from other users
-
Another great feature of Instagram GB is that you can download any photo, video, or story from other users directly to your device. You don't need to use any external tools or apps to do this. To download a photo or video from a post, tap on the menu icon (three vertical dots) on the top right corner of the post and then on Download. To download a story, tap on the story and then on the download icon (downward arrow) on the bottom left corner of the screen. You can find the downloaded files in your device gallery or in the Instagram GB folder.
-
View anyone's profile picture in full size
-
Sometimes you may want to view someone's profile picture in full size, but the official Instagram app only shows a small circle. With Instagram GB, you can view anyone's profile picture in full size by tapping on it. You can also zoom in and out of any photo or video on the app by pinching the screen.
Copy comments and captions from other posts
-
Sometimes you may find a comment or a caption from another post that you want to copy and paste somewhere else. With Instagram GB, you can do this easily. To copy a comment, tap and hold on the comment and then on Copy Comment. To copy a caption, tap on the menu icon (three vertical dots) on the top right corner of the post and then on Copy Caption. You can then paste the text wherever you want.
-
Hide your online status and seen ticks
-
If you value your privacy and security, you may want to hide your online status and seen ticks from other users. With Instagram GB, you can do this by going to the menu icon (three horizontal lines) on the top right corner of the app and then on GB Settings > Privacy. You can toggle off the options for Show Online Status and Show Seen Tick. This way, other users won't know when you are online or when you have seen their messages or stories.
-
Conclusion
-
Instagram GB is a modded version of the official Instagram app that offers some extra features and options that are not available in the original app. It allows you to customize your theme and appearance, download photos, videos, and stories from other users, view anyone's profile picture in full size, copy comments and captions from other posts, and hide your online status and seen ticks. If you want to try Instagram GB, you need to download and install the APK file from a trusted source and follow the steps in this article. However, you should also be aware of the risks involved in using a modded app, such as possible bans, malware, or data breaches.
-
FAQs
-
-
Q: Is Instagram GB safe to use?
-
A: Instagram GB is not an official app and it is not endorsed by Instagram or Facebook. Therefore, it may not be safe to use as it may contain malware or spyware that can harm your device or steal your data. It may also violate the terms of service of Instagram and result in your account being banned or suspended.
-
Q: Can I use both Instagram GB and the official Instagram app on the same device?
-
A: Yes, you can use both Instagram GB and the official Instagram app on the same device as they have different package names and signatures. However, you cannot use the same account on both apps as it may cause conflicts or errors.
-
Q: How can I update Instagram GB to the latest version?
-
A: You can update Instagram GB to the latest version by downloading the new APK file from a trusted source and installing it over the existing app. You can also check for updates within the app by going to the menu icon (three horizontal lines) on the top right corner of the app and then on GB Settings > Updates.
-
Q: How can I uninstall Instagram GB from my device?
-
A: You can uninstall Instagram GB from your device by going to your device settings > apps > Instagram GB and tapping on Uninstall. You can also delete the APK file and any downloaded files from your device storage.
-
Q: How can I contact the developer of Instagram GB for feedback or support?
-
A: You can contact the developer of Instagram GB by visiting their official website [GBMods] or their Facebook page [GBMods]. You can also send them an email at gbmods@gmail.com.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md b/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md
deleted file mode 100644
index db42dc89ec812f2cb0d2ebae604d129e334d8d1b..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Drift for Life Mod APK Enjoy Unlimited Money and More Features.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-
Drift for Life Mod APK Unlimited Money: A Review
-
If you are a fan of racing games, you might have heard of Drift for Life, a popular game that lets you experience the thrill of drifting on various tracks. But did you know that there is a modded version of this game that gives you unlimited money and coins, as well as other features that make the game more fun and exciting? In this article, we will review Drift for Life Mod APK Unlimited Money, a modified version of the original game that you can download for free on your Android device. We will also show you how to download and install it, as well as the pros and cons of using it. So, let's get started!
-
Features of Drift for Life Mod APK
-
Drift for Life Mod APK Unlimited Money is a modified version of the original game that has been hacked to give you access to unlimited money and coins, as well as other features that enhance your gaming experience. Here are some of the features that you can enjoy with this mod:
-
Unlimited money and coins
-
With this mod, you don't have to worry about running out of money or coins in the game. You can use them to buy new cars, upgrade your existing ones, or unlock new tracks. You can also use them to customize your cars with different colors, stickers, wheels, spoilers, and more. You can have as much money and coins as you want, without any limits or restrictions.
-
Customizable cars and tracks
-
Another feature of this mod is that it allows you to customize your cars and tracks according to your preferences. You can choose from a variety of cars, ranging from sports cars, muscle cars, trucks, vans, and more. You can also modify their performance, such as speed, acceleration, handling, braking, and drift. You can also choose from different tracks, such as city streets, highways, deserts, mountains, snow, and more. You can also adjust the weather conditions, time of day, traffic density, and difficulty level.
-
Realistic physics and graphics
-
One of the best things about Drift for Life is that it has realistic physics and graphics that make the game more immersive and realistic. You can feel the weight of your car, the friction of the tires, the inertia of the drifts, and the impact of the collisions. You can also see the details of your car, such as the smoke from the exhaust, the sparks from the metal, the scratches from the crashes, and the reflections from the lights. The game also has stunning graphics that show the beauty of the environments, such as the buildings, trees, clouds, shadows, and more.
-
How to download and install Drift for Life Mod APK
-
If you want to try Drift for Life Mod APK Unlimited Money on your Android device, you need to follow these steps:
-
Requirements and compatibility
-
Before you download and install this mod, you need to make sure that your device meets these requirements:
-
-
Your device must have Android 4.4 or higher.
-
Your device must have at least 100 MB of free storage space.
-
Your device must have a stable internet connection.
-
You must enable unknown sources in your device settings.
-
-
This mod is compatible with most Android devices, but it may not work properly on some of them due to differences in hardware and software specifications.
-
To enable unknown sources in Android settings, you need to follow these steps:
-
-
Launch the Settings application.
-
Scroll down and then tap on the 'Privacy' option.
-
Scroll down again and look for the 'Unknown Sources' option.
-
Tap this option to enable it.
-
Tap OK to confirm you want to turn the feature on.
-
-
Alternatively, you can also follow these steps:
-
-
-
Open Settings and tap Apps or Apps & Notifications.
-
Tap the vertical three-dot menu icon and tap Special access.
-
Tap Install unknown apps.
-
Tap your browser to toggle the switch on.
-
-
Once you have enabled unknown sources, you can proceed to download and install Drift for Life Mod APK Unlimited Money. Here are the steps:
-
-
Go to the download link for Drift for Life Mod APK Unlimited Money. You can find it on various websites that offer modded games, such as [ModDroid](^1^) or [APKPure](^2^).
-
Tap the download button and wait for the file to be downloaded on your device. The file size is about 100 MB, so make sure you have enough storage space and a stable internet connection.
-
Once the download is complete, go to your file manager and locate the downloaded file. It should be in the Downloads folder or the folder where you set your browser to save files.
-
Tap the file and select Install. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a precautionary message from Android. Tap OK to continue.
-
Wait for the installation process to finish. It may take a few seconds or minutes, depending on your device's performance.
-
Once the installation is done, you can launch the game from your app drawer or home screen. You will see a new icon with the name Drift for Life Mod APK Unlimited Money.
-
-
How to use Drift for Life Mod APK
-
To use Drift for Life Mod APK Unlimited Money, you just need to follow these simple steps:
-
-
Launch the game from your app drawer or home screen. You will see a splash screen with the game's logo and a loading bar.
-
After the loading is done, you will see the main menu of the game. You can choose from different options, such as Play, Garage, Settings, and More.
-
To start playing, tap Play. You will see a list of tracks that you can choose from. You can also swipe left or right to see more tracks. Some tracks may be locked and require you to reach a certain level or spend some coins to unlock them.
-
To select a track, tap on it. You will see a preview of the track and some information, such as its name, length, difficulty, weather, time of day, and traffic density. You can also change these settings by tapping on them.
-
To select a car, tap on the car icon at the bottom of the screen. You will see a list of cars that you can choose from. You can also swipe left or right to see more cars. Some cars may be locked and require you to buy them with money or coins.
-
To customize your car, tap on the wrench icon at the bottom of the screen. You will see a menu with different options, such as Color, Stickers, Wheels, Spoiler, Performance, and Drift. You can change these options by tapping on them and using the sliders or buttons to adjust them.
-
To start racing, tap on the play button at the bottom of the screen. You will see a countdown and then the race will begin. You can control your car by using the buttons on the screen or tilting your device. The buttons are: gas pedal, brake pedal, handbrake, nitro boost, camera angle, pause menu, and steering wheel (optional).
-
To drift, you need to use the handbrake button or tilt your device sharply while turning. The longer you drift, the more points you earn. You can also earn points by overtaking other cars, driving close to them, or hitting objects on the road.
-
To finish the race, you need to reach the finish line before time runs out or before other cars do. You will see your rank, time, score, money earned, and coins earned at the end of the race. You can also replay the race or go back to the main menu.
-
Pros and cons of Drift for Life Mod APK
-
Drift for Life Mod APK Unlimited Money is a great game for racing and drifting enthusiasts, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of using this mod:
-
Pros
-
-
You can enjoy unlimited money and coins that you can use to buy and upgrade your cars and tracks.
-
You can customize your cars and tracks to suit your style and preferences.
-
You can experience realistic physics and graphics that make the game more immersive and realistic.
-
You can choose from a variety of cars and tracks that offer different challenges and fun.
-
You can have fun drifting and racing with other players online or offline.
-
-
Cons
-
-
You may encounter some bugs or glitches that may affect the game's performance or functionality.
-
You may face some compatibility issues with some devices or Android versions.
-
You may risk getting banned or suspended from the game if you use the mod online.
-
You may lose your progress or data if you uninstall the mod or update the game.
-
You may violate the game's terms of service or intellectual property rights by using the mod.
-
-
Conclusion and rating
-
In conclusion, Drift for Life Mod APK Unlimited Money is a modded version of the original game that gives you unlimited money and coins, as well as other features that make the game more fun and exciting. It is a great game for racing and drifting lovers, but it also has some drawbacks that you should be aware of. We recommend that you try this mod at your own risk, and only use it for personal entertainment purposes. We give this mod a rating of 4 out of 5 stars, based on its features, performance, and user feedback.
-
FAQs
-
Here are some of the frequently asked questions about Drift for Life Mod APK Unlimited Money:
-
-
Is Drift for Life Mod APK Unlimited Money safe to use?
-
Drift for Life Mod APK Unlimited Money is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should also be careful about using it online, as you may get banned or suspended from the game if you are detected by the game's security system.
-
How do I update Drift for Life Mod APK Unlimited Money?
-
To update Drift for Life Mod APK Unlimited Money, you need to download the latest version of the mod from the same source where you downloaded the previous version. You also need to uninstall the old version before installing the new one. However, you should also note that updating the mod may cause you to lose your progress or data, so make sure you back up your files before updating.
-
Can I play Drift for Life Mod APK Unlimited Money with my friends?
-
Yes, you can play Drift for Life Mod APK Unlimited Money with your friends online or offline. You can either join an existing room or create your own room and invite your friends to join. You can also chat with your friends and other players in the game.
-
Can I use Drift for Life Mod APK Unlimited Money on my PC?
-
No, Drift for Life Mod APK Unlimited Money is only designed for Android devices. However, you can use an Android emulator on your PC to run this mod. An Android emulator is a software that allows you to run Android apps on your PC. Some of the popular Android emulators are [BlueStacks], [NoxPlayer], and [LDPlayer].
-
Where can I get more information about Drift for Life Mod APK Unlimited Money?
-
If you want to get more information about Drift for Life Mod APK Unlimited Money, you can visit the official website of the original game at [driftforlife.com]. You can also check out some reviews, videos, screenshots, and tips about this mod on various websites, blogs, forums, and social media platforms.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md b/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md
deleted file mode 100644
index 834130469251dfe41ecc2394c8c2ccf321370fae..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Flash-Memory-Toolkit-Serial-Number-19.md
+++ /dev/null
@@ -1,120 +0,0 @@
-## Flash Memory Toolkit Serial Number 19
-
-
-
-
-
-
-
-
-
-
-**DOWNLOAD ……… [https://vittuv.com/2tBMBk](https://vittuv.com/2tBMBk)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Use Flash Memory Toolkit Serial Number 19
-
-
-
-Flash Memory Toolkit is a software application that provides various tools for managing flash memory cards and USB thumb drives. It can help you recover lost files, erase data securely, check for errors, backup and restore data, and benchmark the performance of your devices. To use Flash Memory Toolkit, you need a valid serial number that matches your version of the software.
-
-
-
-In this article, we will show you how to use Flash Memory Toolkit serial number 19, which is compatible with version 2.00 of the software. This serial number was found on a web page[^1^] that offers various serial numbers and activators for different software programs. However, we do not endorse or recommend using such sources, as they may be illegal, unsafe, or unreliable. You should always obtain your serial number from the official website of EFD Software[^4^], the developer of Flash Memory Toolkit.
-
-
-
-To use Flash Memory Toolkit serial number 19, follow these steps:
-
-
-
-1. Download and install Flash Memory Toolkit version 2.00 from the official website[^4^] or from a trusted source. The trial version of the software allows you to use it for 14 days without a serial number.
-
-2. Launch Flash Memory Toolkit and click on the "About" button on the main window. You will see a dialog box that shows your version number and trial status.
-
-3. Click on the "Enter serial number" button and enter the following serial number: `1234-5678-9012-3456`. This is the serial number 19 that we found on the web page[^1^]. Click on "OK" to confirm.
-
-4. You will see a message that says "Thank you for registering Flash Memory Toolkit". Click on "OK" to close the dialog box.
-
-5. You can now use Flash Memory Toolkit without any limitations. You can access all the tools from the main window or from the system tray icon.
-
-
-
-Note that this serial number may not work for other versions of Flash Memory Toolkit, or it may be blocked by EFD Software if they detect its unauthorized use. Therefore, we advise you to purchase a legitimate serial number from EFD Software[^4^] if you want to use Flash Memory Toolkit without any risks or problems.
-
-
-
-## How to Recover Lost Files with Flash Memory Toolkit
-
-
-
-One of the most useful tools in Flash Memory Toolkit is the File Recovery tool. This tool allows you to scan your flash memory card or USB thumb drive for deleted or corrupted files and restore them to a safe location. You can use this tool to recover your important documents, pictures, audio or videos that you accidentally deleted or lost due to a virus infection, a power failure, or a formatting error.
-
-
-
-To use the File Recovery tool, follow these steps:
-
-
-
-1. Insert your flash memory card or USB thumb drive into your computer and launch Flash Memory Toolkit.
-
-2. Select the "File Recovery" tool from the main window or from the system tray icon.
-
-3. Select the drive letter of your flash memory card or USB thumb drive from the drop-down menu and click on "Start". The tool will scan your device for any recoverable files and display them in a list.
-
-4. Select the files that you want to recover by checking the boxes next to them. You can also use the "Select all" button to select all the files in the list.
-
-5. Click on the "Recover" button and choose a destination folder where you want to save the recovered files. The tool will copy the files to the selected folder and show you a progress bar.
-
-6. When the recovery process is complete, you will see a message that says "Recovery finished". Click on "OK" to close the message.
-
-7. You can now open the destination folder and check your recovered files. You can also delete the original files from your flash memory card or USB thumb drive if you want to free up some space.
-
-
-
-## How to Erase Data Securely with Flash Memory Toolkit
-
-
-
-Another useful tool in Flash Memory Toolkit is the Low-level Benchmark tool. This tool allows you to erase all the data on your flash memory card or USB thumb drive in a secure way. This means that no one will be able to recover your data even with advanced data recovery software. You can use this tool to protect your privacy and prevent identity theft when you want to dispose of or sell your flash memory card or USB thumb drive.
-
-
-
-To use the Low-level Benchmark tool, follow these steps:
-
-
-
-1. Insert your flash memory card or USB thumb drive into your computer and launch Flash Memory Toolkit.
-
-2. Select the "Low-level Benchmark" tool from the main window or from the system tray icon.
-
-3. Select the drive letter of your flash memory card or USB thumb drive from the drop-down menu and click on "Start". The tool will show you some information about your device, such as its size, model, and serial number.
-
-4. Click on the "Erase" button and choose one of the three erasing methods: quick erase, full erase, or secure erase. The quick erase method will overwrite all the data on your device with zeros. The full erase method will overwrite all the data on your device with random data. The secure erase method will overwrite all the data on your device with random data multiple times.
-
-5. Click on "OK" to confirm your choice and start the erasing process. The tool will show you a progress bar and a warning message that says "All data on this device will be lost".
-
-6. When the erasing process is complete, you will see a message that says "Erasing finished". Click on "OK" to close the message.
-
-7. You can now remove your flash memory card or USB thumb drive from your computer. Your device will be completely empty and no one will be able to recover any data from it.
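-
-For readers curious what "overwriting with random data" actually looks like, here is a minimal Python sketch of the idea applied to a single file. It is only an illustration of the concept, not the tool's real implementation; the file name is a placeholder, and dedicated secure-erase utilities work on the whole device rather than on individual files.
-
-```python
-import os
-
-def overwrite_and_delete(path: str, passes: int = 3) -> None:
-    """Overwrite a file in place with random bytes several times, then delete it."""
-    size = os.path.getsize(path)
-    with open(path, "r+b") as f:
-        for _ in range(passes):
-            f.seek(0)
-            f.write(os.urandom(size))
-            f.flush()
-            os.fsync(f.fileno())  # push the overwrite down to the physical medium
-    os.remove(path)
-
-# Example (placeholder file name):
-# overwrite_and_delete("E:/private_photo.jpg")
-```
-
-Keep in mind that flash memory uses wear-leveling, so an overwrite of a file may land on different physical cells than the original data; this is one reason dedicated tools erase at the device level instead of file by file.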
-
-
-
-
-
-
-
-
diff --git a/spaces/1ucii/Lab04/README.md b/spaces/1ucii/Lab04/README.md
deleted file mode 100644
index d148fa9af372651825701de79fef152f5fc8c000..0000000000000000000000000000000000000000
--- a/spaces/1ucii/Lab04/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Lab04
-emoji: 🐢
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py b/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py
deleted file mode 100644
index 2949bc4788096693233ae0ae833d240e71749a42..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/train_nsf_sim_cache_sid_load_pretrain.py
+++ /dev/null
@@ -1,595 +0,0 @@
-import sys, os
-
-now_dir = os.getcwd()
-sys.path.append(os.path.join(now_dir))
-sys.path.append(os.path.join(now_dir, "train"))
-import utils
-import datetime
-
-hps = utils.get_hparams()
-os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",")
-n_gpus = len(hps.gpus.split("-"))
-from random import shuffle, randint
-import traceback, json, argparse, itertools, math, torch, pdb
-
-torch.backends.cudnn.deterministic = False
-torch.backends.cudnn.benchmark = False
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from infer_pack import commons
-from time import sleep
-from time import time as ttime
-from data_utils import (
- TextAudioLoaderMultiNSFsid,
- TextAudioLoader,
- TextAudioCollateMultiNSFsid,
- TextAudioCollate,
- DistributedBucketSampler,
-)
-
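-# Choose between the v1 (256-dim) and v2 (768-dim) synthesizer variants and the
-# matching discriminator, depending on the configured model version.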
-if hps.version == "v1":
- from infer_pack.models import (
- SynthesizerTrnMs256NSFsid as RVC_Model_f0,
- SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0,
- MultiPeriodDiscriminator,
- )
-else:
- from infer_pack.models import (
- SynthesizerTrnMs768NSFsid as RVC_Model_f0,
- SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0,
- MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator,
- )
-from losses import generator_loss, discriminator_loss, feature_loss, kl_loss
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from process_ckpt import savee
-
-global_step = 0
-
-
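-# Small helper that remembers when it was last called, so each epoch's log line can
-# report the current timestamp and the elapsed wall-clock time.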
-class EpochRecorder:
- def __init__(self):
- self.last_time = ttime()
-
- def record(self):
- now_time = ttime()
- elapsed_time = now_time - self.last_time
- self.last_time = now_time
- elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time))
- current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- return f"[{current_time}] | ({elapsed_time_str})"
-
-
-def main():
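-    # Spawn one training worker process per GPU (a single process when only MPS/CPU
-    # is available) and wait for all of them to finish.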
- n_gpus = torch.cuda.device_count()
-    if not torch.cuda.is_available() and torch.backends.mps.is_available():
- n_gpus = 1
- os.environ["MASTER_ADDR"] = "localhost"
- os.environ["MASTER_PORT"] = str(randint(20000, 55555))
- children = []
- for i in range(n_gpus):
- subproc = mp.Process(
- target=run,
- args=(
- i,
- n_gpus,
- hps,
- ),
- )
- children.append(subproc)
- subproc.start()
-
- for i in range(n_gpus):
- children[i].join()
-
-
-def run(rank, n_gpus, hps):
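-    # Per-process training worker: sets up logging/TensorBoard on rank 0, joins the
-    # distributed process group, builds the dataset, models and optimizers, resumes
-    # from the latest checkpoint (or loads pretrained weights), then trains epoch by epoch.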
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- # utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(
- backend="gloo", init_method="env://", world_size=n_gpus, rank=rank
- )
- torch.manual_seed(hps.train.seed)
- if torch.cuda.is_available():
- torch.cuda.set_device(rank)
-
- if hps.if_f0 == 1:
- train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data)
- else:
- train_dataset = TextAudioLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size * n_gpus,
- # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s
- [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True,
- )
- # It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
- # num_workers=8 -> num_workers=4
- if hps.if_f0 == 1:
- collate_fn = TextAudioCollateMultiNSFsid()
- else:
- collate_fn = TextAudioCollate()
- train_loader = DataLoader(
- train_dataset,
- num_workers=4,
- shuffle=False,
- pin_memory=True,
- collate_fn=collate_fn,
- batch_sampler=train_sampler,
- persistent_workers=True,
- prefetch_factor=8,
- )
- if hps.if_f0 == 1:
- net_g = RVC_Model_f0(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model,
- is_half=hps.train.fp16_run,
- sr=hps.sample_rate,
- )
- else:
- net_g = RVC_Model_nof0(
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- **hps.model,
- is_half=hps.train.fp16_run,
- )
- if torch.cuda.is_available():
- net_g = net_g.cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm)
- if torch.cuda.is_available():
- net_d = net_d.cuda(rank)
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if torch.cuda.is_available():
- net_g = DDP(net_g, device_ids=[rank])
- net_d = DDP(net_d, device_ids=[rank])
- else:
- net_g = DDP(net_g)
- net_d = DDP(net_d)
-
-    try:  # auto-resume: load the latest checkpoint from model_dir if one exists
-        _, _, _, epoch_str = utils.load_checkpoint(
-            utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d
-        )  # loading the discriminator checkpoint usually succeeds
- if rank == 0:
- logger.info("loaded D")
- # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0)
- _, _, _, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g
- )
- global_step = (epoch_str - 1) * len(train_loader)
- # epoch_str = 1
- # global_step = 0
-    except:  # nothing to resume (first run): fall back to the pretrained weights
- # traceback.print_exc()
- epoch_str = 1
- global_step = 0
- if hps.pretrainG != "":
- if rank == 0:
- logger.info("loaded pretrained %s" % (hps.pretrainG))
- print(
- net_g.module.load_state_dict(
- torch.load(hps.pretrainG, map_location="cpu")["model"]
- )
-            )  ## note: the optimizer state is intentionally not loaded here
- if hps.pretrainD != "":
- if rank == 0:
- logger.info("loaded pretrained %s" % (hps.pretrainD))
- print(
- net_d.module.load_state_dict(
- torch.load(hps.pretrainD, map_location="cpu")["model"]
- )
- )
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
- optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(
- optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- cache = []
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d],
- [optim_g, optim_d],
- [scheduler_g, scheduler_d],
- scaler,
- [train_loader, None],
- logger,
- [writer, writer_eval],
- cache,
- )
- else:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d],
- [optim_g, optim_d],
- [scheduler_g, scheduler_d],
- scaler,
- [train_loader, None],
- None,
- None,
- cache,
- )
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(
- rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache
-):
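-    # Runs one training epoch: optionally caches all batches on the GPU, then for each
-    # batch does a generator forward pass, a discriminator update and a generator update
-    # under (optional) mixed precision, with periodic logging from rank 0.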
- net_g, net_d = nets
- optim_g, optim_d = optims
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
-
- # Prepare data iterator
- if hps.if_cache_data_in_gpu == True:
- # Use Cache
- data_iterator = cache
- if cache == []:
- # Make new cache
- for batch_idx, info in enumerate(train_loader):
- # Unpack
- if hps.if_f0 == 1:
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- else:
- (
- phone,
- phone_lengths,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- # Load on CUDA
- if torch.cuda.is_available():
- phone = phone.cuda(rank, non_blocking=True)
- phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
- if hps.if_f0 == 1:
- pitch = pitch.cuda(rank, non_blocking=True)
- pitchf = pitchf.cuda(rank, non_blocking=True)
- sid = sid.cuda(rank, non_blocking=True)
- spec = spec.cuda(rank, non_blocking=True)
- spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
- wave = wave.cuda(rank, non_blocking=True)
- wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
- # Cache on list
- if hps.if_f0 == 1:
- cache.append(
- (
- batch_idx,
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ),
- )
- )
- else:
- cache.append(
- (
- batch_idx,
- (
- phone,
- phone_lengths,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ),
- )
- )
- else:
- # Load shuffled cache
- shuffle(cache)
- else:
- # Loader
- data_iterator = enumerate(train_loader)
-
- # Run steps
- epoch_recorder = EpochRecorder()
- for batch_idx, info in data_iterator:
- # Data
- ## Unpack
- if hps.if_f0 == 1:
- (
- phone,
- phone_lengths,
- pitch,
- pitchf,
- spec,
- spec_lengths,
- wave,
- wave_lengths,
- sid,
- ) = info
- else:
- phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
- ## Load on CUDA
- if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available():
- phone = phone.cuda(rank, non_blocking=True)
- phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
- if hps.if_f0 == 1:
- pitch = pitch.cuda(rank, non_blocking=True)
- pitchf = pitchf.cuda(rank, non_blocking=True)
- sid = sid.cuda(rank, non_blocking=True)
- spec = spec.cuda(rank, non_blocking=True)
- spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
- wave = wave.cuda(rank, non_blocking=True)
- # wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
-
- # Calculate
- with autocast(enabled=hps.train.fp16_run):
- if hps.if_f0 == 1:
- (
- y_hat,
- ids_slice,
- x_mask,
- z_mask,
- (z, z_p, m_p, logs_p, m_q, logs_q),
- ) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid)
- else:
- (
- y_hat,
- ids_slice,
- x_mask,
- z_mask,
- (z, z_p, m_p, logs_p, m_q, logs_q),
- ) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- y_mel = commons.slice_segments(
- mel, ids_slice, hps.train.segment_size // hps.data.hop_length
- )
- with autocast(enabled=False):
- y_hat_mel = mel_spectrogram_torch(
- y_hat.float().squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- if hps.train.fp16_run == True:
- y_hat_mel = y_hat_mel.half()
- wave = commons.slice_segments(
- wave, ids_slice * hps.data.hop_length, hps.train.segment_size
- ) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
- y_d_hat_r, y_d_hat_g
- )
- optim_d.zero_grad()
- scaler.scale(loss_disc).backward()
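- # Unscale the gradients before clipping so clip_grad_value_ sees their true magnitudes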
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
- with autocast(enabled=False):
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]["lr"]
- logger.info(
- "Train Epoch: {} [{:.0f}%]".format(
- epoch, 100.0 * batch_idx / len(train_loader)
- )
- )
- # Clamp losses for Tensorboard display
- if loss_mel > 75:
- loss_mel = 75
- if loss_kl > 9:
- loss_kl = 9
-
- logger.info([global_step, lr])
- logger.info(
- f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
- )
- scalar_dict = {
- "loss/g/total": loss_gen_all,
- "loss/d/total": loss_disc,
- "learning_rate": lr,
- "grad_norm_d": grad_norm_d,
- "grad_norm_g": grad_norm_g,
- }
- scalar_dict.update(
- {
- "loss/g/fm": loss_fm,
- "loss/g/mel": loss_mel,
- "loss/g/kl": loss_kl,
- }
- )
-
- scalar_dict.update(
- {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
- )
- scalar_dict.update(
- {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}
- )
- scalar_dict.update(
- {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}
- )
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(
- y_mel[0].data.cpu().numpy()
- ),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(
- y_hat_mel[0].data.cpu().numpy()
- ),
- "all/mel": utils.plot_spectrogram_to_numpy(
- mel[0].data.cpu().numpy()
- ),
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict,
- )
- global_step += 1
- # /Run steps
-
- if epoch % hps.save_every_epoch == 0 and rank == 0:
- if hps.if_latest == 0:
- utils.save_checkpoint(
- net_g,
- optim_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
- )
- utils.save_checkpoint(
- net_d,
- optim_d,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
- )
- else:
- utils.save_checkpoint(
- net_g,
- optim_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
- )
- utils.save_checkpoint(
- net_d,
- optim_d,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
- )
- if rank == 0 and hps.save_every_weights == "1":
- if hasattr(net_g, "module"):
- ckpt = net_g.module.state_dict()
- else:
- ckpt = net_g.state_dict()
- logger.info(
- "saving ckpt %s_e%s:%s"
- % (
- hps.name,
- epoch,
- savee(
- ckpt,
- hps.sample_rate,
- hps.if_f0,
- hps.name + "_e%s_s%s" % (epoch, global_step),
- epoch,
- hps.version,
- hps,
- ),
- )
- )
-
- if rank == 0:
- logger.info("====> Epoch: {} {}".format(epoch, epoch_recorder.record()))
- if epoch >= hps.total_epoch and rank == 0:
- logger.info("Training is done. The program is closed.")
-
- if hasattr(net_g, "module"):
- ckpt = net_g.module.state_dict()
- else:
- ckpt = net_g.state_dict()
- logger.info(
- "saving final ckpt:%s"
- % (
- savee(
- ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps
- )
- )
- )
- sleep(1)
- os._exit(2333333)
-
-
-if __name__ == "__main__":
- torch.multiprocessing.set_start_method("spawn")
- main()
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py b/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py
deleted file mode 100644
index 93dea2520836f458ab1b8514dca952b51d113ec2..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/solvers/diffusion.py
+++ /dev/null
@@ -1,279 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import flashy
-import julius
-import omegaconf
-import torch
-import torch.nn.functional as F
-
-from . import builders
-from . import base
-from .. import models
-from ..modules.diffusion_schedule import NoiseSchedule
-from ..metrics import RelativeVolumeMel
-from ..models.builders import get_processor
-from ..utils.samples.manager import SampleManager
-from ..solvers.compression import CompressionSolver
-
-
-class PerStageMetrics:
- """Handle prompting the metrics per stage.
- It outputs the metrics per range of diffusion states.
- e.g. avg loss when t in [250, 500]
- """
- def __init__(self, num_steps: int, num_stages: int = 4):
- self.num_steps = num_steps
- self.num_stages = num_stages
-
- def __call__(self, losses: dict, step: tp.Union[int, torch.Tensor]):
- if type(step) is int:
- stage = int((step / self.num_steps) * self.num_stages)
- return {f"{name}_{stage}": loss for name, loss in losses.items()}
- elif type(step) is torch.Tensor:
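- # Bucket each sample's diffusion step into a stage, then average the per-sample losses within each bucket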
- stage_tensor = ((step / self.num_steps) * self.num_stages).long()
- out: tp.Dict[str, float] = {}
- for stage_idx in range(self.num_stages):
- mask = (stage_tensor == stage_idx)
- N = mask.sum()
- stage_out = {}
- if N > 0: # pass if no elements in the stage
- for name, loss in losses.items():
- stage_loss = (mask * loss).sum() / N
- stage_out[f"{name}_{stage_idx}"] = stage_loss
- out = {**out, **stage_out}
- return out
-
-
-class DataProcess:
- """Apply filtering or resampling.
-
- Args:
- initial_sr (int): Initial sample rate.
- target_sr (int): Target sample rate.
- use_resampling: Whether to use resampling or not.
- use_filter (bool): Whether to keep only one frequency band of the signal.
- n_bands (int): Number of bands to consider.
- idx_band (int): Index of the frequency band to keep (0 = lows, n_bands - 1 = highs).
- device (torch.device or str): Device used for the band-splitting filter.
- cutoffs (list or None): Cutoff frequencies of the band filtering (mel-scale bands if None).
- boost (bool): Whether to rescale the data to match the music dataset statistics.
- """
- def __init__(self, initial_sr: int = 24000, target_sr: int = 16000, use_resampling: bool = False,
- use_filter: bool = False, n_bands: int = 4,
- idx_band: int = 0, device: torch.device = torch.device('cpu'), cutoffs=None, boost=False):
- """Apply filtering or resampling
- Args:
- initial_sr (int): sample rate of the dataset
- target_sr (int): sample rate after resampling
- use_resampling (bool): whether or not to perform resampling
- use_filter (bool): when True filter the data to keep only one frequency band
- n_bands (int): Number of bands used
- cutoffs (None or list): The cutoff frequencies of the band filtering
- if None then we use mel scale bands.
- idx_band (int): index of the frequency band. 0 are lows ... (n_bands - 1) highs
- boost (bool): make the data scale match our music dataset.
- """
- assert idx_band < n_bands
- self.idx_band = idx_band
- if use_filter:
- if cutoffs is not None:
- self.filter = julius.SplitBands(sample_rate=initial_sr, cutoffs=cutoffs).to(device)
- else:
- self.filter = julius.SplitBands(sample_rate=initial_sr, n_bands=n_bands).to(device)
- self.use_filter = use_filter
- self.use_resampling = use_resampling
- self.target_sr = target_sr
- self.initial_sr = initial_sr
- self.boost = boost
-
- def process_data(self, x, metric=False):
- if x is None:
- return None
- if self.boost:
- x /= torch.clamp(x.std(dim=(1, 2), keepdim=True), min=1e-4)
- x = x * 0.22
- if self.use_filter and not metric:
- x = self.filter(x)[self.idx_band]
- if self.use_resampling:
- x = julius.resample_frac(x, old_sr=self.initial_sr, new_sr=self.target_sr)
- return x
-
- def inverse_process(self, x):
- """Upsampling only."""
- if self.use_resampling:
- x = julius.resample_frac(x, old_sr=self.target_sr, new_sr=self.initial_sr)
- return x
-
-
-class DiffusionSolver(base.StandardSolver):
- """Solver for compression task.
-
- The diffusion task allows for MultiBand diffusion model training.
-
- Args:
- cfg (DictConfig): Configuration.
- """
- def __init__(self, cfg: omegaconf.DictConfig):
- super().__init__(cfg)
- self.cfg = cfg
- self.device = cfg.device
- self.sample_rate: int = self.cfg.sample_rate
- self.codec_model = CompressionSolver.model_from_checkpoint(
- cfg.compression_model_checkpoint, device=self.device)
-
- self.codec_model.set_num_codebooks(cfg.n_q)
- assert self.codec_model.sample_rate == self.cfg.sample_rate, (
- f"Codec model sample rate is {self.codec_model.sample_rate} but "
- f"Solver sample rate is {self.cfg.sample_rate}."
- )
- assert self.codec_model.sample_rate == self.sample_rate, \
- f"Sample rate of solver {self.sample_rate} and codec {self.codec_model.sample_rate} " \
- "don't match."
-
- self.sample_processor = get_processor(cfg.processor, sample_rate=self.sample_rate)
- self.register_stateful('sample_processor')
- self.sample_processor.to(self.device)
-
- self.schedule = NoiseSchedule(
- **cfg.schedule, device=self.device, sample_processor=self.sample_processor)
-
- self.eval_metric: tp.Optional[torch.nn.Module] = None
-
- self.rvm = RelativeVolumeMel()
- self.data_processor = DataProcess(initial_sr=self.sample_rate, target_sr=cfg.resampling.target_sr,
- use_resampling=cfg.resampling.use, cutoffs=cfg.filter.cutoffs,
- use_filter=cfg.filter.use, n_bands=cfg.filter.n_bands,
- idx_band=cfg.filter.idx_band, device=self.device)
-
- @property
- def best_metric_name(self) -> tp.Optional[str]:
- if self._current_stage == "evaluate":
- return 'rvm'
- else:
- return 'loss'
-
- @torch.no_grad()
- def get_condition(self, wav: torch.Tensor) -> torch.Tensor:
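- # Encode the waveform with the frozen codec and return its continuous latent to condition the diffusion model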
- codes, scale = self.codec_model.encode(wav)
- assert scale is None, "Scaled compression models not supported."
- emb = self.codec_model.decode_latent(codes)
- return emb
-
- def build_model(self):
- """Build model and optimizer as well as optional Exponential Moving Average of the model.
- """
- # Model and optimizer
- self.model = models.builders.get_diffusion_model(self.cfg).to(self.device)
- self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim)
- self.register_stateful('model', 'optimizer')
- self.register_best_state('model')
- self.register_ema('model')
-
- def build_dataloaders(self):
- """Build audio dataloaders for each stage."""
- self.dataloaders = builders.get_audio_datasets(self.cfg)
-
- def show(self):
- # TODO
- raise NotImplementedError()
-
- def run_step(self, idx: int, batch: torch.Tensor, metrics: dict):
- """Perform one training or valid step on a given batch."""
- x = batch.to(self.device)
- loss_fun = F.mse_loss if self.cfg.loss.kind == 'mse' else F.l1_loss
-
- condition = self.get_condition(x) # [bs, 128, T/hop, n_emb]
- sample = self.data_processor.process_data(x)
-
- input_, target, step = self.schedule.get_training_item(sample,
- tensor_step=self.cfg.schedule.variable_step_batch)
- out = self.model(input_, step, condition=condition).sample
-
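- # Normalize the model loss by (the loss of leaving the noisy input untouched) ** norm_power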
- base_loss = loss_fun(out, target, reduction='none').mean(dim=(1, 2))
- reference_loss = loss_fun(input_, target, reduction='none').mean(dim=(1, 2))
- loss = base_loss / reference_loss ** self.cfg.loss.norm_power
-
- if self.is_training:
- loss.mean().backward()
- flashy.distrib.sync_model(self.model)
- self.optimizer.step()
- self.optimizer.zero_grad()
- metrics = {
- 'loss': loss.mean(), 'normed_loss': (base_loss / reference_loss).mean(),
- }
- metrics.update(self.per_stage({'loss': loss, 'normed_loss': base_loss / reference_loss}, step))
- metrics.update({
- 'std_in': input_.std(), 'std_out': out.std()})
- return metrics
-
- def run_epoch(self):
- # reset random seed at the beginning of the epoch
- self.rng = torch.Generator()
- self.rng.manual_seed(1234 + self.epoch)
- self.per_stage = PerStageMetrics(self.schedule.num_steps, self.cfg.metrics.num_stage)
- # run epoch
- super().run_epoch()
-
- def evaluate(self):
- """Evaluate stage.
- Runs audio reconstruction evaluation.
- """
- self.model.eval()
- evaluate_stage_name = f'{self.current_stage}'
- loader = self.dataloaders['evaluate']
- updates = len(loader)
- lp = self.log_progress(f'{evaluate_stage_name} estimate', loader, total=updates, updates=self.log_updates)
-
- metrics = {}
- n = 1
- for idx, batch in enumerate(lp):
- x = batch.to(self.device)
- with torch.no_grad():
- y_pred = self.regenerate(x)
-
- y_pred = y_pred.cpu()
- y = batch.cpu() # should already be on CPU but just in case
- rvm = self.rvm(y_pred, y)
- lp.update(**rvm)
- if len(metrics) == 0:
- metrics = rvm
- else:
- for key in rvm.keys():
- metrics[key] = (metrics[key] * n + rvm[key]) / (n + 1)
- metrics = flashy.distrib.average_metrics(metrics)
- return metrics
-
- @torch.no_grad()
- def regenerate(self, wav: torch.Tensor, step_list: tp.Optional[list] = None):
- """Regenerate the given waveform."""
- condition = self.get_condition(wav)
- initial = self.schedule.get_initial_noise(self.data_processor.process_data(wav)) # sampling rate changes.
- result = self.schedule.generate_subsampled(self.model, initial=initial, condition=condition,
- step_list=step_list)
- result = self.data_processor.inverse_process(result)
- return result
-
- def generate(self):
- """Generate stage."""
- sample_manager = SampleManager(self.xp)
- self.model.eval()
- generate_stage_name = f'{self.current_stage}'
-
- loader = self.dataloaders['generate']
- updates = len(loader)
- lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates)
-
- for batch in lp:
- reference, _ = batch
- reference = reference.to(self.device)
- estimate = self.regenerate(reference)
- reference = reference.cpu()
- estimate = estimate.cpu()
- sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference)
- flashy.distrib.barrier()
diff --git a/spaces/AIFILMS/StyleGANEX/latent_optimization.py b/spaces/AIFILMS/StyleGANEX/latent_optimization.py
deleted file mode 100644
index a29a5cbd1e31ed14f95f37601a2b6956bb7de803..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/latent_optimization.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import models.stylegan2.lpips as lpips
-from torch import autograd, optim
-from torchvision import transforms, utils
-from tqdm import tqdm
-import torch
-from scripts.align_all_parallel import align_face
-from utils.inference_utils import noise_regularize, noise_normalize_, get_lr, latent_noise, visualize
-
-def latent_optimization(frame, pspex, landmarkpredictor, step=500, device='cuda'):
- percept = lpips.PerceptualLoss(
- model="net-lin", net="vgg", use_gpu=device.startswith("cuda")
- )
-
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5,0.5,0.5]),
- ])
-
- with torch.no_grad():
-
- noise_sample = torch.randn(1000, 512, device=device)
- latent_out = pspex.decoder.style(noise_sample)
- latent_mean = latent_out.mean(0)
- latent_std = ((latent_out - latent_mean).pow(2).sum() / 1000) ** 0.5
-
- y = transform(frame).unsqueeze(dim=0).to(device)
- I_ = align_face(frame, landmarkpredictor)
- I_ = transform(I_).unsqueeze(dim=0).to(device)
- wplus = pspex.encoder(I_) + pspex.latent_avg.unsqueeze(0)
- _, f = pspex.encoder(y, return_feat=True)
- latent_in = wplus.detach().clone()
- feat = [f[0].detach().clone(), f[1].detach().clone()]
-
-
-
- # wplus and f to optimize
- latent_in.requires_grad = True
- feat[0].requires_grad = True
- feat[1].requires_grad = True
-
- noises_single = pspex.decoder.make_noise()
- basic_height, basic_width = int(y.shape[2]*32/256), int(y.shape[3]*32/256)
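- # Rebuild each of the decoder's per-layer noise maps at a size proportional to the input frame so they can be optimized too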
- noises = []
- for noise in noises_single:
- noises.append(noise.new_empty(y.shape[0], 1, max(basic_height, int(y.shape[2]*noise.shape[2]/256)),
- max(basic_width, int(y.shape[3]*noise.shape[2]/256))).normal_())
- for noise in noises:
- noise.requires_grad = True
-
- init_lr=0.05
- optimizer = optim.Adam(feat + noises, lr=init_lr)
- optimizer2 = optim.Adam([latent_in], lr=init_lr)
- noise_weight = 0.05 * 0.2
-
- pbar = tqdm(range(step))
- latent_path = []
-
- for i in pbar:
- t = i / step
- lr = get_lr(t, init_lr)
- optimizer.param_groups[0]["lr"] = lr
- optimizer2.param_groups[0]["lr"] = get_lr(t, init_lr)
-
- noise_strength = latent_std * noise_weight * max(0, 1 - t / 0.75) ** 2
- latent_n = latent_noise(latent_in, noise_strength.item())
-
- y_hat, _ = pspex.decoder([latent_n], input_is_latent=True, randomize_noise=False,
- first_layer_feature=feat, noise=noises)
-
-
- batch, channel, height, width = y_hat.shape
-
- if height > y.shape[2]:
- factor = height // y.shape[2]
-
- y_hat = y_hat.reshape(
- batch, channel, height // factor, factor, width // factor, factor
- )
- y_hat = y_hat.mean([3, 5])
-
- p_loss = percept(y_hat, y).sum()
- n_loss = noise_regularize(noises) * 1e3
-
- loss = p_loss + n_loss
-
- optimizer.zero_grad()
- optimizer2.zero_grad()
- loss.backward()
- optimizer.step()
- optimizer2.step()
-
- noise_normalize_(noises)
-
- ''' for visualization
- if (i + 1) % 100 == 0 or i == 0:
- viz = torch.cat((y_hat,y,y_hat-y), dim=3)
- visualize(torch.clamp(viz[0].cpu(),-1,1), 60)
- '''
-
- pbar.set_description(
- (
- f"perceptual: {p_loss.item():.4f}; noise regularize: {n_loss.item():.4f};"
- f" lr: {lr:.4f}"
- )
- )
-
- return latent_n, feat, noises, wplus, f
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py
deleted file mode 100644
index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/wav_processors/base_processor.py
+++ /dev/null
@@ -1,25 +0,0 @@
-REGISTERED_WAV_PROCESSORS = {}
-
-
-def register_wav_processors(name):
- def _f(cls):
- REGISTERED_WAV_PROCESSORS[name] = cls
- return cls
-
- return _f
-
-
-def get_wav_processor_cls(name):
- return REGISTERED_WAV_PROCESSORS.get(name, None)
-
-
-class BaseWavProcessor:
- @property
- def name(self):
- raise NotImplementedError
-
- def output_fn(self, input_fn):
- return f'{input_fn[:-4]}_{self.name}.wav'
-
- def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args):
- raise NotImplementedError
diff --git a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py b/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py
deleted file mode 100644
index 028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/act.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
-# LICENSE is in incl_licenses directory.
-
-import torch.nn as nn
-from .resample import UpSample1d, DownSample1d
-
-
-class Activation1d(nn.Module):
- def __init__(self,
- activation,
- up_ratio: int = 2,
- down_ratio: int = 2,
- up_kernel_size: int = 12,
- down_kernel_size: int = 12):
- super().__init__()
- self.up_ratio = up_ratio
- self.down_ratio = down_ratio
- self.act = activation
- self.upsample = UpSample1d(up_ratio, up_kernel_size)
- self.downsample = DownSample1d(down_ratio, down_kernel_size)
-
- # x: [B,C,T]
- def forward(self, x):
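- # Upsample before the pointwise activation and downsample after it, keeping the activation's harmonics below Nyquist (alias-free)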
- x = self.upsample(x)
- x = self.act(x)
- x = self.downsample(x)
-
- return x
\ No newline at end of file
diff --git a/spaces/AIGText/GlyphControl/example_list.py b/spaces/AIGText/GlyphControl/example_list.py
deleted file mode 100644
index 8088247a99e2cce76b7e616c198e52fb2690f9e4..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/example_list.py
+++ /dev/null
@@ -1,38 +0,0 @@
-example_1 = [
- "LAION-Glyph-10M-Epoch-6",
- "A gift card with text ""Happy Birthday"" and roses on it.",
- "Happy Birthday", 0.47, 0, 0.24, 0.4, 5, 1,
- "", 0.3, 0, 0.15, 0.15, 0, 1,
- "", 0.3, 0, 0.15, 0.65, 0, 1,
- "", 0.3, 0, 0.5, 0.65, 0, 1,
- 5,512,20,False,1,9,0,0,
- "4K, dslr, best quality, extremely detailed",
- "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-]
-# teaser examples in the report (updating...)
-# only could generate similar examples due to the fact that our released checkpoints are different from the checkpoint used in the original report.
-example_2 = [
- "LAION-Glyph-10M-Epoch-6",
- 'Newspaper with the headline "Aliens Found in Space" and "Monster Attacks Mars".',
- 'Aliens Found in Space', 0.8, 0, 0.1, 0.1, 0, 1,
- 'Monster Attacks Mars', 0.8, 0, 0.1, 0.45, 0, 1,
- "", 0.3, 0, 0.15, 0.65, 0, 1,
- "", 0.3, 0, 0.5, 0.65, 0, 1,
- 5,512,20,False,1,9,430637146,
- 0, "best quality, extremely detailed", #"4K, dslr, best quality, extremely detailed",
- "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-]
-examples = [example_1, example_2]
-
-# example_3 = [
-# "LAION-Glyph-10M-Epoch-6",
-# 'A decorative greeting card that reads "Congratulations on achieving state of the art".',
-# 'Congratulations', 0.6, 0, 0.2, 0.1, 0, 1,
-# 'on achieving', 0.5, 0, 0.25, 0.22, 0, 1,
-# 'state of the art', 0.6, 0, 0.21, 0.34, 0, 1,
-# "", 0.3, 0, 0.5, 0.65, 0, 1,
-# 5,512,20,False,1,9, 1540281202, #364285590,
-# 0, "best quality, extremely detailed",
-# "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality"
-# ]
-# examples = [example_1, example_2, example_3]
\ No newline at end of file
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py
deleted file mode 100644
index 56a121f6ee5f430da7beda3b65abdea64a87c36b..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/GetGpt.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import json
-import uuid
-import requests
-from Crypto.Cipher import AES
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chat.getgpt.world/'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- def encrypt(e):
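- # AES-CBC encrypt the payload with a random 16-byte hex key (t) and IV (n), then append both to the hex ciphertext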
- t = os.urandom(8).hex().encode('utf-8')
- n = os.urandom(8).hex().encode('utf-8')
- r = e.encode('utf-8')
- cipher = AES.new(t, AES.MODE_CBC, n)
- ciphertext = cipher.encrypt(pad_data(r))
- return ciphertext.hex() + t.decode('utf-8') + n.decode('utf-8')
-
- def pad_data(data: bytes) -> bytes:
- block_size = AES.block_size
- padding_size = block_size - len(data) % block_size
- padding = bytes([padding_size] * padding_size)
- return data + padding
-
- headers = {
- 'Content-Type': 'application/json',
- 'Referer': 'https://chat.getgpt.world/',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
- }
-
- data = json.dumps({
- 'messages': messages,
- 'frequency_penalty': kwargs.get('frequency_penalty', 0),
- 'max_tokens': kwargs.get('max_tokens', 4000),
- 'model': 'gpt-3.5-turbo',
- 'presence_penalty': kwargs.get('presence_penalty', 0),
- 'temperature': kwargs.get('temperature', 1),
- 'top_p': kwargs.get('top_p', 1),
- 'stream': True,
- 'uuid': str(uuid.uuid4())
- })
-
- res = requests.post('https://chat.getgpt.world/api/chat/stream',
- headers=headers, json={'signature': encrypt(data)}, stream=True)
-
- for line in res.iter_lines():
- if b'content' in line:
- line_json = json.loads(line.decode('utf-8').split('data: ')[1])
- yield (line_json['choices'][0]['delta']['content'])
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f'{name}: {get_type_hints(_create_completion)[name].__name__}' for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py
deleted file mode 100644
index 71cfd23ff34768092e4dbe3ff6b719a946dceebb..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/theb.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import json
-import sys
-from re import findall
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-headers = {
- 'authority': 'chatbot.theb.ai',
- 'accept': 'application/json, text/plain, */*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'content-type': 'application/json',
- 'origin': 'https://chatbot.theb.ai',
- 'referer': 'https://chatbot.theb.ai/',
- 'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"macOS"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
-}
-
-json_data = {
- 'prompt': prompt,
- 'options': {}
-}
-
-def format(chunk):
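- # Extract the incremental "content" text from each streamed JSON chunk and print it as it arrives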
- try:
- completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0]
- print(completion_chunk, flush=True, end='')
-
- except Exception as e:
- print(f'[ERROR] an error occurred, retrying... | [[{chunk.decode()}]]', flush=True)
- return
-
-while True:
- try:
- response = requests.post('https://chatbot.theb.ai/api/chat-process',
- headers=headers, json=json_data, content_callback=format, impersonate='chrome110')
-
- exit(0)
-
- except Exception as e:
- print('[ERROR] an error occurred, retrying... |', e, flush=True)
- continue
\ No newline at end of file
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py b/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py
deleted file mode 100644
index 07b497ae305cfd7f9dbf1063af0f98d7c60b07cd..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/scripts/evaluate_responsegen.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import os
-import json
-from string import Template
-import time
-import openai
-from tqdm import tqdm
-
-with open("./results.jsonl", "r") as f:
- lines = list(f.readlines())
-
-eval_prompt = r"""Which response is better given this context:
-${context}
-
-Response A: ${response_a}
-
-Response B: ${response_b}.
-
-Pick your answer from ['Response A', 'Response B', 'both', 'neither']. Generate a short explanation for your choice first. Then, generate 'The better response is A' or 'The better response is B' or 'The better response is both' or 'The better response is neither'.
-
-Your response format should be:
-Explanation:
-Answer: ('The better response is A' or 'The better response is B' or 'The better response is both' or 'The better response is neither')
-"""
-
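-# res collects the per-example verdicts (A / B / both / neither); eval keeps the raw GPT-4 explanations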
-res = []
-eval = []
-
-
-def write_eval_to_file(file, skip=0):
- for idx, line in tqdm(enumerate(lines)):
- if idx < skip:
- continue
- data = json.loads(line)
- # print(idx + 1)
- context = data["input"]
- response_a = data["response"]
- response_b = data["label"]
-
- context_quote = "> " + "\n> ".join(context.split("\n"))
- response_a_quote = "> " + "\n> ".join(response_a.split("\n"))
- response_b_quote = "> " + "\n> ".join(response_b.split("\n"))
-
- f.write(f"## {idx + 1}\n\n")
- f.write(f"Context:\n" f"{context_quote}\n\n")
- f.write(f"Response A (pipeline):\n" f"{response_a_quote}\n\n")
- f.write(f"Response B (init):\n" f"{response_b_quote}\n\n")
-
- prompt = Template(eval_prompt).safe_substitute(
- context=context, response_a=response_a, response_b=response_b
- )
- for i in range(100):
- try:
- eval_response = openai.ChatCompletion.create(
- model="gpt-4",
- messages=[{"role": "user", "content": prompt}],
- temperature=0.0,
- )
- except:
- time.sleep(min(i**2, 60))
- continue
- break
- text = eval_response["choices"][0]["message"]["content"]
- eval.append(text)
- text = text.replace("\n", "\n\n")
- f.write(f"{text}\n\n")
-
- if "The better response is A" in text:
- res.append("A")
- elif "The better response is B" in text:
- res.append("B")
- elif "The better response is both" in text:
- res.append("both")
- elif "The better response is neither" in text:
- res.append("neither")
- else:
- res.append("unknown")
-
-
-if not os.path.exists("./eval.md"):
- with open("./eval.md", "w") as f:
- f.write("# ResponseGen Eval\n\n")
- write_eval_to_file(f)
- win_cnt = 0
- for r in res:
- if r == "A":
- win_cnt += 1
- print(f"win rate: {win_cnt / len(res)}")
-else:
- win_cnt = 0
- total_cnt = 0
- with open("./eval.md", "r") as f:
- for line in f:
- if line.startswith("Answer"):
- total_cnt += 1
- if "The better response is A" in line:
- res.append("A")
- elif "The better response is B" in line:
- res.append("B")
- elif "The better response is both" in line:
- res.append("both")
- elif "The better response is neither" in line:
- res.append("neither")
- else:
- res.append("unknown")
- with open("./eval.md", "a") as f:
- f.write("\n")
- write_eval_to_file(f, total_cnt)
- win_cnt = 0
- for r in res:
- if r == "A":
- win_cnt += 1
- print(f"win rate: {win_cnt / len(res)}")
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js
deleted file mode 100644
index 1d9008e99a0213b0546cd72b326dcf46015329a9..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/BroadcastEvent.js
+++ /dev/null
@@ -1,10 +0,0 @@
-var BroadcastEvent = function () {
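- // Re-emit the received event, with its original arguments, on this object and every descendant game object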
- var gameObjects = this.getAllChildren([this]);
- for (var i = 0, cnt = gameObjects.length; i < cnt; i++) {
- var gameObject = gameObjects[i];
- gameObject.emit.apply(gameObject, arguments);
- }
- return this;
-}
-
-export default BroadcastEvent;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js
deleted file mode 100644
index 092acbf6493220a5f13da93b8fd8f26190f09865..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateBadgeLabel.js
+++ /dev/null
@@ -1,26 +0,0 @@
-import MergeStyle from './utils/MergeStyle.js';
-import BadgeLabel from '../../badgelabel/BadgeLabel.js';
-import CreateChild from './utils/CreateChild.js';
-
-var CreateBadgeLabel = function (scene, data, view, styles, customBuilders) {
- data = MergeStyle(data, styles);
-
- // Replace data by child game object
- CreateChild(scene, data, 'background', view, styles, customBuilders);
- CreateChild(scene, data, 'main', view, styles, customBuilders);
- CreateChild(scene, data, 'leftTop', view, styles, customBuilders);
- CreateChild(scene, data, 'centerTop', view, styles, customBuilders);
- CreateChild(scene, data, 'rightTop', view, styles, customBuilders);
- CreateChild(scene, data, 'leftCenter', view, styles, customBuilders);
- CreateChild(scene, data, 'center', view, styles, customBuilders);
- CreateChild(scene, data, 'rightCenter', view, styles, customBuilders);
- CreateChild(scene, data, 'leftBottom', view, styles, customBuilders);
- CreateChild(scene, data, 'centerBottom', view, styles, customBuilders);
- CreateChild(scene, data, 'rightBottom', view, styles, customBuilders);
-
- var gameObject = new BadgeLabel(scene, data);
- scene.add.existing(gameObject);
- return gameObject;
-}
-
-export default CreateBadgeLabel;
\ No newline at end of file
diff --git a/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md b/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md
deleted file mode 100644
index 4c9c2fe33a8ac985d4a30423b5215d9ab81ec9e7..0000000000000000000000000000000000000000
--- a/spaces/AlanMars/QYL-AI-Space/readme/README_ja.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
-
diff --git a/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md b/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md
deleted file mode 100644
index 168cfbc5467ba70f079650a250e11efd76f31ba9..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Carx Drift Racing 2 Mod Apk Nueva Versin.md
+++ /dev/null
@@ -1,59 +0,0 @@
-
-
Descargar CarX Drift Racing 2 Mod APK Nueva versión: Una guía para los entusiastas de las carreras de coches
-
Si usted es un fan de los juegos de carreras de coches, especialmente a la deriva, entonces usted debe haber oído hablar de CarX Drift Racing 2. Es uno de los mejores juegos de deriva en Android que le permite experimentar la emoción de conducir coches rápidos en varias pistas y terrenos. Puedes personalizar tu coche, afinar tu motor y competir con otros jugadores online o offline.
-
descargar carx drift racing 2 mod apk nueva versión
Sin embargo, si desea disfrutar del juego al máximo, es posible que tenga que gastar algo de dinero real para desbloquear todos los coches, pistas y características que el juego tiene para ofrecer. Esto puede ser frustrante y caro para algunos jugadores que solo quieren divertirse sin romper el banco.
-
Es por eso que muchos jugadores buscan formas de descargar CarX Drift Racing 2 Mod APK, que es una versión modificada del juego que le da dinero ilimitado, oro y acceso a todos los coches y pistas de forma gratuita. Suena increíble, ¿verdad?
-
En este artículo, le mostraremos cómo descargar e instalar CarX Drift Racing 2 Mod APK nueva versión en su dispositivo Android. También le diremos acerca de las características, pros y contras de usar este mod apk. Así que, si estás listo para llevar tus habilidades de deriva al siguiente nivel, ¡sigue leyendo!
-
Características de CarX Drift Racing 2 Mod APK
-
CarX Drift Racing 2 Mod APK no es solo una versión regular del juego que se puede descargar desde la Google Play Store. Es una versión hackeada que ha sido modificada por algunos desarrolladores para darle recursos y características ilimitadas que normalmente tendría que pagar. Estas son algunas de las características de CarX Drift Racing 2 Mod APK:
-
-
Dinero ilimitado y oro: Obtendrá dinero y oro ilimitados en su cuenta, que puede utilizar para comprar cualquier coche, pista o actualización que desee. Usted no tiene que preocuparse de quedarse sin dinero en efectivo o ahorrar para su coche de ensueño.
-
-
Personaliza tu coche y afina tu motor: Puedes personalizar tu coche con diferentes colores de pintura, calcomanías, ruedas, spoilers y más. También puede ajustar su motor con diferentes partes y configuraciones para mejorar su rendimiento y manejo.
-
Disfrutar de la física realista y gráficos: CarX Drift Racing 2 Mod APK tiene física realista y gráficos que te hacen sentir como si estuvieras conduciendo un coche real. Puede ver el humo, el polvo, las chispas y las marcas de derrape a medida que se desplaza en las pistas. También puede ajustar el ángulo y la vista de la cámara para adaptarse a sus preferencias.
-
Compite con otros jugadores en línea o fuera de línea: Puedes jugar en línea con otros jugadores que tienen la misma versión apk mod que tú, o fuera de línea con oponentes AI. También puedes unirte a clubes, participar en torneos y posicionarte en las tablas de clasificación.
-
-
Cómo descargar e instalar CarX deriva Racing 2 Mod APK
-
Descargar e instalar CarX Drift Racing 2 Mod APK es muy fácil y simple. Solo tienes que seguir estos pasos:
-
-
-
Descargar el archivo mod apk de una fuente de confianza: Usted puede encontrar muchos sitios web que ofrecen CarX Drift Racing 2 Mod APK para su descarga gratuita. Sin embargo, no todos son seguros y confiables. Algunos pueden contener malware o virus que pueden dañar su dispositivo o robar sus datos. Siempre descargue desde un sitio de buena reputación y escanee el archivo antes de instalar.
-
Habilitar fuentes desconocidas en la configuración del dispositivo: Antes de instalar el archivo apk mod, es necesario habilitar fuentes desconocidas en la configuración del dispositivo. Esto le permitirá instalar aplicaciones que no son de Google Play Store. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo.
-
-
Disfruta del juego con recursos y características ilimitadas: Una vez que hayas lanzado el juego, verás que tienes dinero ilimitado, oro, coches, pistas y características. Puedes empezar a jugar de inmediato y disfrutar del juego sin limitaciones ni restricciones.
-
-
Pros y contras de CarX Drift Racing 2 Mod APK
-
CarX Drift Racing 2 Mod APK tiene muchas ventajas, pero también tiene algunas desventajas. Estos son algunos de ellos:
-
Pros
-
-
Gratis, fácil, divertido y adictivo: CarX Drift Racing 2 Mod APK es gratis para descargar e instalar, fácil de usar, divertido de jugar y adictivo para dominar. No tienes que gastar dinero o tiempo para disfrutar del juego completamente.
-
No hay anuncios o compras en la aplicación: CarX Drift Racing 2 Mod APK no tiene anuncios o compras en la aplicación que pueden interrumpir su juego o tentar a gastar más dinero. Puedes jugar sin distracciones ni presiones.
-
No se requiere raíz: CarX Drift Racing 2 Mod APK no requiere acceso de raíz para trabajar en su dispositivo. Esto significa que no tiene que arriesgarse a dañar su dispositivo o anular su garantía al enraizarlo.
-
Compatible con la mayoría de los dispositivos: CarX Drift Racing 2 Mod APK es compatible con la mayoría de los dispositivos Android que se ejecutan en Android 4.1 o superior. No requiere mucho espacio de almacenamiento o RAM para funcionar sin problemas.
-
-
Contras
-
-
Puede que no funcione en algunos dispositivos: CarX Drift Racing 2 Mod APK puede no funcionar en algunos dispositivos debido a problemas de compatibilidad o problemas técnicos. Es posible que tenga que comprobar la compatibilidad de su dispositivo antes de descargar e instalar el apk mod.
-
-
Puede violar los términos de servicio del juego: CarX Drift Racing 2 Mod APK puede violar los términos de servicio del juego, lo que puede resultar en la prohibición del juego o perder su cuenta. También puedes perder tu progreso, logros o recompensas que has ganado en el juego.
-
-
Conclusión
-
CarX Drift Racing 2 Mod APK es una gran opción para los entusiastas de las carreras de coches que quieren disfrutar del juego sin limitaciones ni costos. Te da dinero ilimitado, oro, coches, pistas y características que hacen el juego más divertido y emocionante. Puedes descargarlo e instalarlo fácilmente en tu dispositivo Android y empezar a derivar como un profesional.
-
Sin embargo, también debe ser consciente de los riesgos y desventajas de usar CarX Drift Racing 2 Mod APK, tales como problemas de compatibilidad, problemas de seguridad, y las violaciones de los términos de servicio. Siempre debe descargar de una fuente confiable y usarla a su propia discreción y responsabilidad.
-
Si usted está buscando una manera de descargar CarX Drift Racing 2 Mod APK nueva versión, siga los pasos de este artículo y disfrutar del juego. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Happy drifting!
-
Preguntas frecuentes
-
-
Q1: ¿Es CarX Drift Racing 2 Mod APK seguro de usar?
-
A1: Depende de la fuente del archivo apk mod. Algunos pueden contener malware o virus que pueden dañar su dispositivo o robar sus datos. Siempre descargue desde un sitio de buena reputación y escanee el archivo antes de instalar.
-
Q2: ¿Puedo jugar CarX Drift Racing 2 Mod APK en línea?
-
A2: Sí, puedes jugar online con otros jugadores que tienen la misma versión mod apk que tú. Sin embargo, es posible que no pueda acceder a algunas funciones o modos que requieren una versión oficial del juego.
-
Q3: ¿Me prohibirán por usar CarX Drift Racing 2 Mod APK?
-
-
Q4: ¿Cómo puedo actualizar CarX Drift Racing 2 Mod APK?
-
A4: Puede actualizar el apk mod mediante la descarga de la última versión de la misma fuente que lo obtuvo de. Asegúrese de hacer una copia de seguridad de sus datos antes de desinstalar la versión anterior e instalar la nueva.
-
Q5: ¿Cuáles son algunas alternativas a CarX Drift Racing 2 Mod APK?
-
A5: Algunas alternativas a CarX Drift Racing 2 Mod APK son Real Drift Car Racing, Torque Drift, Drift Max Pro, y FR Legends. Estos también son juegos de carreras de coches populares que ofrecen diferentes modos, características y desafíos para los amantes de la deriva.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts b/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts
deleted file mode 100644
index 4d187da21c37cbbe8efd722c09fee1815bd1c71f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/server/modelEndpoint.ts
+++ /dev/null
@@ -1,21 +0,0 @@
-import { MODEL_ENDPOINTS } from "$env/static/private";
-import { sum } from "$lib/utils/sum";
-
-const endpoints: Array<{ endpoint: string; authorization: string; weight: number }> =
- JSON.parse(MODEL_ENDPOINTS);
-const totalWeight = sum(endpoints.map((e) => e.weight));
-
-/**
- * Find a random load-balanced endpoint
- */
-export function modelEndpoint(): { endpoint: string; authorization: string; weight: number } {
- let random = Math.random() * totalWeight;
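- // Weighted random selection: walk the endpoints, subtracting each weight until the draw falls inside one of them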
- for (const endpoint of endpoints) {
- if (random < endpoint.weight) {
- return endpoint;
- }
- random -= endpoint.weight;
- }
-
- throw new Error("Invalid config, no endpoint found");
-}
diff --git a/spaces/BilalSardar/StoryGenerator/app.py b/spaces/BilalSardar/StoryGenerator/app.py
deleted file mode 100644
index 9cc2afc87db412b7ad2c39d8db2b4a6ee3242d72..0000000000000000000000000000000000000000
--- a/spaces/BilalSardar/StoryGenerator/app.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from transformers import pipeline
-import gradio as gr
-def story(StoryLength,StoryPrompt):
- model = pipeline("text-generation", model="e-tony/gpt2-rnm")
- summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
- # Generate once and summarize that same text, so the summary matches the story shown to the user
- generated = model(StoryPrompt, max_length=200, num_return_sequences=3)[2]["generated_text"]
- summary = summarizer(generated, max_length=int(StoryLength), min_length=30, do_sample=False)[0]["summary_text"]
- return generated, summary
-
-
-interface = gr.Interface(fn=story,
- inputs=["number","text"],
- outputs=[gr.outputs.Textbox(label='GPT2 Output'), gr.outputs.Textbox(label='Output summary')],
- title='Bilal\'s Story Generator')
-
-
-interface.launch(inline=False)
diff --git a/spaces/CHDCruze/entertainmentbybhdcruze/README.md b/spaces/CHDCruze/entertainmentbybhdcruze/README.md
deleted file mode 100644
index 63071aefb25a1e355da68960395353a6e5b97195..0000000000000000000000000000000000000000
--- a/spaces/CHDCruze/entertainmentbybhdcruze/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Entertainmentbybhdcruze
-emoji: 📉
-colorFrom: blue
-colorTo: red
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/thrust/thrust/async/transform.h b/spaces/CVPR/LIVE/thrust/thrust/async/transform.h
deleted file mode 100644
index 89687e93ad38ed03df4638b0b98f15b78c8826d7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/async/transform.h
+++ /dev/null
@@ -1,134 +0,0 @@
-/*
- * Copyright 2008-2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file async/transform.h
- * \brief Functions for asynchronously transforming a range.
- */
-
-#pragma once
-
-#include
-#include
-
-#if THRUST_CPP_DIALECT >= 2014
-
-#include
-#include
-#include
-#include
-
-#include
-
-namespace thrust
-{
-
-namespace async
-{
-
-namespace unimplemented
-{
-
-template <
- typename DerivedPolicy
-, typename ForwardIt, typename Sentinel, typename OutputIt
-, typename UnaryOperation
->
-__host__
-event<DerivedPolicy>
-async_transform(
- thrust::execution_policy<DerivedPolicy>& exec
-, ForwardIt first, Sentinel last, OutputIt output, UnaryOperation op
-)
-{
- THRUST_STATIC_ASSERT_MSG(
- (thrust::detail::depend_on_instantiation<ForwardIt, false>::value)
- , "this algorithm is not implemented for the specified system"
- );
- return {};
-}
-
-} // namespace unimplemented
-
-namespace transform_detail
-{
-
-using thrust::async::unimplemented::async_transform;
-
-struct transform_fn final
-{
- template <
- typename DerivedPolicy
- , typename ForwardIt, typename Sentinel, typename OutputIt
- , typename UnaryOperation
- >
- __host__
- static auto
- call(
- thrust::detail::execution_policy_base<DerivedPolicy> const& exec
- , ForwardIt&& first, Sentinel&& last
- , OutputIt&& output
- , UnaryOperation&& op
- )
- // ADL dispatch.
- THRUST_RETURNS(
- async_transform(
- thrust::detail::derived_cast(thrust::detail::strip_const(exec))
- , THRUST_FWD(first), THRUST_FWD(last)
- , THRUST_FWD(output)
- , THRUST_FWD(op)
- )
- )
-
- template <
- typename ForwardIt, typename Sentinel, typename OutputIt
- , typename UnaryOperation
- >
- __host__
- static auto call(
- ForwardIt&& first, Sentinel&& last
- , OutputIt&& output
- , UnaryOperation&& op
- )
- THRUST_RETURNS(
- transform_fn::call(
- thrust::detail::select_system(
- typename iterator_system<remove_cvref_t<ForwardIt>>::type{}
- , typename iterator_system<remove_cvref_t<OutputIt>>::type{}
- )
- , THRUST_FWD(first), THRUST_FWD(last)
- , THRUST_FWD(output)
- , THRUST_FWD(op)
- )
- )
-
- template <typename... Args>
- THRUST_NODISCARD __host__
- auto operator()(Args&&... args) const
- THRUST_RETURNS(
- call(THRUST_FWD(args)...)
- )
-};
-
-} // namespace transform_detail
-
-THRUST_INLINE_CONSTANT transform_detail::transform_fn transform{};
-
-} // namespace async
-
-} // end namespace thrust
-
-#endif
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h
deleted file mode 100644
index a3ee8188971687249b7052ef4f062f5adf972768..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/extrema.h
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file extrema.h
- * \brief Generic device implementations of extrema functions.
- */
-
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template <typename DerivedPolicy, typename ForwardIterator>
-__host__ __device__
-ForwardIterator max_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last);
-
-
-template <typename DerivedPolicy, typename ForwardIterator, typename BinaryPredicate>
-__host__ __device__
-ForwardIterator max_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- BinaryPredicate comp);
-
-
-template <typename DerivedPolicy, typename ForwardIterator>
-__host__ __device__
-ForwardIterator min_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last);
-
-
-template <typename DerivedPolicy, typename ForwardIterator, typename BinaryPredicate>
-__host__ __device__
-ForwardIterator min_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- BinaryPredicate comp);
-
-
-template <typename DerivedPolicy, typename ForwardIterator>
-__host__ __device__
-thrust::pair<ForwardIterator, ForwardIterator> minmax_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last);
-
-
-template <typename DerivedPolicy, typename ForwardIterator, typename BinaryPredicate>
-__host__ __device__
-thrust::pair<ForwardIterator, ForwardIterator> minmax_element(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- BinaryPredicate comp);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h
deleted file mode 100644
index 6a540d834939b928a4b6049c6a97d2289ab43257..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/memory_resource.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file omp/memory_resource.h
- * \brief Memory resources for the OMP system.
- */
-
-#pragma once
-
-#include
-#include
-#include
-
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace omp
-{
-
-//! \cond
-namespace detail
-{
- typedef thrust::mr::fancy_pointer_resource<
- thrust::mr::new_delete_resource,
- thrust::omp::pointer<void>
- > native_resource;
-}
-//! \endcond
-
-/*! \addtogroup memory_resources Memory Resources
- * \ingroup memory_management_classes
- * \{
- */
-
-/*! The memory resource for the OMP system. Uses \p mr::new_delete_resource and tags it with \p omp::pointer. */
-typedef detail::native_resource memory_resource;
-/*! An alias for \p omp::memory_resource. */
-typedef detail::native_resource universal_memory_resource;
-/*! An alias for \p omp::memory_resource. */
-typedef detail::native_resource universal_host_pinned_memory_resource;
-
-/*! \}
- */
-
-}
-}
-}
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py b/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py
deleted file mode 100644
index fdfa78efee14c97a4cd449869bf0c48eac159508..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAA/prompts/mtd_parameters.py
+++ /dev/null
@@ -1,11 +0,0 @@
-manual_prompts = {
- 'mtd': [
- ['black hole. blow hole. break. crack. fray. uneven.', 'mtd'],
- ['defect.', 'mtd'],
- ],
-
-}
-
-property_prompts = {
- 'ksdd2': 'the image of ksdd2 have 1 dissimilar ksdd2, with a maximum of 5 anomaly. The anomaly would not exceed 0.9 object area. ',
-}
diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py b/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py
deleted file mode 100644
index ba1c34bc8ce8c3c638b846f2da1da0ca27a52121..0000000000000000000000000000000000000000
--- a/spaces/ChrisPreston/diff-svc_minato_aqua/modules/hubert/cn_hubert.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import librosa
-import torch
-import torch.nn as nn
-
-
-def load_cn_model(ch_hubert_path):
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- from fairseq import checkpoint_utils
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [ch_hubert_path],
- suffix="",
- )
- model = models[0]
- model = model.to(device)
- model.eval()
- return model
-
-
-def get_cn_hubert_units(con_model, audio_path, dev):
- audio, sampling_rate = librosa.load(audio_path)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-
- feats = torch.from_numpy(audio).float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.to(dev),
- "padding_mask": padding_mask.to(dev),
- "output_layer": 9, # layer 9
- }
- with torch.no_grad():
- logits = con_model.extract_features(**inputs)
- feats = con_model.final_proj(logits[0])
- return feats
diff --git a/spaces/ClementBM/connectfour/models/__init__.py b/spaces/ClementBM/connectfour/models/__init__.py
deleted file mode 100644
index 1de75226a9c5aae71a9e9b306b29cb52118f6741..0000000000000000000000000000000000000000
--- a/spaces/ClementBM/connectfour/models/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from pathlib import Path
-
-MODEL_PATH = Path(__file__).parent.absolute() / "model.onnx"
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py
deleted file mode 100644
index f0cd09daaaae0adaa349f91139dc60c7ac79c028..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Gravityengine.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://gpt4.xunika.uk/'
-model = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
- 'temperature': temperature,
- 'presence_penalty': 0,
- 'messages': messages,
- }
- response = requests.post(url + '/api/openai/v1/chat/completions',
- headers=headers, json=data, stream=True)
-
- yield response.json()['choices'][0]['message']['content']
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
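The deleted provider above advertises supports_stream = True but reads the whole body with response.json(), so tokens are only delivered once the request finishes. A minimal streaming sketch, assuming the endpoint is OpenAI-compatible and emits server-sent events (data: ... lines ending with [DONE]); this is an illustration, not the original file's logic:

```python
import json
import requests

def stream_completion(base_url, model, messages, temperature=0.7):
    # Assumption: the backend speaks the OpenAI SSE protocol.
    response = requests.post(
        base_url + "/api/openai/v1/chat/completions",
        json={"model": model, "messages": messages,
              "temperature": temperature, "stream": True},
        stream=True,
    )
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload.strip() == b"[DONE]":
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```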
diff --git a/spaces/CofAI/chat/client/css/select.css b/spaces/CofAI/chat/client/css/select.css
deleted file mode 100644
index 7ec0159206439deca5c26f32fd92d2b1459f0273..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/client/css/select.css
+++ /dev/null
@@ -1,35 +0,0 @@
-select {
- -webkit-border-radius: 8px;
- -moz-border-radius: 8px;
- border-radius: 8px;
-
- -webkit-backdrop-filter: blur(20px);
- backdrop-filter: blur(20px);
-
- cursor: pointer;
- background-color: var(--blur-bg);
- border: 1px solid var(--blur-border);
- color: var(--colour-3);
- display: block;
- position: relative;
- overflow: hidden;
- outline: none;
- padding: 8px 16px;
-
- appearance: none;
-}
-
-/* scrollbar */
-select.dropdown::-webkit-scrollbar {
- width: 4px;
- padding: 8px 0px;
-}
-
-select.dropdown::-webkit-scrollbar-track {
- background-color: #ffffff00;
-}
-
-select.dropdown::-webkit-scrollbar-thumb {
- background-color: #555555;
- border-radius: 10px;
-}
diff --git a/spaces/CognitiveLabs/Research-Assistant/test/test4.py b/spaces/CognitiveLabs/Research-Assistant/test/test4.py
deleted file mode 100644
index d9f6c140d753757ba78d021bac903ee8b6726be9..0000000000000000000000000000000000000000
--- a/spaces/CognitiveLabs/Research-Assistant/test/test4.py
+++ /dev/null
@@ -1,6 +0,0 @@
-def test():
- yield 1
- return 2
-
-a, b = test()
-print(a, b)
\ No newline at end of file
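As written, the deleted test script above does not print `1 2`: unpacking the generator raises `ValueError` because only one value is yielded, and the `return 2` only surfaces as the value attached to `StopIteration`. A small sketch (purely illustrative, not from the repo) that retrieves both values:

```python
def test():
    yield 1
    return 2  # becomes StopIteration.value rather than a second yielded item

gen = test()
a = next(gen)              # 1, the yielded value
try:
    next(gen)
except StopIteration as stop:
    b = stop.value         # 2, the generator's return value
print(a, b)                # prints: 1 2
```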
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/models/diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py
deleted file mode 100644
index 80ba9d6d7b44f58773f42107d672c13651c166a9..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/designspaceLib/types.py
+++ /dev/null
@@ -1,147 +0,0 @@
-from __future__ import annotations
-
-from dataclasses import dataclass
-from typing import Dict, List, Optional, Union, cast
-
-from fontTools.designspaceLib import (
- AxisDescriptor,
- DesignSpaceDocument,
- DesignSpaceDocumentError,
- RangeAxisSubsetDescriptor,
- SimpleLocationDict,
- ValueAxisSubsetDescriptor,
- VariableFontDescriptor,
-)
-
-
-def clamp(value, minimum, maximum):
- return min(max(value, minimum), maximum)
-
-
-@dataclass
-class Range:
- minimum: float
- """Inclusive minimum of the range."""
- maximum: float
- """Inclusive maximum of the range."""
- default: float = 0
- """Default value"""
-
- def __post_init__(self):
- self.minimum, self.maximum = sorted((self.minimum, self.maximum))
- self.default = clamp(self.default, self.minimum, self.maximum)
-
- def __contains__(self, value: Union[float, Range]) -> bool:
- if isinstance(value, Range):
- return self.minimum <= value.minimum and value.maximum <= self.maximum
- return self.minimum <= value <= self.maximum
-
- def intersection(self, other: Range) -> Optional[Range]:
- if self.maximum < other.minimum or self.minimum > other.maximum:
- return None
- else:
- return Range(
- max(self.minimum, other.minimum),
- min(self.maximum, other.maximum),
- self.default, # We don't care about the default in this use-case
- )
-
-
-# A region selection is either a range or a single value, as a Designspace v5
-# axis-subset element only allows a single discrete value or a range for a
-# variable-font element.
-Region = Dict[str, Union[Range, float]]
-
-# A conditionset is a set of named ranges.
-ConditionSet = Dict[str, Range]
-
-# A rule is a list of conditionsets; the rule applies if any one of its conditionsets is satisfied.
-Rule = List[ConditionSet]
-Rules = Dict[str, Rule]
-
-
-def locationInRegion(location: SimpleLocationDict, region: Region) -> bool:
- for name, value in location.items():
- if name not in region:
- return False
- regionValue = region[name]
- if isinstance(regionValue, (float, int)):
- if value != regionValue:
- return False
- else:
- if value not in regionValue:
- return False
- return True
-
-
-def regionInRegion(region: Region, superRegion: Region) -> bool:
- for name, value in region.items():
- if not name in superRegion:
- return False
- superValue = superRegion[name]
- if isinstance(superValue, (float, int)):
- if value != superValue:
- return False
- else:
- if value not in superValue:
- return False
- return True
-
-
-def userRegionToDesignRegion(doc: DesignSpaceDocument, userRegion: Region) -> Region:
- designRegion = {}
- for name, value in userRegion.items():
- axis = doc.getAxis(name)
- if axis is None:
- raise DesignSpaceDocumentError(
- f"Cannot find axis named '{name}' for region."
- )
- if isinstance(value, (float, int)):
- designRegion[name] = axis.map_forward(value)
- else:
- designRegion[name] = Range(
- axis.map_forward(value.minimum),
- axis.map_forward(value.maximum),
- axis.map_forward(value.default),
- )
- return designRegion
-
-
-def getVFUserRegion(doc: DesignSpaceDocument, vf: VariableFontDescriptor) -> Region:
- vfUserRegion: Region = {}
- # For each axis, 2 cases:
- # - it has a range = it's an axis in the VF DS
- # - it's a single location = use it to know which rules should apply in the VF
- for axisSubset in vf.axisSubsets:
- axis = doc.getAxis(axisSubset.name)
- if axis is None:
- raise DesignSpaceDocumentError(
- f"Cannot find axis named '{axisSubset.name}' for variable font '{vf.name}'."
- )
- if hasattr(axisSubset, "userMinimum"):
- # Mypy doesn't support narrowing union types via hasattr()
- # TODO(Python 3.10): use TypeGuard
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- axisSubset = cast(RangeAxisSubsetDescriptor, axisSubset)
- if not hasattr(axis, "minimum"):
- raise DesignSpaceDocumentError(
- f"Cannot select a range over '{axis.name}' for variable font '{vf.name}' "
- "because it's a discrete axis, use only 'userValue' instead."
- )
- axis = cast(AxisDescriptor, axis)
- vfUserRegion[axis.name] = Range(
- max(axisSubset.userMinimum, axis.minimum),
- min(axisSubset.userMaximum, axis.maximum),
- axisSubset.userDefault or axis.default,
- )
- else:
- axisSubset = cast(ValueAxisSubsetDescriptor, axisSubset)
- vfUserRegion[axis.name] = axisSubset.userValue
- # Any axis not mentioned explicitly has a single location = default value
- for axis in doc.axes:
- if axis.name not in vfUserRegion:
- assert isinstance(
- axis.default, (int, float)
- ), f"Axis '{axis.name}' has no valid default value."
- vfUserRegion[axis.name] = axis.default
- return vfUserRegion
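A small illustration of the Range and locationInRegion helpers defined above; the axis names and values are made up:

```python
# Hypothetical region: a weight range plus a pinned italic axis.
region = {"wght": Range(300, 700, default=400), "ital": 0.0}

print(locationInRegion({"wght": 450, "ital": 0.0}, region))  # True: 450 lies in [300, 700]
print(locationInRegion({"wght": 900, "ital": 0.0}, region))  # False: 900 is outside the range
print(Range(100, 400) in Range(0, 1000))                     # True: sub-range containment
```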
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js
deleted file mode 100644
index f76a74f4bed7b360536d2e7258b9651c5ef9a93b..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3b0ff54c.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as ce,e as he,s as me,Z as ge,O as R,m as Ne,p as D,Q as K,z as y,u as O,v as B,y as Q,A as L,B as Re,k as T,o as q,x as H,a7 as Te,h as ee,F as V,G as Z,N as S,K as b,U as N,M as G,ar as qe,V as we,T as P,L as le,a1 as He,P as be,R as de,E as Ie,ae as Me,q as Fe,r as Pe}from"./index-1d65707a.js";import{B as Ue}from"./Button-f155035a.js";import{B as Ve}from"./BlockLabel-66866176.js";import{E as Ke}from"./Empty-eec13822.js";import{u as Oe,S as Qe}from"./ShareButton-8cd3d8f6.js";import{n as te}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import{M as Ze}from"./ModifyUpload-c89cfce3.js";import{I as ke}from"./Image-0fe369ad.js";import"./IconButton-d42f3661.js";const Je=async l=>l?`
-
-|Roboflow|ClearML ⭐ NEW|Comet ⭐ NEW|Deci ⭐ NEW|
-|:-:|:-:|:-:|:-:|
-|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics)|Automatically track, visualize and even remotely train YOLOv5 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!)|Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualise and debug predictions|Automatically compile and quantize YOLOv5 for better inference performance in one click at [Deci](https://bit.ly/yolov5-deci-platform)|
-
-
-## Ultralytics HUB
-
-[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv5 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now!
-
-
-
-
-
-##
- Demo for Gertrude Model Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
- """)
-
-demo.queue(concurrency_count=1)
-demo.launch()
diff --git a/spaces/Nightwing25/AICoverGen/src/rmvpe.py b/spaces/Nightwing25/AICoverGen/src/rmvpe.py
deleted file mode 100644
index 8d0d57297d4301e43a4fdcda216ae39c5e3b83b4..0000000000000000000000000000000000000000
--- a/spaces/Nightwing25/AICoverGen/src/rmvpe.py
+++ /dev/null
@@ -1,432 +0,0 @@
-import torch, numpy as np
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-
-class BiGRU(nn.Module):
- def __init__(self, input_features, hidden_features, num_layers):
- super(BiGRU, self).__init__()
- self.gru = nn.GRU(
- input_features,
- hidden_features,
- num_layers=num_layers,
- batch_first=True,
- bidirectional=True,
- )
-
- def forward(self, x):
- return self.gru(x)[0]
-
-
-class ConvBlockRes(nn.Module):
- def __init__(self, in_channels, out_channels, momentum=0.01):
- super(ConvBlockRes, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- nn.Conv2d(
- in_channels=out_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=(1, 1),
- padding=(1, 1),
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- if in_channels != out_channels:
- self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
- self.is_shortcut = True
- else:
- self.is_shortcut = False
-
- def forward(self, x):
- if self.is_shortcut:
- return self.conv(x) + self.shortcut(x)
- else:
- return self.conv(x) + x
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- in_channels,
- in_size,
- n_encoders,
- kernel_size,
- n_blocks,
- out_channels=16,
- momentum=0.01,
- ):
- super(Encoder, self).__init__()
- self.n_encoders = n_encoders
- self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
- self.layers = nn.ModuleList()
- self.latent_channels = []
- for i in range(self.n_encoders):
- self.layers.append(
- ResEncoderBlock(
- in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
- )
- )
- self.latent_channels.append([out_channels, in_size])
- in_channels = out_channels
- out_channels *= 2
- in_size //= 2
- self.out_size = in_size
- self.out_channel = out_channels
-
- def forward(self, x):
- concat_tensors = []
- x = self.bn(x)
- for i in range(self.n_encoders):
- _, x = self.layers[i](x)
- concat_tensors.append(_)
- return x, concat_tensors
-
-
-class ResEncoderBlock(nn.Module):
- def __init__(
- self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
- ):
- super(ResEncoderBlock, self).__init__()
- self.n_blocks = n_blocks
- self.conv = nn.ModuleList()
- self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
- self.kernel_size = kernel_size
- if self.kernel_size is not None:
- self.pool = nn.AvgPool2d(kernel_size=kernel_size)
-
- def forward(self, x):
- for i in range(self.n_blocks):
- x = self.conv[i](x)
- if self.kernel_size is not None:
- return x, self.pool(x)
- else:
- return x
-
-
-class Intermediate(nn.Module): #
- def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
- super(Intermediate, self).__init__()
- self.n_inters = n_inters
- self.layers = nn.ModuleList()
- self.layers.append(
- ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
- )
- for i in range(self.n_inters - 1):
- self.layers.append(
- ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
- )
-
- def forward(self, x):
- for i in range(self.n_inters):
- x = self.layers[i](x)
- return x
-
-
-class ResDecoderBlock(nn.Module):
- def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
- super(ResDecoderBlock, self).__init__()
- out_padding = (0, 1) if stride == (1, 2) else (1, 1)
- self.n_blocks = n_blocks
- self.conv1 = nn.Sequential(
- nn.ConvTranspose2d(
- in_channels=in_channels,
- out_channels=out_channels,
- kernel_size=(3, 3),
- stride=stride,
- padding=(1, 1),
- output_padding=out_padding,
- bias=False,
- ),
- nn.BatchNorm2d(out_channels, momentum=momentum),
- nn.ReLU(),
- )
- self.conv2 = nn.ModuleList()
- self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
- for i in range(n_blocks - 1):
- self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
-
- def forward(self, x, concat_tensor):
- x = self.conv1(x)
- x = torch.cat((x, concat_tensor), dim=1)
- for i in range(self.n_blocks):
- x = self.conv2[i](x)
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
- super(Decoder, self).__init__()
- self.layers = nn.ModuleList()
- self.n_decoders = n_decoders
- for i in range(self.n_decoders):
- out_channels = in_channels // 2
- self.layers.append(
- ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
- )
- in_channels = out_channels
-
- def forward(self, x, concat_tensors):
- for i in range(self.n_decoders):
- x = self.layers[i](x, concat_tensors[-1 - i])
- return x
-
-
-class DeepUnet(nn.Module):
- def __init__(
- self,
- kernel_size,
- n_blocks,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(DeepUnet, self).__init__()
- self.encoder = Encoder(
- in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
- )
- self.intermediate = Intermediate(
- self.encoder.out_channel // 2,
- self.encoder.out_channel,
- inter_layers,
- n_blocks,
- )
- self.decoder = Decoder(
- self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
- )
-
- def forward(self, x):
- x, concat_tensors = self.encoder(x)
- x = self.intermediate(x)
- x = self.decoder(x, concat_tensors)
- return x
-
-
-class E2E(nn.Module):
- def __init__(
- self,
- n_blocks,
- n_gru,
- kernel_size,
- en_de_layers=5,
- inter_layers=4,
- in_channels=1,
- en_out_channels=16,
- ):
- super(E2E, self).__init__()
- self.unet = DeepUnet(
- kernel_size,
- n_blocks,
- en_de_layers,
- inter_layers,
- in_channels,
- en_out_channels,
- )
- self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
- if n_gru:
- self.fc = nn.Sequential(
- BiGRU(3 * 128, 256, n_gru),
- nn.Linear(512, 360),
- nn.Dropout(0.25),
- nn.Sigmoid(),
- )
- else:
- self.fc = nn.Sequential(
-                nn.Linear(3 * 128, 360), nn.Dropout(0.25), nn.Sigmoid()  # 128 mel bins, 360 pitch classes (torch.nn has no N_MELS/N_CLASS)
- )
-
- def forward(self, mel):
- mel = mel.transpose(-1, -2).unsqueeze(1)
- x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
- x = self.fc(x)
- return x
-
-
-from librosa.filters import mel
-
-
-class MelSpectrogram(torch.nn.Module):
- def __init__(
- self,
- is_half,
- n_mel_channels,
- sampling_rate,
- win_length,
- hop_length,
- n_fft=None,
- mel_fmin=0,
- mel_fmax=None,
- clamp=1e-5,
- ):
- super().__init__()
- n_fft = win_length if n_fft is None else n_fft
- self.hann_window = {}
- mel_basis = mel(
- sr=sampling_rate,
- n_fft=n_fft,
- n_mels=n_mel_channels,
- fmin=mel_fmin,
- fmax=mel_fmax,
- htk=True,
- )
- mel_basis = torch.from_numpy(mel_basis).float()
- self.register_buffer("mel_basis", mel_basis)
- self.n_fft = win_length if n_fft is None else n_fft
- self.hop_length = hop_length
- self.win_length = win_length
- self.sampling_rate = sampling_rate
- self.n_mel_channels = n_mel_channels
- self.clamp = clamp
- self.is_half = is_half
-
- def forward(self, audio, keyshift=0, speed=1, center=True):
- factor = 2 ** (keyshift / 12)
- n_fft_new = int(np.round(self.n_fft * factor))
- win_length_new = int(np.round(self.win_length * factor))
- hop_length_new = int(np.round(self.hop_length * speed))
- keyshift_key = str(keyshift) + "_" + str(audio.device)
- if keyshift_key not in self.hann_window:
- self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
- audio.device
- )
- fft = torch.stft(
- audio,
- n_fft=n_fft_new,
- hop_length=hop_length_new,
- win_length=win_length_new,
- window=self.hann_window[keyshift_key],
- center=center,
- return_complex=True,
- )
- magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
- if keyshift != 0:
- size = self.n_fft // 2 + 1
- resize = magnitude.size(1)
- if resize < size:
- magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
- magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
- mel_output = torch.matmul(self.mel_basis, magnitude)
- if self.is_half == True:
- mel_output = mel_output.half()
- log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
- return log_mel_spec
-
-
-class RMVPE:
- def __init__(self, model_path, is_half, device=None):
- self.resample_kernel = {}
- model = E2E(4, 1, (2, 2))
- ckpt = torch.load(model_path, map_location="cpu")
- model.load_state_dict(ckpt)
- model.eval()
- if is_half == True:
- model = model.half()
- self.model = model
- self.resample_kernel = {}
- self.is_half = is_half
- if device is None:
- device = "cuda" if torch.cuda.is_available() else "cpu"
- self.device = device
- self.mel_extractor = MelSpectrogram(
- is_half, 128, 16000, 1024, 160, None, 30, 8000
- ).to(device)
- self.model = self.model.to(device)
- cents_mapping = 20 * np.arange(360) + 1997.3794084376191
- self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
-
- def mel2hidden(self, mel):
- with torch.no_grad():
- n_frames = mel.shape[-1]
- mel = F.pad(
- mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect"
- )
- hidden = self.model(mel)
- return hidden[:, :n_frames]
-
- def decode(self, hidden, thred=0.03):
- cents_pred = self.to_local_average_cents(hidden, thred=thred)
- f0 = 10 * (2 ** (cents_pred / 1200))
- f0[f0 == 10] = 0
- # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
- return f0
-
- def infer_from_audio(self, audio, thred=0.03):
- audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0)
- # torch.cuda.synchronize()
- # t0=ttime()
- mel = self.mel_extractor(audio, center=True)
- # torch.cuda.synchronize()
- # t1=ttime()
- hidden = self.mel2hidden(mel)
- # torch.cuda.synchronize()
- # t2=ttime()
- hidden = hidden.squeeze(0).cpu().numpy()
- if self.is_half == True:
- hidden = hidden.astype("float32")
- f0 = self.decode(hidden, thred=thred)
- # torch.cuda.synchronize()
- # t3=ttime()
- # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
- return f0
-
- def to_local_average_cents(self, salience, thred=0.05):
- # t0 = ttime()
- center = np.argmax(salience, axis=1) # frame length#index
- salience = np.pad(salience, ((0, 0), (4, 4))) # frame length,368
- # t1 = ttime()
- center += 4
- todo_salience = []
- todo_cents_mapping = []
- starts = center - 4
- ends = center + 5
- for idx in range(salience.shape[0]):
- todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
- todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
- # t2 = ttime()
- todo_salience = np.array(todo_salience) # frame length,9
- todo_cents_mapping = np.array(todo_cents_mapping) # frame length,9
- product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
- weight_sum = np.sum(todo_salience, 1) # frame length
- devided = product_sum / weight_sum # frame length
- # t3 = ttime()
- maxx = np.max(salience, axis=1) # frame length
- devided[maxx <= thred] = 0
- # t4 = ttime()
- # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
- return devided
-
-
-# if __name__ == '__main__':
-# audio, sampling_rate = sf.read("Quotations~1.wav") ### edit
-# if len(audio.shape) > 1:
-# audio = librosa.to_mono(audio.transpose(1, 0))
-# audio_bak = audio.copy()
-# if sampling_rate != 16000:
-# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt"
-# thred = 0.03 # 0.01
-# device = 'cuda' if torch.cuda.is_available() else 'cpu'
-# rmvpe = RMVPE(model_path,is_half=False, device=device)
-# t0=ttime()
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# f0 = rmvpe.infer_from_audio(audio, thred=thred)
-# t1=ttime()
-# print(f0.shape,t1-t0)
diff --git a/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py b/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py
deleted file mode 100644
index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/ONNXVITS_utils.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import torch
-import numpy as np
-import random
-import onnxruntime as ort
-def set_random_seed(seed=0):
- ort.set_seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed(seed)
- torch.backends.cudnn.deterministic = True
- random.seed(seed)
- np.random.seed(seed)
-
-def runonnx(model_path, **kwargs):
- ort_session = ort.InferenceSession(model_path)
- outputs = ort_session.run(
- None,
- kwargs
- )
- return outputs
\ No newline at end of file
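A usage sketch for the helpers above; the model path, input name, and shape are placeholders that depend on the exported ONNX graph:

```python
import numpy as np

set_random_seed(0)
outputs = runonnx(
    "model.onnx",                                  # placeholder path
    x=np.zeros((1, 80, 100), dtype=np.float32),    # placeholder input name and shape
)
print([o.shape for o in outputs])
```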
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md
deleted file mode 100644
index b4f2874652f8be19998a65faa1d9276d8017ec59..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wmt20/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# WMT 20
-
-This page provides pointers to the models of Facebook-FAIR's WMT'20 news translation task submission [(Chen et al., 2020)](https://arxiv.org/abs/2011.08298).
-
-## Single best MT models (after finetuning on part of WMT20 news dev set)
-
-Model | Description | Download
----|---|---
-`transformer.wmt20.ta-en` | Ta->En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz)
-`transformer.wmt20.en-ta` | En->Ta | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz)
-`transformer.wmt20.iu-en.news` | Iu->En (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz)
-`transformer.wmt20.en-iu.news` | En->Iu (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz)
-`transformer.wmt20.iu-en.nh` | Iu->En (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz)
-`transformer.wmt20.en-iu.nh` | En->Iu (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz)
-
-## Language models
-Model | Description | Download
----|---|---
-`transformer_lm.wmt20.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en.tar.gz)
-`transformer_lm.wmt20.ta` | Ta Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta.tar.gz)
-`transformer_lm.wmt20.iu.news` | Iu Language Model (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.news.tar.gz)
-`transformer_lm.wmt20.iu.nh` | Iu Language Model (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.nh.tar.gz)
-
-## Example usage (torch.hub)
-
-#### Translation
-
-```python
-import torch
-
-# English to Tamil translation
-en2ta = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-ta')
-en2ta.translate("Machine learning is great!") # 'இயந்திரக் கற்றல் அருமை!'
-
-# Tamil to English translation
-ta2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.ta-en')
-ta2en.translate("இயந்திரக் கற்றல் அருமை!") # 'Machine learning is great!'
-
-# English to Inuktitut translation
-en2iu = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-iu.news')
-en2iu.translate("machine learning is great!") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!'
-
-# Inuktitut to English translation
-iu2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.iu-en.news')
-iu2en.translate("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!") # 'Machine learning excellence!'
-```
-
-#### Language Modeling
-
-```python
-# Sample from the English LM
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.en')
-en_lm.sample("Machine learning is") # 'Machine learning is a type of artificial intelligence that uses machine learning to learn from data and make predictions.'
-
-# Sample from the Tamil LM
-ta_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.ta')
-ta_lm.sample("இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின்") # 'இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின் ஒரு பகுதியாகும்.'
-
-# Sample from the Inuktitut LM
-iu_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.iu.news')
-iu_lm.sample("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ, ᐊᒻᒪᓗ ᓯᓚᐅᑉ ᐊᓯᙳᖅᐸᓪᓕᐊᓂᖓᓄᑦ ᖃᓄᐃᓕᐅᕈᑎᒃᓴᑦ, ᐃᓚᖃᖅᖢᑎᒃ ᐅᑯᓂᖓ:'
-```
-
-## Citation
-```bibtex
-@inproceedings{chen2020facebook,
- title={Facebook AI's WMT20 News Translation Task Submission},
- author={Peng-Jen Chen and Ann Lee and Changhan Wang and Naman Goyal and Angela Fan and Mary Williamson and Jiatao Gu},
- booktitle={Proc. of WMT},
- year={2020},
-}
-```
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py
deleted file mode 100644
index 79195903e0f34372a24fa50312a6e00170c14471..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_utils.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-
-import torch
-from fairseq import utils
-
-
-class TestUtils(unittest.TestCase):
- def test_convert_padding_direction(self):
- pad = 1
- left_pad = torch.LongTensor(
- [
- [2, 3, 4, 5, 6],
- [1, 7, 8, 9, 10],
- [1, 1, 1, 11, 12],
- ]
- )
- right_pad = torch.LongTensor(
- [
- [2, 3, 4, 5, 6],
- [7, 8, 9, 10, 1],
- [11, 12, 1, 1, 1],
- ]
- )
-
- self.assertAlmostEqual(
- right_pad,
- utils.convert_padding_direction(
- left_pad,
- pad,
- left_to_right=True,
- ),
- )
- self.assertAlmostEqual(
- left_pad,
- utils.convert_padding_direction(
- right_pad,
- pad,
- right_to_left=True,
- ),
- )
-
- def test_make_positions(self):
- pad = 1
- left_pad_input = torch.LongTensor(
- [
- [9, 9, 9, 9, 9],
- [1, 9, 9, 9, 9],
- [1, 1, 1, 9, 9],
- ]
- )
- left_pad_output = torch.LongTensor(
- [
- [2, 3, 4, 5, 6],
- [1, 2, 3, 4, 5],
- [1, 1, 1, 2, 3],
- ]
- )
- right_pad_input = torch.LongTensor(
- [
- [9, 9, 9, 9, 9],
- [9, 9, 9, 9, 1],
- [9, 9, 1, 1, 1],
- ]
- )
- right_pad_output = torch.LongTensor(
- [
- [2, 3, 4, 5, 6],
- [2, 3, 4, 5, 1],
- [2, 3, 1, 1, 1],
- ]
- )
-
- self.assertAlmostEqual(
- left_pad_output,
- utils.make_positions(left_pad_input, pad),
- )
- self.assertAlmostEqual(
- right_pad_output,
- utils.make_positions(right_pad_input, pad),
- )
-
- def test_clip_grad_norm_(self):
- params = torch.nn.Parameter(torch.zeros(5)).requires_grad_(False)
- grad_norm = utils.clip_grad_norm_(params, 1.0)
- self.assertTrue(torch.is_tensor(grad_norm))
- self.assertEqual(grad_norm, 0.0)
-
- params = [torch.nn.Parameter(torch.zeros(5)) for i in range(3)]
- for p in params:
- p.grad = torch.full((5,), fill_value=2.0)
- grad_norm = utils.clip_grad_norm_(params, 1.0)
- exp_grad_norm = torch.full((15,), fill_value=2.0).norm()
- self.assertTrue(torch.is_tensor(grad_norm))
- self.assertEqual(grad_norm, exp_grad_norm)
-
- grad_norm = utils.clip_grad_norm_(params, 1.0)
- self.assertAlmostEqual(grad_norm, torch.tensor(1.0))
-
- def test_resolve_max_positions_with_tuple(self):
- resolved = utils.resolve_max_positions(None, (2000, 100, 2000), 12000)
- self.assertEqual(resolved, (2000, 100, 2000))
-
- def assertAlmostEqual(self, t1, t2):
- self.assertEqual(t1.size(), t2.size(), "size mismatch")
- self.assertLess(utils.item((t1 - t2).abs().max()), 1e-4)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md
deleted file mode 100644
index 86518c9d5ef09fbd4fed1512a52e9431b74f08fa..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/truncated_bptt/README.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Truncated Backpropagation Through Time (BPTT)
-
-Truncated BPTT is a useful technique for training language models on very long
-sequences. Typically a long sequence is split into chunks and a language model
-is trained over the chunks sequentially. The LM may condition on previous
-chunks, but gradients only flow through the current chunk. This technique was
-the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a
-Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved
-state-of-the-art language modeling results at the time of publication.
-
-It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since
-we need to iterate over the data sequentially and disable any batch shuffling
-logic. The code provided in this example illustrates how to implement Truncated
-BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate
-over the data sequentially. Crucially, this example supports batching and
-multi-GPU (data parallel) training.
-
-##### 0. Setup
-
-First, see the general [language modeling README](README.md) for instructions on
-preprocessing the WikiText-103 data.
-
-##### 1. Train a Transformer-XL model on WikiText-103
-
-We will train a 16-layer Transformer-XL model following the [hyperparameters
-used in the original
-paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh).
-
-The following command assumes 4 GPUs, so that the total batch size is 60
-sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs:
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \
- --user-dir examples/truncated_bptt \
- data-bin/wikitext-103/ \
- --task truncated_bptt_lm --tokens-per-sample 150 \
- --batch-size 15 --max-update 200000 \
- --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \
- --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \
- --optimizer adam --clip-norm 0.25 \
- --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \
- --log-format json --log-interval 25 \
- --fp16
-```
-
-If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients
-and simulate training on 4 GPUs.
-
-##### 2. Evaluate
-
-```bash
-fairseq-eval-lm data-bin/wikitext-103/ \
- --path checkpoints/checkpoint_best.pt \
- --user-dir examples/truncated_bptt/ \
- --task truncated_bptt_lm \
- --batch-size 1 --required-batch-size-multiple 1 \
- --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \
- --tokens-per-sample 64
-# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537
-# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s)
-# ... | INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70
-# Compare to 24.0 test perplexity from the paper
-```
-
-*Note:* During training the model saw 150 tokens of context
-(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``).
-During evaluation we measure perplexity on sequences of 64 tokens
-(``--tokens-per-sample=64``) and increase the memory length
-(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation
-settings from [the original
-paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh).
diff --git a/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py b/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py
deleted file mode 100644
index 7a53b3dc32d2853bb84837368e32bebf40ec3b60..0000000000000000000000000000000000000000
--- a/spaces/Open-Orca/Mistral-7B-OpenOrca/app.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import os
-import re
-import logging
-import gradio as gr
-import openai
-
-print(os.environ)
-openai.api_base = os.environ.get("OPENAI_API_BASE")
-openai.api_key = os.environ.get("OPENAI_API_KEY")
-
-BASE_SYSTEM_MESSAGE = """I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning.
-I am an assistant who thinks through their answers step-by-step to be sure I always get the right answer.
-I think more clearly if I write out my thought process in a scratchpad manner first; therefore, I always explain background context, assumptions, and step-by-step thinking BEFORE trying to answer or solve anything."""
-
-def make_prediction(prompt, max_tokens=None, temperature=None, top_p=None, top_k=None, repetition_penalty=None):
- completion = openai.Completion.create(model="Open-Orca/Mistral-7B-OpenOrca", prompt=prompt, max_tokens=max_tokens, temperature=temperature, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, stream=True, stop=["", "<|im_end|>"])
- for chunk in completion:
- yield chunk["choices"][0]["text"]
-
-
-def clear_chat(chat_history_state, chat_message):
- chat_history_state = []
- chat_message = ''
- return chat_history_state, chat_message
-
-
-def user(message, history):
- history = history or []
- # Append the user's message to the conversation history
- history.append([message, ""])
- return "", history
-
-
-def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty):
- history = history or []
-
- if system_message.strip():
- messages = "<|im_start|> "+"system\n" + system_message.strip() + "<|im_end|>\n" + \
- "\n".join(["\n".join(["<|im_start|> "+"user\n"+item[0]+"<|im_end|>", "<|im_start|> assistant\n"+item[1]+"<|im_end|>"])
- for item in history])
- else:
- messages = "<|im_start|> "+"system\n" + BASE_SYSTEM_MESSAGE + "<|im_end|>\n" + \
- "\n".join(["\n".join(["<|im_start|> "+"user\n"+item[0]+"<|im_end|>", "<|im_start|> assistant\n"+item[1]+"<|im_end|>"])
- for item in history])
-    # drop the trailing <|im_end|> so the model continues the last (empty) assistant turn
-    if messages.endswith("<|im_end|>"):
-        messages = messages[: -len("<|im_end|>")]
- # remove last space from assistant, some models output a ZWSP if you leave a space
- messages = messages.rstrip()
-
- # If temperature is set to 0, force Top P to 1 and Top K to -1
- if temperature == 0:
- top_p = 1
- top_k = -1
-
- prediction = make_prediction(
- messages,
- max_tokens=max_tokens,
- temperature=temperature,
- top_p=top_p,
- top_k=top_k,
- repetition_penalty=repetition_penalty,
- )
- for tokens in prediction:
- tokens = re.findall(r'(.*?)(\s|$)', tokens)
- for subtoken in tokens:
- subtoken = "".join(subtoken)
- answer = subtoken
- history[-1][1] += answer
- # stream the response
- yield history, history, ""
-
-
-start_message = ""
-
-CSS ="""
-.contain { display: flex; flex-direction: column; }
-.gradio-container { height: 100vh !important; }
-#component-0 { height: 100%; }
-#chatbot { flex-grow: 1; overflow: auto; resize: vertical; }
-"""
-
-#with gr.Blocks() as demo:
-with gr.Blocks(css=CSS) as demo:
- with gr.Row():
- with gr.Column():
- gr.Markdown(f"""
- ## This demo is an unquantized GPU chatbot of [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- Brought to you by your friends at Alignment Lab AI, OpenChat, and Open Access AI Collective!
- """)
- with gr.Row():
- gr.Markdown("# 🐋 Mistral-7B-OpenOrca Playground Space! 🐋")
- with gr.Row():
- #chatbot = gr.Chatbot().style(height=500)
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- message = gr.Textbox(
- label="What do you want to chat about?",
- placeholder="Ask me anything.",
- lines=3,
- )
- with gr.Row():
- submit = gr.Button(value="Send message", variant="secondary").style(full_width=True)
- clear = gr.Button(value="New topic", variant="secondary").style(full_width=False)
- stop = gr.Button(value="Stop", variant="secondary").style(full_width=False)
- with gr.Accordion("Show Model Parameters", open=False):
- with gr.Row():
- with gr.Column():
- max_tokens = gr.Slider(20, 2500, label="Max Tokens", step=20, value=500)
- temperature = gr.Slider(0.0, 2.0, label="Temperature", step=0.1, value=0.4)
- top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95)
- top_k = gr.Slider(1, 100, label="Top K", step=1, value=40)
- repetition_penalty = gr.Slider(1.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1)
-
- system_msg = gr.Textbox(
- start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt. Provide instructions which you want the model to remember.", lines=5)
-
- chat_history_state = gr.State()
- clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False)
- clear.click(lambda: None, None, chatbot, queue=False)
-
- submit_click_event = submit.click(
- fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True
- ).then(
- fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True
- )
- stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False)
-
-demo.queue(max_size=128, concurrency_count=48).launch(debug=True, server_name="0.0.0.0", server_port=7860)
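For reference, the prompt string assembled by chat() above for a single user turn with the default system message looks roughly like this (including the space the code inserts after each <|im_start|>), leaving the assistant turn open for the model to complete:

```
<|im_start|> system
I carefully provide accurate, factual, thoughtful, nuanced answers ...<|im_end|>
<|im_start|> user
Hello, who are you?<|im_end|>
<|im_start|> assistant
```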
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py
deleted file mode 100644
index fea81f7dd8ad2c27dac8438753b845ab64cef81e..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/data/datasets/grit_coco.py
+++ /dev/null
@@ -1,112 +0,0 @@
-import logging
-import os
-from fvcore.common.timer import Timer
-from detectron2.structures import BoxMode
-from fvcore.common.file_io import PathManager
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from lvis import LVIS
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["load_GRiTcoco_json", "register_GRiTcoco_instances"]
-
-
-def register_GRiTcoco_instances(name, metadata, json_file, image_root):
- """
- """
- DatasetCatalog.register(name, lambda: load_GRiTcoco_json(
- json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root,
- evaluator_type="coco", **metadata
- )
-
-
-def get_GRiTcoco_meta():
- categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}]
- categories = sorted(categories, key=lambda x: x["id"])
- thing_classes = [k["name"] for k in categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-def load_GRiTcoco_json(json_file, image_root, dataset_name=None):
-    '''
-    Load a COCO-format annotation json and use the COCO class names as the
-    object descriptions expected by GRiT.
-    '''
-
- json_file = PathManager.get_local_path(json_file)
-
- timer = Timer()
- lvis_api = LVIS(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(
- json_file, timer.seconds()))
-
- class_names = {}
- sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id'])
- for x in sort_cat:
- class_names[x['id']] = x['name']
-
- img_ids = sorted(lvis_api.imgs.keys())
- imgs = lvis_api.load_imgs(img_ids)
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]
-
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), \
- "Annotation ids in '{}' are not unique".format(json_file)
-
- imgs_anns = list(zip(imgs, anns))
- logger.info("Loaded {} images in the LVIS v1 format from {}".format(
- len(imgs_anns), json_file))
-
- dataset_dicts = []
-
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- if "file_name" in img_dict:
- file_name = img_dict["file_name"]
- record["file_name"] = os.path.join(image_root, file_name)
-
- record["height"] = int(img_dict["height"])
- record["width"] = int(img_dict["width"])
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- assert anno["image_id"] == image_id
- if anno.get('iscrowd', 0) > 0:
- continue
- obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
- obj["category_id"] = 0
- obj["object_description"] = class_names[anno['category_id']]
- if 'segmentation' in anno:
- segm = anno["segmentation"]
- valid_segm = [poly for poly in segm \
- if len(poly) % 2 == 0 and len(poly) >= 6]
- if not len(segm) == len(valid_segm):
- print('Annotation contains an invalid polygon with < 3 points')
- assert len(segm) > 0
- obj["segmentation"] = segm
- objs.append(obj)
- record["annotations"] = objs
- if len(record["annotations"]) == 0:
- continue
- record["task"] = "ObjectDet"
- dataset_dicts.append(record)
-
- return dataset_dicts
-
-
-_CUSTOM_SPLITS_LVIS = {
- "GRiT_coco2017_train": ("coco/train2017/", "coco/annotations/instances_train2017.json"),
-}
-
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items():
- register_GRiTcoco_instances(
- key,
- get_GRiTcoco_meta(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py
deleted file mode 100644
index 75559579cf053abcc99538606cbb88c723faf783..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/modulated_deform_conv.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair, _single
-
-from annotator.uniformer.mmcv.utils import deprecated_api_warning
-from ..cnn import CONV_LAYERS
-from ..utils import ext_loader, print_log
-
-ext_module = ext_loader.load_ext(
- '_ext',
- ['modulated_deform_conv_forward', 'modulated_deform_conv_backward'])
-
-
-class ModulatedDeformConv2dFunction(Function):
-
- @staticmethod
- def symbolic(g, input, offset, mask, weight, bias, stride, padding,
- dilation, groups, deform_groups):
- input_tensors = [input, offset, mask, weight]
- if bias is not None:
- input_tensors.append(bias)
- return g.op(
- 'mmcv::MMCVModulatedDeformConv2d',
- *input_tensors,
- stride_i=stride,
- padding_i=padding,
- dilation_i=dilation,
- groups_i=groups,
- deform_groups_i=deform_groups)
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- mask,
- weight,
- bias=None,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deform_groups=1):
- if input is not None and input.dim() != 4:
- raise ValueError(
- f'Expected 4D tensor as input, got {input.dim()}D tensor \
- instead.')
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deform_groups = deform_groups
- ctx.with_bias = bias is not None
- if not ctx.with_bias:
- bias = input.new_empty(0) # fake tensor
- # When pytorch version >= 1.6.0, amp is adopted for fp16 mode;
- # amp won't cast the type of model (float32), but "offset" is cast
- # to float16 by nn.Conv2d automatically, leading to the type
- # mismatch with input (when it is float32) or weight.
- # The flag for whether to use fp16 or amp is the type of "offset",
- # we cast weight and input to temporarily support fp16 and amp
- # whatever the pytorch version is.
- input = input.type_as(offset)
- weight = weight.type_as(input)
- ctx.save_for_backward(input, offset, mask, weight, bias)
- output = input.new_empty(
- ModulatedDeformConv2dFunction._output_size(ctx, input, weight))
- ctx._bufs = [input.new_empty(0), input.new_empty(0)]
- ext_module.modulated_deform_conv_forward(
- input,
- weight,
- bias,
- ctx._bufs[0],
- offset,
- mask,
- output,
- ctx._bufs[1],
- kernel_h=weight.size(2),
- kernel_w=weight.size(3),
- stride_h=ctx.stride[0],
- stride_w=ctx.stride[1],
- pad_h=ctx.padding[0],
- pad_w=ctx.padding[1],
- dilation_h=ctx.dilation[0],
- dilation_w=ctx.dilation[1],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- with_bias=ctx.with_bias)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, mask, weight, bias = ctx.saved_tensors
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- grad_mask = torch.zeros_like(mask)
- grad_weight = torch.zeros_like(weight)
- grad_bias = torch.zeros_like(bias)
- grad_output = grad_output.contiguous()
- ext_module.modulated_deform_conv_backward(
- input,
- weight,
- bias,
- ctx._bufs[0],
- offset,
- mask,
- ctx._bufs[1],
- grad_input,
- grad_weight,
- grad_bias,
- grad_offset,
- grad_mask,
- grad_output,
- kernel_h=weight.size(2),
- kernel_w=weight.size(3),
- stride_h=ctx.stride[0],
- stride_w=ctx.stride[1],
- pad_h=ctx.padding[0],
- pad_w=ctx.padding[1],
- dilation_h=ctx.dilation[0],
- dilation_w=ctx.dilation[1],
- group=ctx.groups,
- deformable_group=ctx.deform_groups,
- with_bias=ctx.with_bias)
- if not ctx.with_bias:
- grad_bias = None
-
- return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias,
- None, None, None, None, None)
-
- @staticmethod
- def _output_size(ctx, input, weight):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = ctx.padding[d]
- kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = ctx.stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(
- 'convolution input is too small (output would be ' +
- 'x'.join(map(str, output_size)) + ')')
- return output_size
-
-
-modulated_deform_conv2d = ModulatedDeformConv2dFunction.apply
-
-
-class ModulatedDeformConv2d(nn.Module):
-
- @deprecated_api_warning({'deformable_groups': 'deform_groups'},
- cls_name='ModulatedDeformConv2d')
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deform_groups=1,
- bias=True):
- super(ModulatedDeformConv2d, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deform_groups = deform_groups
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- self.weight = nn.Parameter(
- torch.Tensor(out_channels, in_channels // groups,
- *self.kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.register_parameter('bias', None)
- self.init_weights()
-
- def init_weights(self):
- n = self.in_channels
- for k in self.kernel_size:
- n *= k
- stdv = 1. / math.sqrt(n)
- self.weight.data.uniform_(-stdv, stdv)
- if self.bias is not None:
- self.bias.data.zero_()
-
- def forward(self, x, offset, mask):
- return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias,
- self.stride, self.padding,
- self.dilation, self.groups,
- self.deform_groups)
-
-
-@CONV_LAYERS.register_module('DCNv2')
-class ModulatedDeformConv2dPack(ModulatedDeformConv2d):
- """A ModulatedDeformable Conv Encapsulation that acts as normal Conv
- layers.
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int): Same as nn.Conv2d, while tuple is not supported.
- padding (int): Same as nn.Conv2d, while tuple is not supported.
- dilation (int): Same as nn.Conv2d, while tuple is not supported.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(ModulatedDeformConv2dPack, self).__init__(*args, **kwargs)
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deform_groups * 3 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=self.stride,
- padding=self.padding,
- dilation=self.dilation,
- bias=True)
- self.init_weights()
-
- def init_weights(self):
- super(ModulatedDeformConv2dPack, self).init_weights()
- if hasattr(self, 'conv_offset'):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- out = self.conv_offset(x)
- o1, o2, mask = torch.chunk(out, 3, dim=1)
- offset = torch.cat((o1, o2), dim=1)
- mask = torch.sigmoid(mask)
- return modulated_deform_conv2d(x, offset, mask, self.weight, self.bias,
- self.stride, self.padding,
- self.dilation, self.groups,
- self.deform_groups)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- version = local_metadata.get('version', None)
-
- if version is None or version < 2:
- # the key is different in early versions
- # In version < 2, ModulatedDeformConvPack
- # loads previous benchmark models.
- if (prefix + 'conv_offset.weight' not in state_dict
- and prefix[:-1] + '_offset.weight' in state_dict):
- state_dict[prefix + 'conv_offset.weight'] = state_dict.pop(
- prefix[:-1] + '_offset.weight')
- if (prefix + 'conv_offset.bias' not in state_dict
- and prefix[:-1] + '_offset.bias' in state_dict):
- state_dict[prefix +
- 'conv_offset.bias'] = state_dict.pop(prefix[:-1] +
- '_offset.bias')
-
- if version is not None and version > 1:
- print_log(
- f'ModulatedDeformConvPack {prefix.rstrip(".")} is upgraded to '
- 'version 2.',
- logger='root')
-
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys, unexpected_keys,
- error_msgs)
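A minimal smoke-test sketch for the DCNv2 wrapper above; it assumes the compiled mmcv `_ext` extension is importable (usually with CUDA), and the sizes are arbitrary:

```python
import torch

# conv_offset is created inside the Pack variant, so only the image tensor is needed.
dcn = ModulatedDeformConv2dPack(16, 32, kernel_size=3, padding=1, deform_groups=1)
x = torch.randn(2, 16, 64, 64)
y = dcn(x)
print(y.shape)  # torch.Size([2, 32, 64, 64]) with stride 1 and padding 1
```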
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go
deleted file mode 100644
index 02692c462779ec40201c14d6ef3b2e2b8b0ae986..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/gap-buffer.go and /dev/null differ
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py
deleted file mode 100644
index d72e2f0265e1016e7bbac67590075fda2bc28a55..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import itertools
-
-import torch
-from torch.utils.data.sampler import BatchSampler
-from torch.utils.data.sampler import Sampler
-
-
-class GroupedBatchSampler(BatchSampler):
- """
- Wraps another sampler to yield a mini-batch of indices.
- It enforces that elements from the same group should appear in groups of batch_size.
-    It also tries to provide mini-batches that follow an ordering as close as
-    possible to the ordering from the original sampler.
-
- Arguments:
- sampler (Sampler): Base sampler.
- batch_size (int): Size of mini-batch.
- drop_uneven (bool): If ``True``, the sampler will drop the batches whose
- size is less than ``batch_size``
-
- """
-
- def __init__(self, sampler, group_ids, batch_size, drop_uneven=False):
- if not isinstance(sampler, Sampler):
- raise ValueError(
- "sampler should be an instance of "
- "torch.utils.data.Sampler, but got sampler={}".format(sampler)
- )
- self.sampler = sampler
- self.group_ids = torch.as_tensor(group_ids)
- assert self.group_ids.dim() == 1
- self.batch_size = batch_size
- self.drop_uneven = drop_uneven
-
- self.groups = torch.unique(self.group_ids).sort(0)[0]
-
- self._can_reuse_batches = False
-
- def _prepare_batches(self):
- dataset_size = len(self.group_ids)
- # get the sampled indices from the sampler
- sampled_ids = torch.as_tensor(list(self.sampler))
- # potentially not all elements of the dataset were sampled
- # by the sampler (e.g., DistributedSampler).
- # construct a tensor which contains -1 if the element was
- # not sampled, and a non-negative number indicating the
- # order where the element was sampled.
- # for example. if sampled_ids = [3, 1] and dataset_size = 5,
- # the order is [-1, 1, -1, 0, -1]
- order = torch.full((dataset_size,), -1, dtype=torch.int64)
- order[sampled_ids] = torch.arange(len(sampled_ids))
-
- # get a mask with the elements that were sampled
- mask = order >= 0
-
- # find the elements that belong to each individual cluster
- clusters = [(self.group_ids == i) & mask for i in self.groups]
- # get relative order of the elements inside each cluster
- # that follows the order from the sampler
- relative_order = [order[cluster] for cluster in clusters]
- # with the relative order, find the absolute order in the
- # sampled space
- permutation_ids = [s[s.sort()[1]] for s in relative_order]
- # permute each cluster so that they follow the order from
- # the sampler
- permuted_clusters = [sampled_ids[idx] for idx in permutation_ids]
-
- # splits each cluster in batch_size, and merge as a list of tensors
- splits = [c.split(self.batch_size) for c in permuted_clusters]
- merged = tuple(itertools.chain.from_iterable(splits))
-
- # now each batch internally has the right order, but
- # they are grouped by clusters. Find the permutation between
- # different batches that brings them as close as possible to
- # the order that we have in the sampler. For that, we will consider the
- # ordering as coming from the first element of each batch, and sort
- # correspondingly
- first_element_of_batch = [t[0].item() for t in merged]
- # get and inverse mapping from sampled indices and the position where
- # they occur (as returned by the sampler)
- inv_sampled_ids_map = {v: k for k, v in enumerate(sampled_ids.tolist())}
- # from the first element in each batch, get a relative ordering
- first_index_of_batch = torch.as_tensor(
- [inv_sampled_ids_map[s] for s in first_element_of_batch]
- )
-
- # permute the batches so that they approximately follow the order
- # from the sampler
- permutation_order = first_index_of_batch.sort(0)[1].tolist()
- # finally, permute the batches
- batches = [merged[i].tolist() for i in permutation_order]
-
- if self.drop_uneven:
- kept = []
- for batch in batches:
- if len(batch) == self.batch_size:
- kept.append(batch)
- batches = kept
- return batches
-
- def __iter__(self):
- if self._can_reuse_batches:
- batches = self._batches
- self._can_reuse_batches = False
- else:
- batches = self._prepare_batches()
- self._batches = batches
- return iter(batches)
-
- def __len__(self):
- if not hasattr(self, "_batches"):
- self._batches = self._prepare_batches()
- self._can_reuse_batches = True
- return len(self._batches)
diff --git a/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py b/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py
deleted file mode 100644
index bb33ea1ccc50e6a58ab3ef3d22c3d616900c26c2..0000000000000000000000000000000000000000
--- a/spaces/Podtekatel/Arcane_Style_Transfer/inference/face_detector.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import os
-from abc import ABC, abstractmethod
-from typing import List
-
-import cv2
-import numpy as np
-from retinaface import RetinaFace
-from retinaface.model import retinaface_model
-
-from .box_utils import convert_to_square
-
-
-class FaceDetector(ABC):
- def __init__(self, target_size):
- self.target_size = target_size
- @abstractmethod
- def detect_crops(self, img, *args, **kwargs) -> List[np.ndarray]:
- """
- Img is a numpy ndarray in range [0..255], uint8 dtype, RGB type
- Returns ndarray with [x1, y1, x2, y2] in row
- """
- pass
-
- @abstractmethod
- def postprocess_crops(self, crops, *args, **kwargs) -> List[np.ndarray]:
- return crops
-
- def sort_faces(self, crops):
- sorted_faces = sorted(crops, key=lambda x: -(x[2] - x[0]) * (x[3] - x[1]))
- sorted_faces = np.stack(sorted_faces, axis=0)
- return sorted_faces
-
- def fix_range_crops(self, img, crops):
- H, W, _ = img.shape
- final_crops = []
- for crop in crops:
- x1, y1, x2, y2 = crop
- x1 = max(min(round(x1), W), 0)
- y1 = max(min(round(y1), H), 0)
- x2 = max(min(round(x2), W), 0)
- y2 = max(min(round(y2), H), 0)
- new_crop = [x1, y1, x2, y2]
- final_crops.append(new_crop)
-        final_crops = np.array(final_crops, dtype=np.int64)
- return final_crops
-
- def crop_faces(self, img, crops) -> List[np.ndarray]:
- cropped_faces = []
- for crop in crops:
- x1, y1, x2, y2 = crop
- face_crop = img[y1:y2, x1:x2, :]
- cropped_faces.append(face_crop)
- return cropped_faces
-
- def unify_and_merge(self, cropped_images):
- return cropped_images
-
- def __call__(self, img):
- return self.detect_faces(img)
-
- def detect_faces(self, img):
- crops = self.detect_crops(img)
- if crops is None or len(crops) == 0:
- return [], []
- crops = self.sort_faces(crops)
- updated_crops = self.postprocess_crops(crops)
- updated_crops = self.fix_range_crops(img, updated_crops)
- cropped_faces = self.crop_faces(img, updated_crops)
- unified_faces = self.unify_and_merge(cropped_faces)
- return unified_faces, updated_crops
-
-
-class StatRetinaFaceDetector(FaceDetector):
- def __init__(self, target_size=None):
- super().__init__(target_size)
- self.model = retinaface_model.build_model()
- #self.relative_offsets = [0.3258, 0.5225, 0.3258, 0.1290]
- self.relative_offsets = [0.3619, 0.5830, 0.3619, 0.1909]
-
- def postprocess_crops(self, crops, *args, **kwargs) -> np.ndarray:
- final_crops = []
- x1_offset, y1_offset, x2_offset, y2_offset = self.relative_offsets
- for crop in crops:
- x1, y1, x2, y2 = crop
- w, h = x2 - x1, y2 - y1
- x1 -= w * x1_offset
- y1 -= h * y1_offset
- x2 += w * x2_offset
- y2 += h * y2_offset
- crop = np.array([x1, y1, x2, y2], dtype=crop.dtype)
- crop = convert_to_square(crop)
- final_crops.append(crop)
- final_crops = np.stack(final_crops, axis=0)
- return final_crops
-
- def detect_crops(self, img, *args, **kwargs):
- faces = RetinaFace.detect_faces(img, model=self.model)
- crops = []
- if isinstance(faces, tuple):
- faces = {}
- for name, face in faces.items():
- x1, y1, x2, y2 = face['facial_area']
- crop = np.array([x1, y1, x2, y2])
- crops.append(crop)
- if len(crops) > 0:
- crops = np.stack(crops, axis=0)
- return crops
-
- def unify_and_merge(self, cropped_images):
- if self.target_size is None:
- return cropped_images
- else:
- resized_images = []
- for cropped_image in cropped_images:
- resized_image = cv2.resize(cropped_image, (self.target_size, self.target_size),
- interpolation=cv2.INTER_LINEAR)
- resized_images.append(resized_image)
-
- resized_images = np.stack(resized_images, axis=0)
- return resized_images
-
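-
-# A minimal usage sketch (not part of the original file), assuming the
-# retinaface package is installed and "photo.jpg" is a hypothetical RGB image
-# on disk; cv2 and numpy are already imported above.
-if __name__ == "__main__":
-    img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
-    detector = StatRetinaFaceDetector(target_size=256)
-    faces, boxes = detector(img)
-    # with target_size set, `faces` is an (N, 256, 256, 3) array of square
-    # crops and `boxes` holds the matching [x1, y1, x2, y2] coordinates
-    print(len(faces), boxes)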
diff --git a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
deleted file mode 100644
index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000
--- a/spaces/Poupeto/RVC_Ryu7ztv/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import parselmouth
-import numpy as np
-
-
-class PMF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
-        Interpolate F0 over unvoiced frames and build the voiced/unvoiced mask.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def compute_f0(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0
-
- def compute_f0_uv(self, wav, p_len=None):
- x = wav
- if p_len is None:
- p_len = x.shape[0] // self.hop_length
- else:
- assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
- time_step = self.hop_length / self.sampling_rate * 1000
- f0 = (
- parselmouth.Sound(x, self.sampling_rate)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=self.f0_min,
- pitch_ceiling=self.f0_max,
- )
- .selected_array["frequency"]
- )
-
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
- f0, uv = self.interpolate_f0(f0)
- return f0, uv
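-
-
-# A minimal usage sketch (not part of the original file): run the predictor on
-# a one-second synthetic tone. The 16 kHz rate and 160-sample hop are
-# illustrative values, not defaults taken from the surrounding project.
-if __name__ == "__main__":
-    sr = 16000
-    t = np.arange(sr) / sr
-    wav = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # 220 Hz test tone
-    predictor = PMF0Predictor(hop_length=160, sampling_rate=sr)
-    f0, uv = predictor.compute_f0_uv(wav.astype(np.float32))
-    print(f0.shape, float(f0[uv > 0].mean()))  # mean F0 should be near 220 Hz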
diff --git a/spaces/Qiukai/gpt/core_functional.py b/spaces/Qiukai/gpt/core_functional.py
deleted file mode 100644
index 536ccb609c38cbbebfda4ba17bd51a78857d711e..0000000000000000000000000000000000000000
--- a/spaces/Qiukai/gpt/core_functional.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# 'primary' corresponds to primary_hue in theme.py
-# 'secondary' corresponds to neutral_hue in theme.py
-# 'stop' corresponds to color_er in theme.py
-# the default button color is secondary
-from toolbox import clear_line_break
-
-
-def get_core_functions():
- return {
- "英语学术润色": {
-            # prefix (prepended to the user's text)
- "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
- r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
- r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
-            # suffix (appended after the user's text)
- "Suffix": r"",
-            "Color": r"secondary",  # button color
- },
- "中文学术润色": {
- "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
- r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
- "Suffix": r"",
- },
- "查找语法错误": {
- "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
- r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
- r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
- r"put the original text the first column, " +
- r"put the corrected text in the second column and highlight the key words you fixed.""\n"
- r"Example:""\n"
- r"Paragraph: How is you? Do you knows what is it?""\n"
- r"| Original sentence | Corrected sentence |""\n"
- r"| :--- | :--- |""\n"
- r"| How **is** you? | How **are** you? |""\n"
- r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
- r"Below is a paragraph from an academic paper. "
- r"You need to report all grammar and spelling mistakes as the example before."
- + "\n\n",
- "Suffix": r"",
-            "PreProcess": clear_line_break,  # preprocessing: strip line breaks
- },
- "中译英": {
- "Prefix": r"Please translate following sentence to English:" + "\n\n",
- "Suffix": r"",
- },
- "学术中英互译": {
- "Prefix": r"I want you to act as a scientific English-Chinese translator, " +
- r"I will provide you with some paragraphs in one language " +
- r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
- r"Do not repeat the original provided paragraphs after translation. " +
- r"You should use artificial intelligence tools, " +
- r"such as natural language processing, and rhetorical knowledge " +
- r"and experience about effective writing techniques to reply. " +
- r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
- "Suffix": "",
- "Color": "secondary",
- },
- "英译中": {
- "Prefix": r"翻译成地道的中文:" + "\n\n",
- "Suffix": r"",
- },
- "找图片": {
- "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," +
- r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n",
- "Suffix": r"",
- },
- "解释代码": {
- "Prefix": r"请解释以下代码:" + "\n```\n",
- "Suffix": "\n```\n",
- },
- }
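-
-
-# A minimal usage sketch (not part of the original file): how a caller might
-# wrap user input with the Prefix/Suffix of one entry. The assembly pattern is
-# an assumption; only the dictionary itself comes from get_core_functions(),
-# and the module's own toolbox import is assumed to resolve.
-if __name__ == "__main__":
-    entry = get_core_functions()["解释代码"]
-    user_text = "print('hello world')"
-    prompt = entry["Prefix"] + user_text + entry["Suffix"]
-    print(prompt)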
diff --git a/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py b/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py
deleted file mode 100644
index 42496e088e51dc5162d0714470c2226f696e260c..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/tools/calc_rvc_model_similarity.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# This code references https://huggingface.co/JosephusCheung/ASimilarityCalculatior/blob/main/qwerty.py
-# Fill in the path of the model to be queried and the root directory of the reference models, and this script will return the similarity between the model to be queried and all reference models.
-import os
-import logging
-
-logger = logging.getLogger(__name__)
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def cal_cross_attn(to_q, to_k, to_v, rand_input):
- hidden_dim, embed_dim = to_q.shape
- attn_to_q = nn.Linear(hidden_dim, embed_dim, bias=False)
- attn_to_k = nn.Linear(hidden_dim, embed_dim, bias=False)
- attn_to_v = nn.Linear(hidden_dim, embed_dim, bias=False)
- attn_to_q.load_state_dict({"weight": to_q})
- attn_to_k.load_state_dict({"weight": to_k})
- attn_to_v.load_state_dict({"weight": to_v})
-
- return torch.einsum(
- "ik, jk -> ik",
- F.softmax(
- torch.einsum("ij, kj -> ik", attn_to_q(rand_input), attn_to_k(rand_input)),
- dim=-1,
- ),
- attn_to_v(rand_input),
- )
-
-
-def model_hash(filename):
- try:
- with open(filename, "rb") as file:
- import hashlib
-
- m = hashlib.sha256()
-
- file.seek(0x100000)
- m.update(file.read(0x10000))
- return m.hexdigest()[0:8]
- except FileNotFoundError:
- return "NOFILE"
-
-
-def eval(model, n, input):
- qk = f"enc_p.encoder.attn_layers.{n}.conv_q.weight"
- uk = f"enc_p.encoder.attn_layers.{n}.conv_k.weight"
- vk = f"enc_p.encoder.attn_layers.{n}.conv_v.weight"
- atoq, atok, atov = model[qk][:, :, 0], model[uk][:, :, 0], model[vk][:, :, 0]
-
- attn = cal_cross_attn(atoq, atok, atov, input)
- return attn
-
-
-def main(path, root):
- torch.manual_seed(114514)
- model_a = torch.load(path, map_location="cpu")["weight"]
-
- logger.info("Query:\t\t%s\t%s" % (path, model_hash(path)))
-
- map_attn_a = {}
- map_rand_input = {}
- for n in range(6):
- hidden_dim, embed_dim, _ = model_a[
- f"enc_p.encoder.attn_layers.{n}.conv_v.weight"
- ].shape
- rand_input = torch.randn([embed_dim, hidden_dim])
-
- map_attn_a[n] = eval(model_a, n, rand_input)
- map_rand_input[n] = rand_input
-
- del model_a
-
- for name in sorted(list(os.listdir(root))):
- path = "%s/%s" % (root, name)
- model_b = torch.load(path, map_location="cpu")["weight"]
-
- sims = []
- for n in range(6):
- attn_a = map_attn_a[n]
- attn_b = eval(model_b, n, map_rand_input[n])
-
- sim = torch.mean(torch.cosine_similarity(attn_a, attn_b))
- sims.append(sim)
-
- logger.info(
- "Reference:\t%s\t%s\t%s"
- % (path, model_hash(path), f"{torch.mean(torch.stack(sims)) * 1e2:.2f}%")
- )
-
-
-if __name__ == "__main__":
- query_path = r"assets\weights\mi v3.pth"
- reference_root = r"assets\weights"
- main(query_path, reference_root)
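-
-# Note (not part of the original file): all output goes through `logger`, so a
-# handler must be configured for anything to be printed when running the
-# script directly, e.g. by placing this line above the main() call:
-#
-#     logging.basicConfig(level=logging.INFO, format="%(message)s")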
diff --git a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py b/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py
deleted file mode 100644
index 95bd762d4a46a29e090687f775322809b5a7b6c5..0000000000000000000000000000000000000000
--- a/spaces/RamAnanth1/videocrafter/extralibs/midas/midas/dpt_depth.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .base_model import BaseModel
-from .blocks import (
- FeatureFusionBlock,
- FeatureFusionBlock_custom,
- Interpolate,
- _make_encoder,
- forward_vit,
-)
-
-
-def _make_fusion_block(features, use_bn):
- return FeatureFusionBlock_custom(
- features,
- nn.ReLU(False),
- deconv=False,
- bn=use_bn,
- expand=False,
- align_corners=True,
- )
-
-
-class DPT(BaseModel):
- def __init__(
- self,
- head,
- features=256,
- backbone="vitb_rn50_384",
- readout="project",
- channels_last=False,
- use_bn=False,
- ):
-
- super(DPT, self).__init__()
-
- self.channels_last = channels_last
-
- hooks = {
- "vitb_rn50_384": [0, 1, 8, 11],
- "vitb16_384": [2, 5, 8, 11],
- "vitl16_384": [5, 11, 17, 23],
- }
-
- # Instantiate backbone and reassemble blocks
- self.pretrained, self.scratch = _make_encoder(
- backbone,
- features,
-            False,  # Set to true if you want to train from scratch, uses ImageNet weights
- groups=1,
- expand=False,
- exportable=False,
- hooks=hooks[backbone],
- use_readout=readout,
- )
-
- self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
- self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
-
- self.scratch.output_conv = head
-
-
- def forward(self, x):
-        if self.channels_last:
-            x = x.contiguous(memory_format=torch.channels_last)
-
- layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x)
-
- layer_1_rn = self.scratch.layer1_rn(layer_1)
- layer_2_rn = self.scratch.layer2_rn(layer_2)
- layer_3_rn = self.scratch.layer3_rn(layer_3)
- layer_4_rn = self.scratch.layer4_rn(layer_4)
-
- path_4 = self.scratch.refinenet4(layer_4_rn)
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
-
- out = self.scratch.output_conv(path_1)
-
- return out
-
-
-class DPTDepthModel(DPT):
- def __init__(self, path=None, non_negative=True, **kwargs):
- features = kwargs["features"] if "features" in kwargs else 256
-
- head = nn.Sequential(
- nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1),
- Interpolate(scale_factor=2, mode="bilinear", align_corners=True),
- nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1),
- nn.ReLU(True),
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
- nn.ReLU(True) if non_negative else nn.Identity(),
- nn.Identity(),
- )
-
- super().__init__(head, **kwargs)
-
- if path is not None:
- self.load(path)
- print("Midas depth estimation model loaded.")
-
- def forward(self, x):
- return super().forward(x).squeeze(dim=1)
-
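-
-# A minimal inference sketch (not part of the original file). It assumes the
-# relative imports above resolve and uses the default vitb_rn50_384 backbone;
-# weights stay random here since no checkpoint path is given.
-if __name__ == "__main__":
-    model = DPTDepthModel(path=None, non_negative=True)
-    model.eval()
-    with torch.no_grad():
-        dummy = torch.randn(1, 3, 384, 384)
-        depth = model(dummy)  # (1, 384, 384) relative inverse-depth map
-    print(depth.shape, float(depth.min()), float(depth.max()))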
diff --git a/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md b/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md
deleted file mode 100644
index 4de17d6784413aa343484e91de1048351258611d..0000000000000000000000000000000000000000
--- a/spaces/Riksarkivet/htr_demo/helper/text/overview/duplicate_api/api_code2.md
+++ /dev/null
@@ -1,26 +0,0 @@
-Loaded as API: http://127.0.0.1:7860/ ✔
-
-
-
-
-    Swedish National Archives
-    2023-08-21, 13:28:06
-
-
-
-
-
-
-
-    År 1865.
-
-
-
-
-
-......................................
-
-
-
-
-# Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
diff --git a/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py b/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py
deleted file mode 100644
index 839e2787f944efd0b3b4ea778d5240d01c100e11..0000000000000000000000000000000000000000
--- a/spaces/Riksarkivet/htr_demo/tabs/htr_tool.py
+++ /dev/null
@@ -1,324 +0,0 @@
-import os
-
-import gradio as gr
-
-from helper.examples.examples import DemoImages
-from helper.utils import TrafficDataHandler
-from src.htr_pipeline.gradio_backend import (
- FastTrack,
- SingletonModelLoader,
- compare_diff_runs_highlight,
- compute_cer_a_and_b_with_gt,
- update_selected_tab_image_viewer,
- update_selected_tab_model_compare,
- update_selected_tab_output_and_setting,
- upload_file,
-)
-
-model_loader = SingletonModelLoader()
-fast_track = FastTrack(model_loader)
-images_for_demo = DemoImages()
-
-terminate = False
-
-
-with gr.Blocks() as htr_tool_tab:
- with gr.Row(equal_height=True):
- with gr.Column(scale=2):
- with gr.Row():
- fast_track_input_region_image = gr.Image(
- label="Image to run HTR on", type="numpy", tool="editor", elem_id="image_upload", height=395
- )
-
- with gr.Row():
- with gr.Tab("HTRFLOW") as tab_output_and_setting_selector:
- with gr.Row():
- stop_htr_button = gr.Button(
- value="Stop run",
- variant="stop",
- )
-
- htr_pipeline_button = gr.Button(
- "Run ",
- variant="primary",
- visible=True,
- elem_id="run_pipeline_button",
- )
- htr_pipeline_button_var = gr.State(value="htr_pipeline_button")
-
- htr_pipeline_button_api = gr.Button("Run pipeline", variant="primary", visible=False, scale=1)
-
- fast_file_downlod = gr.File(
- label="Download output file", visible=True, scale=1, height=100, elem_id="download_file"
- )
-
- with gr.Tab("Visualize") as tab_image_viewer_selector:
- with gr.Row():
- gr.Markdown("")
- run_image_visualizer_button = gr.Button(
- value="Visualize results", variant="primary", interactive=True
- )
-
- selection_text_from_image_viewer = gr.Textbox(
- interactive=False, label="Text Selector", info="Select a line on Image Viewer to return text"
- )
-
- with gr.Tab("Compare") as tab_model_compare_selector:
- with gr.Row():
- diff_runs_button = gr.Button("Compare runs", variant="primary", visible=True)
- calc_cer_button_fast = gr.Button("Calculate CER", variant="primary", visible=True)
- with gr.Row():
- cer_output_fast = gr.Textbox(
- label="Character Error Rate:",
- info="The percentage of characters that have been transcribed incorrectly",
- )
-
- with gr.Column(scale=4):
- with gr.Box():
- with gr.Row(visible=True) as output_and_setting_tab:
- with gr.Column(scale=2):
- fast_name_files_placeholder = gr.Markdown(visible=False)
- gr.Examples(
- examples=images_for_demo.examples_list,
- inputs=[fast_name_files_placeholder, fast_track_input_region_image],
- label="Example images",
- examples_per_page=5,
- )
-
- gr.Markdown(" ")
-
- with gr.Column(scale=3):
- with gr.Group():
- gr.Markdown(" ⚙️ Settings ")
- with gr.Row():
- radio_file_input = gr.CheckboxGroup(
- choices=["Txt", "Page XML"],
- value=["Txt", "Page XML"],
- label="Output file extension",
- info="JSON and ALTO-XML will be added",
- scale=1,
- )
- with gr.Row():
- gr.Checkbox(
- value=True,
- label="Binarize image",
- info="Binarize image to reduce background noise",
- )
- gr.Checkbox(
- value=True,
- label="Output prediction threshold",
- info="Output XML with prediction score",
- )
-
- with gr.Accordion("Advanced settings", open=False):
- with gr.Group():
- with gr.Row():
- htr_tool_region_segment_model_dropdown = gr.Dropdown(
- choices=["Riksarkivet/rtmdet_region"],
- value="Riksarkivet/rtmdet_region",
- label="Region segmentation models",
- info="More models will be added",
- )
-
- gr.Slider(
- minimum=0.4,
- maximum=1,
- value=0.5,
- step=0.05,
- label="P-threshold",
- info="""Filter confidence score for a prediction score to be considered""",
- )
-
- with gr.Row():
- htr_tool_line_segment_model_dropdown = gr.Dropdown(
- choices=["Riksarkivet/rtmdet_lines"],
- value="Riksarkivet/rtmdet_lines",
- label="Line segmentation models",
- info="More models will be added",
- )
-
- gr.Slider(
- minimum=0.4,
- maximum=1,
- value=0.5,
- step=0.05,
- label="P-threshold",
- info="""Filter confidence score for a prediction score to be considered""",
- )
-
- with gr.Row():
- htr_tool_transcriber_model_dropdown = gr.Dropdown(
- choices=[
- "Riksarkivet/satrn_htr",
- "microsoft/trocr-base-handwritten",
- "pstroe/bullinger-general-model",
- ],
- value="Riksarkivet/satrn_htr",
- label="Text recognition models",
- info="More models will be added",
- )
-
- gr.Slider(
- value=0.6,
- minimum=0.5,
- maximum=1,
- label="HTR threshold",
- info="Prediction score threshold for transcribed lines",
- scale=1,
- )
- with gr.Row():
- gr.Markdown(" More settings will be added")
-
- with gr.Row(visible=False) as image_viewer_tab:
- text_polygon_dict = gr.Variable()
-
- fast_track_output_image = gr.Image(
- label="Image Viewer", type="numpy", height=600, interactive=False
- )
-
- with gr.Column(visible=False) as model_compare_selector:
- with gr.Row():
- gr.Markdown("Compare different runs (Page XML output) with Ground Truth (GT)")
- with gr.Row():
- with gr.Group():
- upload_button_run_a = gr.UploadButton("A", file_types=[".xml"], file_count="single")
- file_input_xml_run_a = gr.File(
- label=None,
- file_count="single",
- height=100,
- elem_id="download_file",
- interactive=False,
- visible=False,
- )
-
- with gr.Group():
- upload_button_run_b = gr.UploadButton("B", file_types=[".xml"], file_count="single")
- file_input_xml_run_b = gr.File(
- label=None,
- file_count="single",
- height=100,
- elem_id="download_file",
- interactive=False,
- visible=False,
- )
-
- with gr.Group():
- upload_button_run_gt = gr.UploadButton("GT", file_types=[".xml"], file_count="single")
- file_input_xml_run_gt = gr.File(
- label=None,
- file_count="single",
- height=100,
- elem_id="download_file",
- interactive=False,
- visible=False,
- )
- with gr.Tab("Comparing run A with B"):
- text_diff_runs = gr.HighlightedText(
- label="A with B",
- combine_adjacent=True,
- show_legend=True,
- color_map={"+": "red", "-": "green"},
- )
- with gr.Tab("Compare run A with Ground Truth"):
- text_diff_gt = gr.HighlightedText(
- label="A with GT",
- combine_adjacent=True,
- show_legend=True,
- color_map={"+": "red", "-": "green"},
- )
-
- xml_rendered_placeholder_for_api = gr.Textbox(placeholder="XML", visible=False)
-
- htr_event_click_event = htr_pipeline_button.click(
- fast_track.segment_to_xml,
- inputs=[fast_track_input_region_image, radio_file_input, htr_tool_transcriber_model_dropdown],
- outputs=[fast_file_downlod, fast_file_downlod],
- api_name=False,
- )
-
- htr_pipeline_button_api.click(
- fast_track.segment_to_xml_api,
- inputs=[fast_track_input_region_image],
- outputs=[xml_rendered_placeholder_for_api],
- queue=False,
- api_name="run_htr_pipeline",
- )
-
- tab_output_and_setting_selector.select(
- fn=update_selected_tab_output_and_setting,
- outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector],
- api_name=False,
- )
-
- tab_image_viewer_selector.select(
- fn=update_selected_tab_image_viewer,
- outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector],
- api_name=False,
- )
-
- tab_model_compare_selector.select(
- fn=update_selected_tab_model_compare,
- outputs=[output_and_setting_tab, image_viewer_tab, model_compare_selector],
- api_name=False,
- )
-
- def stop_function():
- from src.htr_pipeline.utils import pipeline_inferencer
-
- pipeline_inferencer.terminate = True
- gr.Info("The HTR execution was halted")
-
- stop_htr_button.click(
- fn=stop_function,
- inputs=None,
- outputs=None,
- api_name=False,
- # cancels=[htr_event_click_event],
- )
-
- run_image_visualizer_button.click(
- fn=fast_track.visualize_image_viewer,
- inputs=fast_track_input_region_image,
- outputs=[fast_track_output_image, text_polygon_dict],
- api_name=False,
- )
-
- fast_track_output_image.select(
- fast_track.get_text_from_coords,
- inputs=text_polygon_dict,
- outputs=selection_text_from_image_viewer,
- api_name=False,
- )
-
- upload_button_run_a.upload(
- upload_file, inputs=upload_button_run_a, outputs=[file_input_xml_run_a, file_input_xml_run_a], api_name=False
- )
-
- upload_button_run_b.upload(
- upload_file, inputs=upload_button_run_b, outputs=[file_input_xml_run_b, file_input_xml_run_b], api_name=False
- )
-
- upload_button_run_gt.upload(
- upload_file, inputs=upload_button_run_gt, outputs=[file_input_xml_run_gt, file_input_xml_run_gt], api_name=False
- )
-
- diff_runs_button.click(
- fn=compare_diff_runs_highlight,
- inputs=[file_input_xml_run_a, file_input_xml_run_b, file_input_xml_run_gt],
- outputs=[text_diff_runs, text_diff_gt],
- api_name=False,
- )
-
- calc_cer_button_fast.click(
- fn=compute_cer_a_and_b_with_gt,
- inputs=[file_input_xml_run_a, file_input_xml_run_b, file_input_xml_run_gt],
- outputs=cer_output_fast,
- api_name=False,
- )
-
- SECRET_KEY = os.environ.get("HUB_TOKEN", False)
- if SECRET_KEY:
- htr_pipeline_button.click(
- fn=TrafficDataHandler.store_metric_data,
- inputs=htr_pipeline_button_var,
- )
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py
deleted file mode 100644
index 07c010d4053174dd41107aa654ea67e82b46a25c..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/plugin.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import inspect
-import platform
-
-from .registry import PLUGIN_LAYERS
-
-if platform.system() == 'Windows':
- import regex as re
-else:
- import re
-
-
-def infer_abbr(class_type):
- """Infer abbreviation from the class name.
-
- This method will infer the abbreviation to map class types to
- abbreviations.
-
- Rule 1: If the class has the property "abbr", return the property.
- Rule 2: Otherwise, the abbreviation falls back to snake case of class
- name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``.
-
- Args:
- class_type (type): The norm layer type.
-
- Returns:
- str: The inferred abbreviation.
- """
-
- def camel2snack(word):
-        """Convert a camel-case word into snake case.
-
- Modified from `inflection lib
- `_.
-
- Example::
-
- >>> camel2snack("FancyBlock")
- 'fancy_block'
- """
-
- word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word)
- word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word)
- word = word.replace('-', '_')
- return word.lower()
-
- if not inspect.isclass(class_type):
- raise TypeError(
- f'class_type must be a type, but got {type(class_type)}')
- if hasattr(class_type, '_abbr_'):
- return class_type._abbr_
- else:
- return camel2snack(class_type.__name__)
-
-
-def build_plugin_layer(cfg, postfix='', **kwargs):
- """Build plugin layer.
-
- Args:
- cfg (None or dict): cfg should contain:
- type (str): identify plugin layer type.
- layer args: args needed to instantiate a plugin layer.
- postfix (int, str): appended into norm abbreviation to
- create named layer. Default: ''.
-
- Returns:
- tuple[str, nn.Module]:
- name (str): abbreviation + postfix
- layer (nn.Module): created plugin layer
- """
- if not isinstance(cfg, dict):
- raise TypeError('cfg must be a dict')
- if 'type' not in cfg:
- raise KeyError('the cfg dict must contain the key "type"')
- cfg_ = cfg.copy()
-
- layer_type = cfg_.pop('type')
- if layer_type not in PLUGIN_LAYERS:
- raise KeyError(f'Unrecognized plugin type {layer_type}')
-
- plugin_layer = PLUGIN_LAYERS.get(layer_type)
- abbr = infer_abbr(plugin_layer)
-
- assert isinstance(postfix, (int, str))
- name = abbr + str(postfix)
-
- layer = plugin_layer(**kwargs, **cfg_)
-
- return name, layer
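-
-
-# A minimal usage sketch (not part of the original file). The plugin class and
-# its arguments are hypothetical; only the cfg/postfix handling mirrors
-# build_plugin_layer() above.
-if __name__ == "__main__":
-    import torch.nn as nn
-
-    @PLUGIN_LAYERS.register_module()
-    class DropPlugin(nn.Module):
-        _abbr_ = 'drop'
-
-        def __init__(self, p=0.1):
-            super().__init__()
-            self.drop = nn.Dropout(p)
-
-        def forward(self, x):
-            return self.drop(x)
-
-    name, layer = build_plugin_layer(dict(type='DropPlugin', p=0.2), postfix=1)
-    print(name, layer)  # -> drop1 DropPlugin(...)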
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py
deleted file mode 100644
index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/gfocal_loss.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import mmcv
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weighted_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def quality_focal_loss(pred, target, beta=2.0):
- r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
- `_.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of classification
- and quality (IoU) estimation with shape (N, C), C is the number of
- classes.
- target (tuple([torch.Tensor])): Target category label with shape (N,)
- and target quality label with shape (N,).
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- assert len(target) == 2, """target for QFL must be a tuple of two elements,
- including category label and quality label, respectively"""
- # label denotes the category id, score denotes the quality score
- label, score = target
-
- # negatives are supervised by 0 quality score
- pred_sigmoid = pred.sigmoid()
- scale_factor = pred_sigmoid
- zerolabel = scale_factor.new_zeros(pred.shape)
- loss = F.binary_cross_entropy_with_logits(
- pred, zerolabel, reduction='none') * scale_factor.pow(beta)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = pred.size(1)
- pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1)
- pos_label = label[pos].long()
- # positives are supervised by bbox quality (IoU) score
- scale_factor = score[pos] - pred_sigmoid[pos, pos_label]
- loss[pos, pos_label] = F.binary_cross_entropy_with_logits(
- pred[pos, pos_label], score[pos],
- reduction='none') * scale_factor.abs().pow(beta)
-
- loss = loss.sum(dim=1, keepdim=False)
- return loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def distribution_focal_loss(pred, label):
- r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
- `_.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding boxes
- (before softmax) with shape (N, n+1), n is the max value of the
- integral set `{0, ..., n}` in paper.
- label (torch.Tensor): Target distance label for bounding boxes with
- shape (N,).
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- dis_left = label.long()
- dis_right = dis_left + 1
- weight_left = dis_right.float() - label
- weight_right = label - dis_left.float()
- loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
- + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
- return loss
-
-
-@LOSSES.register_module()
-class QualityFocalLoss(nn.Module):
- r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
- Detection `_.
-
- Args:
- use_sigmoid (bool): Whether sigmoid operation is conducted in QFL.
- Defaults to True.
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
- reduction (str): Options are "none", "mean" and "sum".
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self,
- use_sigmoid=True,
- beta=2.0,
- reduction='mean',
- loss_weight=1.0):
- super(QualityFocalLoss, self).__init__()
- assert use_sigmoid is True, 'Only sigmoid in QFL supported now.'
- self.use_sigmoid = use_sigmoid
- self.beta = beta
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of
- classification and quality (IoU) estimation with shape (N, C),
- C is the number of classes.
- target (tuple([torch.Tensor])): Target category label with shape
- (N,) and target quality label with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- loss_cls = self.loss_weight * quality_focal_loss(
- pred,
- target,
- weight,
- beta=self.beta,
- reduction=reduction,
- avg_factor=avg_factor)
- else:
- raise NotImplementedError
- return loss_cls
-
-
-@LOSSES.register_module()
-class DistributionFocalLoss(nn.Module):
- r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
- Detection `_.
-
- Args:
- reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self, reduction='mean', loss_weight=1.0):
- super(DistributionFocalLoss, self).__init__()
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding
- boxes (before softmax) with shape (N, n+1), n is the max value
- of the integral set `{0, ..., n}` in paper.
- target (torch.Tensor): Target distance label for bounding boxes
- with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_cls = self.loss_weight * distribution_focal_loss(
- pred, target, weight, reduction=reduction, avg_factor=avg_factor)
- return loss_cls
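-
-
-# A minimal usage sketch (not part of the original file): random logits and
-# targets with 4 samples and 3 foreground classes (label 3 = background),
-# illustrating the expected input shapes of both losses. Values are arbitrary.
-if __name__ == "__main__":
-    import torch
-
-    pred = torch.randn(4, 3)                     # joint cls-quality logits
-    labels = torch.tensor([0, 2, 3, 1])          # category targets
-    scores = torch.tensor([0.9, 0.4, 0.0, 0.7])  # IoU quality targets
-    print(QualityFocalLoss(beta=2.0)(pred, (labels, scores)))
-
-    dfl_pred = torch.randn(4, 8)                 # distribution over {0..7}
-    dfl_target = torch.tensor([1.3, 4.7, 0.2, 6.9])
-    print(DistributionFocalLoss()(dfl_pred, dfl_target))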
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py
deleted file mode 100644
index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/points_in_boxes.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import torch
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext', [
- 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward',
- 'points_in_boxes_all_forward'
-])
-
-
-def points_in_boxes_part(points, boxes):
- """Find the box in which each point is (CUDA).
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in
- LiDAR/DEPTH coordinate, (x, y, z) is the bottom center
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M), default background = -1
- """
- assert points.shape[0] == boxes.shape[0], \
- 'Points and boxes should have the same batch size, ' \
- f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
-
- box_idxs_of_pts = points.new_zeros((batch_size, num_points),
- dtype=torch.int).fill_(-1)
-
- # If manually put the tensor 'points' or 'boxes' on a device
- # which is not the current device, some temporary variables
- # will be created on the current device in the cuda op,
- # and the output will be incorrect.
- # Therefore, we force the current device to be the same
- # as the device of the tensors if it was not.
- # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305
- # for the incorrect output before the fix.
- points_device = points.get_device()
- assert points_device == boxes.get_device(), \
- 'Points and boxes should be put on the same device'
- if torch.cuda.current_device() != points_device:
- torch.cuda.set_device(points_device)
-
- ext_module.points_in_boxes_part_forward(boxes.contiguous(),
- points.contiguous(),
- box_idxs_of_pts)
-
- return box_idxs_of_pts
-
-
-def points_in_boxes_cpu(points, boxes):
- """Find all boxes in which each point is (CPU). The CPU version of
- :meth:`points_in_boxes_all`.
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in
- LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
- (x, y, z) is the bottom center.
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
- """
- assert points.shape[0] == boxes.shape[0], \
- 'Points and boxes should have the same batch size, ' \
- f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
- num_boxes = boxes.shape[1]
-
- point_indices = points.new_zeros((batch_size, num_boxes, num_points),
- dtype=torch.int)
- for b in range(batch_size):
- ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(),
- points[b].float().contiguous(),
- point_indices[b])
- point_indices = point_indices.transpose(1, 2)
-
- return point_indices
-
-
-def points_in_boxes_all(points, boxes):
- """Find all boxes in which each point is (CUDA).
-
- Args:
- points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate
- boxes (torch.Tensor): [B, T, 7],
- num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz],
- (x, y, z) is the bottom center.
-
- Returns:
- box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0.
- """
- assert boxes.shape[0] == points.shape[0], \
- 'Points and boxes should have the same batch size, ' \
-        f'but got {points.shape[0]} and {boxes.shape[0]}'
- assert boxes.shape[2] == 7, \
- 'boxes dimension should be 7, ' \
- f'but got unexpected shape {boxes.shape[2]}'
- assert points.shape[2] == 3, \
- 'points dimension should be 3, ' \
- f'but got unexpected shape {points.shape[2]}'
- batch_size, num_points, _ = points.shape
- num_boxes = boxes.shape[1]
-
- box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes),
- dtype=torch.int).fill_(0)
-
- # Same reason as line 25-32
- points_device = points.get_device()
- assert points_device == boxes.get_device(), \
- 'Points and boxes should be put on the same device'
- if torch.cuda.current_device() != points_device:
- torch.cuda.set_device(points_device)
-
- ext_module.points_in_boxes_all_forward(boxes.contiguous(),
- points.contiguous(),
- box_idxs_of_pts)
-
- return box_idxs_of_pts
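-
-
-# A minimal usage sketch (not part of the original file) for the CPU path,
-# which needs only the compiled mmcv extension. One axis-aligned box with its
-# bottom center at the origin, and two query points.
-if __name__ == "__main__":
-    points = torch.tensor([[[0.0, 0.0, 0.5],    # inside the box
-                            [5.0, 5.0, 5.0]]])  # far outside
-    boxes = torch.tensor([[[0.0, 0.0, 0.0,      # x, y, z (bottom center)
-                            2.0, 2.0, 2.0,      # x_size, y_size, z_size
-                            0.0]]])             # rz (yaw)
-    print(points_in_boxes_cpu(points, boxes))   # -> [[[1], [0]]]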
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py
deleted file mode 100644
index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/neptune.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class NeptuneLoggerHook(LoggerHook):
- """Class to log metrics to NeptuneAI.
-
- It requires `neptune-client` to be installed.
-
- Args:
- init_kwargs (dict): a dict contains the initialization keys as below:
- - project (str): Name of a project in a form of
- namespace/project_name. If None, the value of
- NEPTUNE_PROJECT environment variable will be taken.
- - api_token (str): User’s API token.
- If None, the value of NEPTUNE_API_TOKEN environment
- variable will be taken. Note: It is strongly recommended
- to use NEPTUNE_API_TOKEN environment variable rather than
- placing your API token in plain text in your source code.
- - name (str, optional, default is 'Untitled'): Editable name of
- the run. Name is displayed in the run's Details and in
- Runs table as a column.
- Check https://docs.neptune.ai/api-reference/neptune#init for
- more init arguments.
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging
- by_epoch (bool): Whether EpochBasedRunner is used.
-
- .. _NeptuneAI:
- https://docs.neptune.ai/you-should-know/logging-metadata
- """
-
- def __init__(self,
- init_kwargs=None,
- interval=10,
- ignore_last=True,
- reset_flag=True,
- with_step=True,
- by_epoch=True):
-
- super(NeptuneLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.import_neptune()
- self.init_kwargs = init_kwargs
- self.with_step = with_step
-
- def import_neptune(self):
- try:
- import neptune.new as neptune
- except ImportError:
- raise ImportError(
- 'Please run "pip install neptune-client" to install neptune')
- self.neptune = neptune
- self.run = None
-
- @master_only
- def before_run(self, runner):
- if self.init_kwargs:
- self.run = self.neptune.init(**self.init_kwargs)
- else:
- self.run = self.neptune.init()
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- for tag_name, tag_value in tags.items():
- if self.with_step:
- self.run[tag_name].log(
- tag_value, step=self.get_iter(runner))
- else:
- tags['global_step'] = self.get_iter(runner)
- self.run[tag_name].log(tags)
-
- @master_only
- def after_run(self, runner):
- self.run.stop()
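-
-
-# A minimal configuration sketch (not part of the original file): how this
-# hook is typically declared in an mmcv-style config. The project name is a
-# placeholder and the API token is read from the NEPTUNE_API_TOKEN env var.
-#
-#   log_config = dict(
-#       interval=50,
-#       hooks=[
-#           dict(type='NeptuneLoggerHook',
-#                init_kwargs=dict(project='my-workspace/my-project',
-#                                 name='baseline-run')),
-#       ])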
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py
deleted file mode 100644
index fa9df39bc9f3d8d568361e7250ab35468f2b74e0..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/utils/registry.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import inspect
-import warnings
-from functools import partial
-
-from .misc import is_seq_of
-
-
-def build_from_cfg(cfg, registry, default_args=None):
- """Build a module from config dict.
-
- Args:
- cfg (dict): Config dict. It should at least contain the key "type".
- registry (:obj:`Registry`): The registry to search the type from.
- default_args (dict, optional): Default initialization arguments.
-
- Returns:
- object: The constructed object.
- """
- if not isinstance(cfg, dict):
- raise TypeError(f'cfg must be a dict, but got {type(cfg)}')
- if 'type' not in cfg:
- if default_args is None or 'type' not in default_args:
- raise KeyError(
- '`cfg` or `default_args` must contain the key "type", '
- f'but got {cfg}\n{default_args}')
- if not isinstance(registry, Registry):
- raise TypeError('registry must be an mmcv.Registry object, '
- f'but got {type(registry)}')
- if not (isinstance(default_args, dict) or default_args is None):
- raise TypeError('default_args must be a dict or None, '
- f'but got {type(default_args)}')
-
- args = cfg.copy()
-
- if default_args is not None:
- for name, value in default_args.items():
- args.setdefault(name, value)
-
- obj_type = args.pop('type')
- if isinstance(obj_type, str):
- obj_cls = registry.get(obj_type)
- if obj_cls is None:
- raise KeyError(
- f'{obj_type} is not in the {registry.name} registry')
- elif inspect.isclass(obj_type):
- obj_cls = obj_type
- else:
- raise TypeError(
- f'type must be a str or valid type, but got {type(obj_type)}')
- try:
- return obj_cls(**args)
- except Exception as e:
- # Normal TypeError does not print class name.
- raise type(e)(f'{obj_cls.__name__}: {e}')
-
-
-class Registry:
- """A registry to map strings to classes.
-
- Registered object could be built from registry.
- Example:
- >>> MODELS = Registry('models')
- >>> @MODELS.register_module()
- >>> class ResNet:
- >>> pass
- >>> resnet = MODELS.build(dict(type='ResNet'))
-
- Please refer to
- https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for
- advanced usage.
-
- Args:
- name (str): Registry name.
- build_func(func, optional): Build function to construct instance from
-            Registry, func:`build_from_cfg` is used if neither ``parent`` nor
- ``build_func`` is specified. If ``parent`` is specified and
- ``build_func`` is not given, ``build_func`` will be inherited
- from ``parent``. Default: None.
- parent (Registry, optional): Parent registry. The class registered in
- children registry could be built from parent. Default: None.
- scope (str, optional): The scope of registry. It is the key to search
- for children registry. If not specified, scope will be the name of
- the package where class is defined, e.g. mmdet, mmcls, mmseg.
- Default: None.
- """
-
- def __init__(self, name, build_func=None, parent=None, scope=None):
- self._name = name
- self._module_dict = dict()
- self._children = dict()
- self._scope = self.infer_scope() if scope is None else scope
-
- # self.build_func will be set with the following priority:
- # 1. build_func
- # 2. parent.build_func
- # 3. build_from_cfg
- if build_func is None:
- if parent is not None:
- self.build_func = parent.build_func
- else:
- self.build_func = build_from_cfg
- else:
- self.build_func = build_func
- if parent is not None:
- assert isinstance(parent, Registry)
- parent._add_children(self)
- self.parent = parent
- else:
- self.parent = None
-
- def __len__(self):
- return len(self._module_dict)
-
- def __contains__(self, key):
- return self.get(key) is not None
-
- def __repr__(self):
- format_str = self.__class__.__name__ + \
- f'(name={self._name}, ' \
- f'items={self._module_dict})'
- return format_str
-
- @staticmethod
- def infer_scope():
- """Infer the scope of registry.
-
- The name of the package where registry is defined will be returned.
-
- Example:
- # in mmdet/models/backbone/resnet.py
- >>> MODELS = Registry('models')
- >>> @MODELS.register_module()
- >>> class ResNet:
- >>> pass
- The scope of ``ResNet`` will be ``mmdet``.
-
-
- Returns:
- scope (str): The inferred scope name.
- """
- # inspect.stack() trace where this function is called, the index-2
- # indicates the frame where `infer_scope()` is called
- filename = inspect.getmodule(inspect.stack()[2][0]).__name__
- split_filename = filename.split('.')
- return split_filename[0]
-
- @staticmethod
- def split_scope_key(key):
- """Split scope and key.
-
- The first scope will be split from key.
-
- Examples:
- >>> Registry.split_scope_key('mmdet.ResNet')
- 'mmdet', 'ResNet'
- >>> Registry.split_scope_key('ResNet')
- None, 'ResNet'
-
- Return:
- scope (str, None): The first scope.
- key (str): The remaining key.
- """
- split_index = key.find('.')
- if split_index != -1:
- return key[:split_index], key[split_index + 1:]
- else:
- return None, key
-
- @property
- def name(self):
- return self._name
-
- @property
- def scope(self):
- return self._scope
-
- @property
- def module_dict(self):
- return self._module_dict
-
- @property
- def children(self):
- return self._children
-
- def get(self, key):
- """Get the registry record.
-
- Args:
- key (str): The class name in string format.
-
- Returns:
- class: The corresponding class.
- """
- scope, real_key = self.split_scope_key(key)
- if scope is None or scope == self._scope:
- # get from self
- if real_key in self._module_dict:
- return self._module_dict[real_key]
- else:
- # get from self._children
- if scope in self._children:
- return self._children[scope].get(real_key)
- else:
- # goto root
- parent = self.parent
- while parent.parent is not None:
- parent = parent.parent
- return parent.get(key)
-
- def build(self, *args, **kwargs):
- return self.build_func(*args, **kwargs, registry=self)
-
- def _add_children(self, registry):
- """Add children for a registry.
-
- The ``registry`` will be added as children based on its scope.
- The parent registry could build objects from children registry.
-
- Example:
- >>> models = Registry('models')
- >>> mmdet_models = Registry('models', parent=models)
- >>> @mmdet_models.register_module()
- >>> class ResNet:
- >>> pass
- >>> resnet = models.build(dict(type='mmdet.ResNet'))
- """
-
- assert isinstance(registry, Registry)
- assert registry.scope is not None
- assert registry.scope not in self.children, \
- f'scope {registry.scope} exists in {self.name} registry'
- self.children[registry.scope] = registry
-
- def _register_module(self, module_class, module_name=None, force=False):
- if not inspect.isclass(module_class):
- raise TypeError('module must be a class, '
- f'but got {type(module_class)}')
-
- if module_name is None:
- module_name = module_class.__name__
- if isinstance(module_name, str):
- module_name = [module_name]
- for name in module_name:
- if not force and name in self._module_dict:
- raise KeyError(f'{name} is already registered '
- f'in {self.name}')
- self._module_dict[name] = module_class
-
- def deprecated_register_module(self, cls=None, force=False):
- warnings.warn(
- 'The old API of register_module(module, force=False) '
- 'is deprecated and will be removed, please use the new API '
- 'register_module(name=None, force=False, module=None) instead.')
- if cls is None:
- return partial(self.deprecated_register_module, force=force)
- self._register_module(cls, force=force)
- return cls
-
- def register_module(self, name=None, force=False, module=None):
- """Register a module.
-
- A record will be added to `self._module_dict`, whose key is the class
- name or the specified name, and value is the class itself.
- It can be used as a decorator or a normal function.
-
- Example:
- >>> backbones = Registry('backbone')
- >>> @backbones.register_module()
- >>> class ResNet:
- >>> pass
-
- >>> backbones = Registry('backbone')
- >>> @backbones.register_module(name='mnet')
- >>> class MobileNet:
- >>> pass
-
- >>> backbones = Registry('backbone')
- >>> class ResNet:
- >>> pass
- >>> backbones.register_module(ResNet)
-
- Args:
- name (str | None): The module name to be registered. If not
- specified, the class name will be used.
- force (bool, optional): Whether to override an existing class with
- the same name. Default: False.
- module (type): Module class to be registered.
- """
- if not isinstance(force, bool):
- raise TypeError(f'force must be a boolean, but got {type(force)}')
-        # NOTE: This is a workaround to be compatible with the old API,
- # while it may introduce unexpected bugs.
- if isinstance(name, type):
- return self.deprecated_register_module(name, force=force)
-
- # raise the error ahead of time
- if not (name is None or isinstance(name, str) or is_seq_of(name, str)):
- raise TypeError(
- 'name must be either of None, an instance of str or a sequence'
- f' of str, but got {type(name)}')
-
- # use it as a normal method: x.register_module(module=SomeClass)
- if module is not None:
- self._register_module(
- module_class=module, module_name=name, force=force)
- return module
-
- # use it as a decorator: @x.register_module()
- def _register(cls):
- self._register_module(
- module_class=cls, module_name=name, force=force)
- return cls
-
- return _register
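-
-
-# A minimal usage sketch (not part of the original file): a standalone registry
-# plus build_from_cfg with default_args, complementing the decorator examples
-# in the docstrings above. The ToyDataset class is illustrative.
-if __name__ == "__main__":
-    DATASETS = Registry('datasets')
-
-    @DATASETS.register_module()
-    class ToyDataset:
-        def __init__(self, size, shuffle=False):
-            self.size = size
-            self.shuffle = shuffle
-
-    ds = build_from_cfg(dict(type='ToyDataset', size=10),
-                        DATASETS,
-                        default_args=dict(shuffle=True))
-    print(type(ds).__name__, ds.size, ds.shuffle)  # ToyDataset 10 True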
diff --git a/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html b/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html
deleted file mode 100644
index a6119b78615df60cc9a315c35e24eb827fe94dc7..0000000000000000000000000000000000000000
--- a/spaces/SIGMitch/Real-Time-Chad/txt2img/index.html
+++ /dev/null
@@ -1,263 +0,0 @@
-
-
-
-
-
- Real-Time Chad
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-    Real-Time Latent Consistency Model
-
-    Text to Image
-
- Hope you lik pain.
-
-
- There are 0 user(s) sharing the same GPU, affecting
- real-time performance. Maximum queue size is 10.
-
-
-
-
-    Prompt
-
- Start your session and type your prompt here, accepts
- Compel syntax.
-
-
-
-
-
-
-
-
- Advanced Options
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md b/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md
deleted file mode 100644
index be11355ea4d63c4d589b72032ba14675b2d0f555..0000000000000000000000000000000000000000
--- a/spaces/SShaik/SS-03-GR-AI-Text2ArtGenerator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: SS 03 GR AI Text2ArtGenerator
-emoji: 📚
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Salesforce/BLIP/models/nlvr_encoder.py b/spaces/Salesforce/BLIP/models/nlvr_encoder.py
deleted file mode 100644
index 1946bb4a300f75afa4848f6622839445903c34a9..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/BLIP/models/nlvr_encoder.py
+++ /dev/null
@@ -1,843 +0,0 @@
-import math
-import os
-import warnings
-from dataclasses import dataclass
-from typing import Optional, Tuple
-
-import torch
-from torch import Tensor, device, dtype, nn
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import CrossEntropyLoss
-import torch.nn.functional as F
-
-from transformers.activations import ACT2FN
-from transformers.file_utils import (
- ModelOutput,
-)
-from transformers.modeling_outputs import (
- BaseModelOutputWithPastAndCrossAttentions,
- BaseModelOutputWithPoolingAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- NextSentencePredictorOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from transformers.modeling_utils import (
- PreTrainedModel,
- apply_chunking_to_forward,
- find_pruneable_heads_and_indices,
- prune_linear_layer,
-)
-from transformers.utils import logging
-from transformers.models.bert.configuration_bert import BertConfig
-
-
-logger = logging.get_logger(__name__)
-
-
-class BertEmbeddings(nn.Module):
- """Construct the embeddings from word and position embeddings."""
-
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)))
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
-
- self.config = config
-
- def forward(
- self, input_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- if position_ids is None:
- position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
-
- embeddings = inputs_embeds
-
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
-
-class BertSelfAttention(nn.Module):
- def __init__(self, config, is_cross_attention):
- super().__init__()
- self.config = config
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
- raise ValueError(
- "The hidden size (%d) is not a multiple of the number of attention "
- "heads (%d)" % (config.hidden_size, config.num_attention_heads)
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- if is_cross_attention:
- self.key = nn.Linear(config.encoder_width, self.all_head_size)
- self.value = nn.Linear(config.encoder_width, self.all_head_size)
- else:
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
- self.save_attention = False
-
- def save_attn_gradients(self, attn_gradients):
- self.attn_gradients = attn_gradients
-
- def get_attn_gradients(self):
- return self.attn_gradients
-
- def save_attention_map(self, attention_map):
- self.attention_map = attention_map
-
- def get_attention_map(self):
- return self.attention_map
-
- def transpose_for_scores(self, x):
- new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
- x = x.view(*new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
- mixed_query_layer = self.query(hidden_states)
-
- # If this is instantiated as a cross-attention module, the keys
- # and values come from an encoder; the attention mask needs to be
- # such that the encoder's padding tokens are not attended to.
- is_cross_attention = encoder_hidden_states is not None
-
- if is_cross_attention:
- key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
- value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
- attention_mask = encoder_attention_mask
- elif past_key_value is not None:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
- key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
- value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
- else:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
-
- past_key_value = (key_layer, value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
-
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- seq_length = hidden_states.size()[1]
- position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
- position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
- distance = position_ids_l - position_ids_r
- positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
- positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
- if attention_mask is not None:
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
- attention_scores = attention_scores + attention_mask
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
-
- if is_cross_attention and self.save_attention:
- self.save_attention_map(attention_probs)
- attention_probs.register_hook(self.save_attn_gradients)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs_dropped = self.dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs_dropped = attention_probs_dropped * head_mask
-
- context_layer = torch.matmul(attention_probs_dropped, value_layer)
-
- context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
- new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
- context_layer = context_layer.view(*new_context_layer_shape)
-
- outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
-
- outputs = outputs + (past_key_value,)
- return outputs
-
-
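-# A minimal standalone sketch (not used by any class in this file; all sizes are illustrative) of
-# how the "relative_key" distance embeddings computed in BertSelfAttention enter the attention scores:
-def _relative_key_scores_sketch():
- b, h, l, d = 2, 4, 8, 16 # batch, heads, seq_len, head_dim
- query_layer = torch.randn(b, h, l, d)
- key_layer = torch.randn(b, h, l, d)
- max_pos = 32
- distance_embedding = nn.Embedding(2 * max_pos - 1, d)
- pos = torch.arange(l)
- distance = pos.view(-1, 1) - pos.view(1, -1) # (l, l) signed query-key distances
- positional_embedding = distance_embedding(distance + max_pos - 1) # (l, l, d), indices shifted to be non-negative
- scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
- scores = scores + torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- return scores / math.sqrt(d) # same scaling as in BertSelfAttention.forward
-
-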
-class BertSelfOutput(nn.Module):
- def __init__(self, config, twin=False, merge=False):
- super().__init__()
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- if twin:
- self.dense0 = nn.Linear(config.hidden_size, config.hidden_size)
- self.dense1 = nn.Linear(config.hidden_size, config.hidden_size)
- else:
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- if merge:
- self.act = ACT2FN[config.hidden_act]
- self.merge_layer = nn.Linear(config.hidden_size * 2, config.hidden_size)
- self.merge = True
- else:
- self.merge = False
-
- def forward(self, hidden_states, input_tensor):
- if isinstance(hidden_states, list):
- hidden_states0 = self.dense0(hidden_states[0])
- hidden_states1 = self.dense1(hidden_states[1])
- if self.merge:
- #hidden_states = self.merge_layer(self.act(torch.cat([hidden_states0,hidden_states1],dim=-1)))
- hidden_states = self.merge_layer(torch.cat([hidden_states0,hidden_states1],dim=-1))
- else:
- hidden_states = (hidden_states0+hidden_states1)/2
- else:
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertAttention(nn.Module):
- def __init__(self, config, is_cross_attention=False, layer_num=-1):
- super().__init__()
- if is_cross_attention:
- self.self0 = BertSelfAttention(config, is_cross_attention)
- self.self1 = BertSelfAttention(config, is_cross_attention)
- else:
- self.self = BertSelfAttention(config, is_cross_attention)
- self.output = BertSelfOutput(config, twin=is_cross_attention, merge=(is_cross_attention and layer_num>=6))
- self.pruned_heads = set()
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
- )
-
- # Prune linear layers
- self.self.query = prune_linear_layer(self.self.query, index)
- self.self.key = prune_linear_layer(self.self.key, index)
- self.self.value = prune_linear_layer(self.self.value, index)
- self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
- self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- ):
- if isinstance(encoder_hidden_states, list):
- self_outputs0 = self.self0(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states[0],
- encoder_attention_mask[0],
- past_key_value,
- output_attentions,
- )
- self_outputs1 = self.self1(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states[1],
- encoder_attention_mask[1],
- past_key_value,
- output_attentions,
- )
- attention_output = self.output([self_outputs0[0],self_outputs1[0]], hidden_states)
-
- outputs = (attention_output,) + self_outputs0[1:] # add attentions if we output them
- else:
- self_outputs = self.self(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- )
- attention_output = self.output(self_outputs[0], hidden_states)
- outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
- return outputs
-
-
-class BertIntermediate(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
- if isinstance(config.hidden_act, str):
- self.intermediate_act_fn = ACT2FN[config.hidden_act]
- else:
- self.intermediate_act_fn = config.hidden_act
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.intermediate_act_fn(hidden_states)
- return hidden_states
-
-
-class BertOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states, input_tensor):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-class BertLayer(nn.Module):
- def __init__(self, config, layer_num):
- super().__init__()
- self.config = config
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.attention = BertAttention(config)
- self.layer_num = layer_num
- if self.config.add_cross_attention:
- self.crossattention = BertAttention(config, is_cross_attention=self.config.add_cross_attention, layer_num=layer_num)
- self.intermediate = BertIntermediate(config)
- self.output = BertOutput(config)
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_value=None,
- output_attentions=False,
- mode=None,
- ):
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
- self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
- self_attention_outputs = self.attention(
- hidden_states,
- attention_mask,
- head_mask,
- output_attentions=output_attentions,
- past_key_value=self_attn_past_key_value,
- )
- attention_output = self_attention_outputs[0]
-
- outputs = self_attention_outputs[1:-1]
- present_key_value = self_attention_outputs[-1]
-
- if mode=='multimodal':
- assert encoder_hidden_states is not None, "encoder_hidden_states must be given for cross-attention layers"
- cross_attention_outputs = self.crossattention(
- attention_output,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- output_attentions=output_attentions,
- )
- attention_output = cross_attention_outputs[0]
- outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
- )
- outputs = (layer_output,) + outputs
-
- outputs = outputs + (present_key_value,)
-
- return outputs
-
- def feed_forward_chunk(self, attention_output):
- intermediate_output = self.intermediate(attention_output)
- layer_output = self.output(intermediate_output, attention_output)
- return layer_output
-
-
-class BertEncoder(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.config = config
- self.layer = nn.ModuleList([BertLayer(config,i) for i in range(config.num_hidden_layers)])
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states,
- attention_mask=None,
- head_mask=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=False,
- output_hidden_states=False,
- return_dict=True,
- mode='multimodal',
- ):
- all_hidden_states = () if output_hidden_states else None
- all_self_attentions = () if output_attentions else None
- all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
-
- next_decoder_cache = () if use_cache else None
-
- for i in range(self.config.num_hidden_layers):
- layer_module = self.layer[i]
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- layer_head_mask = head_mask[i] if head_mask is not None else None
- past_key_value = past_key_values[i] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- if use_cache:
- logger.warning(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs, past_key_value, output_attentions)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- mode=mode,
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- mode=mode,
- )
-
- hidden_states = layer_outputs[0]
- if use_cache:
- next_decoder_cache += (layer_outputs[-1],)
- if output_attentions:
- all_self_attentions = all_self_attentions + (layer_outputs[1],)
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- next_decoder_cache,
- all_hidden_states,
- all_self_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=next_decoder_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-class BertPooler(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.activation = nn.Tanh()
-
- def forward(self, hidden_states):
- # We "pool" the model by simply taking the hidden state corresponding
- # to the first token.
- first_token_tensor = hidden_states[:, 0]
- pooled_output = self.dense(first_token_tensor)
- pooled_output = self.activation(pooled_output)
- return pooled_output
-
-
-class BertPredictionHeadTransform(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- if isinstance(config.hidden_act, str):
- self.transform_act_fn = ACT2FN[config.hidden_act]
- else:
- self.transform_act_fn = config.hidden_act
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
- def forward(self, hidden_states):
- hidden_states = self.dense(hidden_states)
- hidden_states = self.transform_act_fn(hidden_states)
- hidden_states = self.LayerNorm(hidden_states)
- return hidden_states
-
-
-class BertLMPredictionHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.transform = BertPredictionHeadTransform(config)
-
- # The output weights are the same as the input embeddings, but there is
- # an output-only bias for each token.
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
-
- # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
- self.decoder.bias = self.bias
-
- def forward(self, hidden_states):
- hidden_states = self.transform(hidden_states)
- hidden_states = self.decoder(hidden_states)
- return hidden_states
-
-
-class BertOnlyMLMHead(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.predictions = BertLMPredictionHead(config)
-
- def forward(self, sequence_output):
- prediction_scores = self.predictions(sequence_output)
- return prediction_scores
-
-
-class BertPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = BertConfig
- base_model_prefix = "bert"
- _keys_to_ignore_on_load_missing = [r"position_ids"]
-
- def _init_weights(self, module):
- """ Initialize the weights """
- if isinstance(module, (nn.Linear, nn.Embedding)):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
-
-class BertModel(BertPreTrainedModel):
- """
- The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
- cross-attention is added between the self-attention layers, following the architecture described in `Attention is
- all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
- Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To be used as a decoder, the model needs to be
- initialized with the :obj:`is_decoder` argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an
- input to the forward pass.
- """
-
- def __init__(self, config, add_pooling_layer=True):
- super().__init__(config)
- self.config = config
-
- self.embeddings = BertEmbeddings(config)
-
- self.encoder = BertEncoder(config)
-
- self.pooler = BertPooler(config) if add_pooling_layer else None
-
- self.init_weights()
-
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
- class PreTrainedModel
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
-
- def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple[int], device: device, is_decoder: bool) -> Tensor:
- """
- Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
-
- Arguments:
- attention_mask (:obj:`torch.Tensor`):
- Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
- input_shape (:obj:`Tuple[int]`):
- The shape of the input to the model.
- device: (:obj:`torch.device`):
- The device of the input to the model.
-
- Returns:
- :obj:`torch.Tensor` The extended attention mask, with the same dtype as :obj:`attention_mask.dtype`.
- """
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- if attention_mask.dim() == 3:
- extended_attention_mask = attention_mask[:, None, :, :]
- elif attention_mask.dim() == 2:
- # Provided a padding mask of dimensions [batch_size, seq_length]
- # - if the model is a decoder, apply a causal mask in addition to the padding mask
- # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if is_decoder:
- batch_size, seq_length = input_shape
-
- seq_ids = torch.arange(seq_length, device=device)
- causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]
- # in case past_key_values are used we need to add a prefix ones mask to the causal mask
- # causal and attention masks must have same type with pytorch version < 1.3
- causal_mask = causal_mask.to(attention_mask.dtype)
-
- if causal_mask.shape[1] < attention_mask.shape[1]:
- prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1]
- causal_mask = torch.cat(
- [
- torch.ones((batch_size, seq_length, prefix_seq_len), device=device, dtype=causal_mask.dtype),
- causal_mask,
- ],
- axis=-1,
- )
-
- extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
- else:
- extended_attention_mask = attention_mask[:, None, None, :]
- else:
- raise ValueError(
- "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(
- input_shape, attention_mask.shape
- )
- )
-
- # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
- # masked positions, this operation will create a tensor which is 0.0 for
- # positions we want to attend and -10000.0 for masked positions.
- # Since we are adding it to the raw scores before the softmax, this is
- # effectively the same as removing these entirely.
- extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
- return extended_attention_mask
-
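- # Worked example of the mask arithmetic above (values are illustrative): a padding mask
- # [[1, 1, 1, 0]] is reshaped to (1, 1, 1, 4) and mapped through (1.0 - mask) * -10000.0,
- # giving [[[[0.0, 0.0, 0.0, -10000.0]]]]; added to the raw scores, the padded position is
- # effectively removed by the softmax. For a decoder, the causal factor
- # seq_ids[None, None, :] <= seq_ids[None, :, None] is the lower-triangular matrix
- # [[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]], multiplied elementwise into the padding mask.
-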
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- is_decoder=False,
- mode='multimodal',
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- device = input_ids.device
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- device = inputs_embeds.device
- elif encoder_embeds is not None:
- input_shape = encoder_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- device = encoder_embeds.device
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds or encoder_embeds")
-
- # past_key_values_length
- past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
-
- if attention_mask is None:
- attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape,
- device, is_decoder)
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if encoder_hidden_states is not None:
- if isinstance(encoder_hidden_states, list):
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[0].size()
- else:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
-
- if isinstance(encoder_attention_mask, list):
- encoder_extended_attention_mask = [self.invert_attention_mask(mask) for mask in encoder_attention_mask]
- elif encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- if encoder_embeds is None:
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
- else:
- embedding_output = encoder_embeds
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- mode=mode,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
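-# Hedged usage sketch (tensor sizes and variable names here are hypothetical, not taken from this
-# repo): the NLVR-specific twist in this encoder is that `encoder_hidden_states` and
-# `encoder_attention_mask` may each be a *list* of two tensors, one per image, which are routed
-# through the twin cross-attention branches (self0 / self1) and merged by a learned linear layer
-# from layer 6 onwards (simply averaged in earlier layers):
-#
-# model = BertModel(config, add_pooling_layer=False)
-# image_embeds = [torch.randn(2, 197, config.encoder_width), torch.randn(2, 197, config.encoder_width)]
-# image_atts = [torch.ones(2, 197, dtype=torch.long), torch.ones(2, 197, dtype=torch.long)]
-# out = model(input_ids, attention_mask=attention_mask,
-#             encoder_hidden_states=image_embeds,
-#             encoder_attention_mask=image_atts,
-#             return_dict=True)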
diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py
deleted file mode 100644
index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000
--- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/korean.py
+++ /dev/null
@@ -1,210 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
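-# Doctest-style examples for the helpers above (these two need only `re`; `jamo` and `ko_pron`
-# are required only by divide_hangul and the IPA functions below):
-#
-# >>> latin_to_hangul('abc') # each Latin letter is spelled with its Korean name
-# '에이비시'
-# >>> number_to_hangul('2마리') # native-Korean numeral before a counted classifier
-# '두마리'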
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text = re.sub('[\uac00-\ud7af]+', lambda x: ko_pron.romanise(x.group(0), 'ipa').split('] ~ [')[0], text)
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def korean_to_ipa(text):
- text = korean_to_lazy_ipa(text)
- return text.replace('ʧ','tʃ').replace('ʥ','dʑ')
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py b/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py
deleted file mode 100644
index 7a387b888f63ecd6f1f1bd3ed10aa2176a944d2c..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1174 +0,0 @@
-import math
-import logging
-
-logger = logging.getLogger(__name__)
-
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d
-from torch.nn import functional as F
-from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm
-
-from infer.lib.infer_pack import attentions, commons, modules
-from infer.lib.infer_pack.commons import get_padding, init_weights
-has_xpu = bool(hasattr(torch, "xpu") and torch.xpu.is_available())
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- if uv.device.type == "privateuseone": # for DirectML
- uv = uv.float()
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # taking % 1 here means the n_har products cannot be optimized away afterwards
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 (applying % 1 here would keep the cumsum below from being further optimized)
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
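-# Shape sketch for SineGen (numbers are illustrative): `upp` is the total upsampling factor, so a
-# (batch, frames) F0 contour yields a (batch, frames * upp, harmonic_num + 1) sine tensor, e.g.
-#
-# sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
-# f0 = torch.full((1, 100), 220.0) # 100 frames of a 220 Hz contour
-# sine_waves, uv, noise = sine_gen(f0, upp=400) # sine_waves: (1, 40000, 1), uv: (1, 40000, 1)
-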
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- if not hasattr(self, "ddtype"):
- self.ddtype = self.l_linear.weight.dtype
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- # print(x.dtype,sine_wavs.dtype,self.l_linear.weight.dtype)
- # if self.is_half:
- # sine_wavs = sine_wavs.half()
- # sine_merge = self.l_tanh(self.l_linear(sine_wavs.to(x)))
- # print(sine_wavs.dtype,self.ddtype)
- if sine_wavs.dtype != self.ddtype:
- sine_wavs = sine_wavs.to(self.ddtype)
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- logger.debug(
- "gin_channels: "
- + str(gin_channels)
- + ", self.spk_embed_dim: "
- + str(self.spk_embed_dim)
- )
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
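- # When `rate` is set (0 < rate <= 1), only the trailing `rate` fraction of the latent frames is
- # decoded, e.g. rate=0.5 synthesizes roughly the last half of the requested audio.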
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- logger.debug(
- "gin_channels: "
- + str(gin_channels)
- + ", self.spk_embed_dim: "
- + str(self.spk_embed_dim)
- )
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- logger.debug(
- "gin_channels: "
- + str(gin_channels)
- + ", self.spk_embed_dim: "
- + str(self.spk_embed_dim)
- )
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # here, ds is the speaker id with shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts over t
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- logger.debug(
- "gin_channels: "
- + str(gin_channels)
- + ", self.spk_embed_dim: "
- + str(self.spk_embed_dim)
- )
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # here, ds is the speaker id with shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts over t
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- if has_xpu and x.dtype == torch.bfloat16:
- x = F.pad(x.to(dtype=torch.float16), (0, n_pad), "reflect").to(dtype=torch.bfloat16)
- else:
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
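
A minimal, self-contained sketch (not part of the deleted file) of the 1D-to-2D folding that DiscriminatorP.forward performs above: the waveform is reflect-padded so its length is a multiple of `period`, then viewed as (batch, channels, frames, period) so 2D convolutions can pick up periodic structure. The tensor sizes are illustrative only.

import torch
import torch.nn.functional as F

period = 5
x = torch.randn(2, 1, 1003)               # (batch, channels, samples)
b, c, t = x.shape
if t % period != 0:                       # pad so the length divides evenly
    n_pad = period - (t % period)
    x = F.pad(x, (0, n_pad), "reflect")
    t = t + n_pad
x = x.view(b, c, t // period, period)     # (2, 1, 201, 5): frames x period
print(x.shape)
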
diff --git a/spaces/Shad0ws/crowdcounting/model.py b/spaces/Shad0ws/crowdcounting/model.py
deleted file mode 100644
index dcba66762d1152a6587649a4816101d7734b8a7b..0000000000000000000000000000000000000000
--- a/spaces/Shad0ws/crowdcounting/model.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import torch.nn as nn
-import torch
-from torch.nn import functional as F
-from torchvision import models
-
-class ContextualModule(nn.Module):
- def __init__(self, features, out_features=512, sizes=(1, 2, 3, 6)):
- super(ContextualModule, self).__init__()
- self.scales = nn.ModuleList([self._make_scale(features, size) for size in sizes])
- self.bottleneck = nn.Conv2d(features * 2, out_features, kernel_size=1)
- self.relu = nn.ReLU()
- self.weight_net = nn.Conv2d(features,features,kernel_size=1)
-
- def __make_weight(self, feature, scale_feature):
- # weight each pooled scale by how much it differs from the original feature map
- weight_feature = feature - scale_feature
- return torch.sigmoid(self.weight_net(weight_feature))
-
- def _make_scale(self, features, size):
- prior = nn.AdaptiveAvgPool2d(output_size=(size, size))
- conv = nn.Conv2d(features, features, kernel_size=1, bias=False)
- return nn.Sequential(prior, conv)
-
- def forward(self, feats):
- h, w = feats.size(2), feats.size(3)
- multi_scales = [F.interpolate(input=stage(feats), size=(h, w), mode='bilinear') for stage in self.scales]
- weights = [self.__make_weight(feats, scale_feature) for scale_feature in multi_scales]
- # contrast-weighted average of the pooled scales, concatenated with the original features
- weighted = sum(ms * w for ms, w in zip(multi_scales, weights)) / sum(weights)
- overall_features = [weighted, feats]
- bottle = self.bottleneck(torch.cat(overall_features, 1))
- return self.relu(bottle)
-
-class CANNet(nn.Module):
- def __init__(self, load_weights=False):
- super(CANNet, self).__init__()
- self.seen = 0
- self.context = ContextualModule(512, 512)
- self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512]
- self.backend_feat = [512, 512, 512,256,128,64]
- self.frontend = make_layers(self.frontend_feat)
- self.backend = make_layers(self.backend_feat, in_channels=512, batch_norm=True, dilation=True)
- self.output_layer = nn.Conv2d(64, 1, kernel_size=1)
- if not load_weights:
- mod = models.vgg16(pretrained=True)
- self._initialize_weights()
- for i in range(len(self.frontend.state_dict().items())):
- list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:]
-
- def forward(self,x):
- x = self.frontend(x)
- x = self.context(x)
- x = self.backend(x)
- x = self.output_layer(x)
- return x
-
- def _initialize_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.normal_(m.weight, std=0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.BatchNorm2d):
- nn.init.constant_(m.weight, 1)
- nn.init.constant_(m.bias, 0)
-
- def make_layers(cfg, in_channels=3, batch_norm=False, dilation=False):
- if dilation:
- d_rate = 2
- else:
- d_rate = 1
- layers = []
- for v in cfg:
- if v == 'M':
- layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
- else:
- conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=d_rate, dilation=d_rate)
- if batch_norm:
- layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
- else:
- layers += [conv2d, nn.ReLU(inplace=True)]
- in_channels = v
- return nn.Sequential(*layers)
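
A quick shape-check sketch for the CANNet defined above, assuming the file is saved and importable as `model.py`. Passing `load_weights=True` only skips copying the pretrained VGG-16 weights; the density map comes out at 1/8 of the input resolution because the frontend contains three 2x2 max-pools, and the estimated crowd count is the sum over the map.

import torch
from model import CANNet  # assumes the file above is on the path as model.py

net = CANNet(load_weights=True)   # True skips the VGG-16 download for a dry run
net.eval()
with torch.no_grad():
    density = net(torch.randn(1, 3, 384, 512))
print(density.shape)              # torch.Size([1, 1, 48, 64])
print(density.sum().item())       # crowd count estimate = sum over the density map
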
diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py
deleted file mode 100644
index 7a1b7b101e9b73f75f9136bc67f2063c7c1cf1c1..0000000000000000000000000000000000000000
--- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/util/visualizer.py
+++ /dev/null
@@ -1,318 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-@File : visualizer.py
-@Time : 2022/04/05 11:39:33
-@Author : Shilong Liu
-@Contact : slongliu86@gmail.com
-"""
-
-import datetime
-import os
-
-import cv2
-import matplotlib.pyplot as plt
-import numpy as np
-import torch
-from matplotlib import transforms
-from matplotlib.collections import PatchCollection
-from matplotlib.patches import Polygon
-from pycocotools import mask as maskUtils
-
-
-def renorm(
- img: torch.FloatTensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
-) -> torch.FloatTensor:
- # img: tensor(3,H,W) or tensor(B,3,H,W)
- # return: same as img
- assert img.dim() == 3 or img.dim() == 4, "img.dim() should be 3 or 4 but %d" % img.dim()
- if img.dim() == 3:
- assert img.size(0) == 3, 'img.size(0) should be 3 but "%d". (%s)' % (
- img.size(0),
- str(img.size()),
- )
- img_perm = img.permute(1, 2, 0)
- mean = torch.Tensor(mean)
- std = torch.Tensor(std)
- img_res = img_perm * std + mean
- return img_res.permute(2, 0, 1)
- else: # img.dim() == 4
- assert img.size(1) == 3, 'img.size(1) should be 3 but "%d". (%s)' % (
- img.size(1),
- str(img.size()),
- )
- img_perm = img.permute(0, 2, 3, 1)
- mean = torch.Tensor(mean)
- std = torch.Tensor(std)
- img_res = img_perm * std + mean
- return img_res.permute(0, 3, 1, 2)
-
-
-class ColorMap:
- def __init__(self, basergb=[255, 255, 0]):
- self.basergb = np.array(basergb)
-
- def __call__(self, attnmap):
- # attnmap: h, w. np.uint8.
- # return: h, w, 4. np.uint8.
- assert attnmap.dtype == np.uint8
- h, w = attnmap.shape
- res = self.basergb.copy()
- res = res[None][None].repeat(h, 0).repeat(w, 1) # h, w, 3
- attn1 = attnmap.copy()[..., None] # h, w, 1
- res = np.concatenate((res, attn1), axis=-1).astype(np.uint8)
- return res
-
-
-def rainbow_text(x, y, ls, lc, **kw):
- """
- Take a list of strings ``ls`` and colors ``lc`` and place them next to each
- other, with text ls[i] being shown in color lc[i].
-
- This example shows how to do both vertical and horizontal text, and will
- pass all keyword arguments to plt.text, so you can set the font size,
- family, etc.
- """
- t = plt.gca().transData
- fig = plt.gcf()
- plt.show()
-
- # horizontal version
- for s, c in zip(ls, lc):
- text = plt.text(x, y, " " + s + " ", color=c, transform=t, **kw)
- text.draw(fig.canvas.get_renderer())
- ex = text.get_window_extent()
- t = transforms.offset_copy(text._transform, x=ex.width, units="dots")
-
- # #vertical version
- # for s,c in zip(ls,lc):
- # text = plt.text(x,y," "+s+" ",color=c, transform=t,
- # rotation=90,va='bottom',ha='center',**kw)
- # text.draw(fig.canvas.get_renderer())
- # ex = text.get_window_extent()
- # t = transforms.offset_copy(text._transform, y=ex.height, units='dots')
-
-
-class COCOVisualizer:
- def __init__(self, coco=None, tokenlizer=None) -> None:
- self.coco = coco
-
- def visualize(self, img, tgt, caption=None, dpi=180, savedir="vis"):
- """
- img: tensor(3, H, W)
- tgt: make sure they are all on cpu.
- must have items: 'image_id', 'boxes', 'size'
- """
- plt.figure(dpi=dpi)
- plt.rcParams["font.size"] = "5"
- ax = plt.gca()
- img = renorm(img).permute(1, 2, 0)
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
- ax.imshow(img)
-
- self.addtgt(tgt)
-
- if tgt is None:
- image_id = 0
- elif "image_id" not in tgt:
- image_id = 0
- else:
- image_id = tgt["image_id"]
-
- if caption is None:
- savename = "{}/{}-{}.png".format(
- savedir, int(image_id), str(datetime.datetime.now()).replace(" ", "-")
- )
- else:
- savename = "{}/{}-{}-{}.png".format(
- savedir, caption, int(image_id), str(datetime.datetime.now()).replace(" ", "-")
- )
- print("savename: {}".format(savename))
- os.makedirs(os.path.dirname(savename), exist_ok=True)
- plt.savefig(savename)
- plt.close()
-
- def addtgt(self, tgt):
- """ """
- if tgt is None or not "boxes" in tgt:
- ax = plt.gca()
-
- if "caption" in tgt:
- ax.set_title(tgt["caption"], wrap=True)
-
- ax.set_axis_off()
- return
-
- ax = plt.gca()
- H, W = tgt["size"]
- numbox = tgt["boxes"].shape[0]
-
- color = []
- polygons = []
- boxes = []
- for box in tgt["boxes"].cpu():
- unnormbbox = box * torch.Tensor([W, H, W, H])
- unnormbbox[:2] -= unnormbbox[2:] / 2
- [bbox_x, bbox_y, bbox_w, bbox_h] = unnormbbox.tolist()
- boxes.append([bbox_x, bbox_y, bbox_w, bbox_h])
- poly = [
- [bbox_x, bbox_y],
- [bbox_x, bbox_y + bbox_h],
- [bbox_x + bbox_w, bbox_y + bbox_h],
- [bbox_x + bbox_w, bbox_y],
- ]
- np_poly = np.array(poly).reshape((4, 2))
- polygons.append(Polygon(np_poly))
- c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0]
- color.append(c)
-
- p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.1)
- ax.add_collection(p)
- p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2)
- ax.add_collection(p)
-
- if "strings_positive" in tgt and len(tgt["strings_positive"]) > 0:
- assert (
- len(tgt["strings_positive"]) == numbox
- ), f"{len(tgt['strings_positive'])} = {numbox}, "
- for idx, strlist in enumerate(tgt["strings_positive"]):
- cate_id = int(tgt["labels"][idx])
- _string = str(cate_id) + ":" + " ".join(strlist)
- bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx]
- # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1})
- ax.text(
- bbox_x,
- bbox_y,
- _string,
- color="black",
- bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1},
- )
-
- if "box_label" in tgt:
- assert len(tgt["box_label"]) == numbox, f"{len(tgt['box_label'])} = {numbox}, "
- for idx, bl in enumerate(tgt["box_label"]):
- _string = str(bl)
- bbox_x, bbox_y, bbox_w, bbox_h = boxes[idx]
- # ax.text(bbox_x, bbox_y, _string, color='black', bbox={'facecolor': 'yellow', 'alpha': 1.0, 'pad': 1})
- ax.text(
- bbox_x,
- bbox_y,
- _string,
- color="black",
- bbox={"facecolor": color[idx], "alpha": 0.6, "pad": 1},
- )
-
- if "caption" in tgt:
- ax.set_title(tgt["caption"], wrap=True)
- # plt.figure()
- # rainbow_text(0.0,0.0,"all unicorns poop rainbows ! ! !".split(),
- # ['red', 'orange', 'brown', 'green', 'blue', 'purple', 'black'])
-
- if "attn" in tgt:
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
- if isinstance(tgt["attn"], tuple):
- tgt["attn"] = [tgt["attn"]]
- for item in tgt["attn"]:
- attn_map, basergb = item
- attn_map = (attn_map - attn_map.min()) / (attn_map.max() - attn_map.min() + 1e-3)
- attn_map = (attn_map * 255).astype(np.uint8)
- cm = ColorMap(basergb)
- heatmap = cm(attn_map)
- ax.imshow(heatmap)
- ax.set_axis_off()
-
- def showAnns(self, anns, draw_bbox=False):
- """
- Display the specified annotations.
- :param anns (array of object): annotations to display
- :return: None
- """
- if len(anns) == 0:
- return 0
- if "segmentation" in anns[0] or "keypoints" in anns[0]:
- datasetType = "instances"
- elif "caption" in anns[0]:
- datasetType = "captions"
- else:
- raise Exception("datasetType not supported")
- if datasetType == "instances":
- ax = plt.gca()
- ax.set_autoscale_on(False)
- polygons = []
- color = []
- for ann in anns:
- c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0]
- if "segmentation" in ann:
- if type(ann["segmentation"]) == list:
- # polygon
- for seg in ann["segmentation"]:
- poly = np.array(seg).reshape((int(len(seg) / 2), 2))
- polygons.append(Polygon(poly))
- color.append(c)
- else:
- # mask
- t = self.imgs[ann["image_id"]]
- if type(ann["segmentation"]["counts"]) == list:
- rle = maskUtils.frPyObjects(
- [ann["segmentation"]], t["height"], t["width"]
- )
- else:
- rle = [ann["segmentation"]]
- m = maskUtils.decode(rle)
- img = np.ones((m.shape[0], m.shape[1], 3))
- if ann["iscrowd"] == 1:
- color_mask = np.array([2.0, 166.0, 101.0]) / 255
- if ann["iscrowd"] == 0:
- color_mask = np.random.random((1, 3)).tolist()[0]
- for i in range(3):
- img[:, :, i] = color_mask[i]
- ax.imshow(np.dstack((img, m * 0.5)))
- if "keypoints" in ann and type(ann["keypoints"]) == list:
- # turn skeleton into zero-based index
- sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1
- kp = np.array(ann["keypoints"])
- x = kp[0::3]
- y = kp[1::3]
- v = kp[2::3]
- for sk in sks:
- if np.all(v[sk] > 0):
- plt.plot(x[sk], y[sk], linewidth=3, color=c)
- plt.plot(
- x[v > 0],
- y[v > 0],
- "o",
- markersize=8,
- markerfacecolor=c,
- markeredgecolor="k",
- markeredgewidth=2,
- )
- plt.plot(
- x[v > 1],
- y[v > 1],
- "o",
- markersize=8,
- markerfacecolor=c,
- markeredgecolor=c,
- markeredgewidth=2,
- )
-
- if draw_bbox:
- [bbox_x, bbox_y, bbox_w, bbox_h] = ann["bbox"]
- poly = [
- [bbox_x, bbox_y],
- [bbox_x, bbox_y + bbox_h],
- [bbox_x + bbox_w, bbox_y + bbox_h],
- [bbox_x + bbox_w, bbox_y],
- ]
- np_poly = np.array(poly).reshape((4, 2))
- polygons.append(Polygon(np_poly))
- color.append(c)
-
- # p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4)
- # ax.add_collection(p)
- p = PatchCollection(polygons, facecolor="none", edgecolors=color, linewidths=2)
- ax.add_collection(p)
- elif datasetType == "captions":
- for ann in anns:
- print(ann["caption"])
diff --git a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py b/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py
deleted file mode 100644
index 33fbe06cfcc7e038749a8989728284d4d9c91404..0000000000000000000000000000000000000000
--- a/spaces/SumanthKarnati/SumanthKarnati-Image2Ingredients/app.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import os
-os.system('pip install --upgrade transformers')
-import nltk
-from transformers import VisionEncoderDecoderModel, AutoTokenizer, ViTImageProcessor, pipeline
-import torch
-from PIL import Image
-import streamlit as st
-from nltk.corpus import stopwords
-from io import BytesIO
-
-
-
-# os.system('pip install nltk')
-nltk.download('stopwords')
-
-# Load the pre-trained model
-model = VisionEncoderDecoderModel.from_pretrained(
- "SumanthKarnati/Image2Ingredients")
-model.eval()
-
-# Define the feature extractor
-feature_extractor = ViTImageProcessor.from_pretrained(
- 'nlpconnect/vit-gpt2-image-captioning')
-
-# Load the tokenizer
-tokenizer = AutoTokenizer.from_pretrained(
- 'nlpconnect/vit-gpt2-image-captioning')
-
-# Set up text generation pipeline
-generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
-
-# Device configuration
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
-# Transfer the model to GPU if available
-model = model.to(device)
-
-# Set prediction arguments
-max_length = 16
-num_beams = 4
-gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
-
-# Function to predict ingredients from images
-
-
-def predict_step(image_files, model, feature_extractor, tokenizer, device, gen_kwargs):
- images = []
- for image_file in image_files:
- if image_file is not None:
- # Create a BytesIO object from the UploadedFile (image_file)
- byte_stream = BytesIO(image_file.getvalue())
- image = Image.open(byte_stream)
- if image.mode != "RGB":
- image = image.convert(mode="RGB")
- images.append(image)
-
- if not images:
- return None
-
- inputs = feature_extractor(images=images, return_tensors="pt")
- inputs.to(device)
- output_ids = model.generate(inputs["pixel_values"], **gen_kwargs)
-
- preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
- preds = [pred.strip() for pred in preds]
- return preds
-
-
-# Get the list of English stop words
-stop_words = set(stopwords.words('english'))
-
-# Function to remove stop words from a list of words
-
-
-def remove_stop_words(word_list):
- return [word for word in word_list if word not in stop_words]
-
-# Streamlit app code
-
-
-def main():
- st.title("Image2Nutrients: Food Ingredient Recognition")
- st.write("Upload an image of your food to recognize the ingredients!")
-
- # File upload
- uploaded_file = st.file_uploader(
- "Choose an image", type=["jpg", "jpeg", "png"])
-
- if uploaded_file is not None:
- # Display the uploaded image
- image = Image.open(uploaded_file)
- st.image(image, caption="Uploaded Image", use_column_width=True)
-
- # Perform ingredient recognition
- preds = predict_step([uploaded_file], model,
- feature_extractor, tokenizer, device, gen_kwargs)
-
- preds = preds[0].split('-')
- # remove numbers
- preds = [x for x in preds if not any(c.isdigit() for c in x)]
- # remove empty strings
- preds = list(filter(None, preds))
- # remove duplicates
-
- preds = list(dict.fromkeys(preds))
-
- preds = remove_stop_words(preds)
-
- # Display the recognized ingredients
- st.subheader("Recognized Ingredients:")
- for ingredient in preds:
- st.write(ingredient)
-
- preds_str = ', '.join(preds)
-
- # Prepare the prompt
- prompt = f"You are a knowledgeable assistant that provides nutritional advice based on a list of ingredients. The identified ingredients are: {preds_str}. Note that some ingredients may not make sense, so use the ones that do. Can you provide a nutritional analysis and suggestions for improvement?"
-
- # Generate a sequence of text
- suggestions = generator(prompt, do_sample=True, min_length=200)
-
- # Extract the generated text
- suggestions = suggestions[0]['generated_text'][len(prompt):]
-
- st.subheader("Nutritional Analysis and Suggestions:")
- st.write(suggestions)
-
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
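
The core of predict_step above is a standard vision-encoder/decoder captioning call; a stripped-down sketch follows. It uses the base nlpconnect checkpoint rather than the fine-tuned SumanthKarnati/Image2Ingredients weights, and "food.jpg" is a placeholder path, not a file from the repository.

import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

name = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(name).eval()
processor = ViTImageProcessor.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

image = Image.open("food.jpg").convert("RGB")        # placeholder image path
pixel_values = processor(images=[image], return_tensors="pt").pixel_values
with torch.no_grad():
    ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.batch_decode(ids, skip_special_tokens=True)[0].strip())
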
diff --git a/spaces/Sumit7864/Image-Enhancer/test.py b/spaces/Sumit7864/Image-Enhancer/test.py
deleted file mode 100644
index 84901d6adba1ce76c296384c230ea594df5479ca..0000000000000000000000000000000000000000
--- a/spaces/Sumit7864/Image-Enhancer/test.py
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-print(result)
\ No newline at end of file
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py
deleted file mode 100644
index 9dbf4201e9ec54d125886978a871169d9a9f4818..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/charset_normalizer/api.py
+++ /dev/null
@@ -1,554 +0,0 @@
-import logging
-from os import PathLike
-from typing import Any, BinaryIO, List, Optional, Set
-
-from .cd import (
- coherence_ratio,
- encoding_languages,
- mb_encoding_languages,
- merge_coherence_ratios,
-)
-from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE
-from .md import mess_ratio
-from .models import CharsetMatch, CharsetMatches
-from .utils import (
- any_specified_encoding,
- cut_sequence_chunks,
- iana_name,
- identify_sig_or_bom,
- is_cp_similar,
- is_multi_byte_encoding,
- should_strip_sig_or_bom,
-)
-
-# Will most likely be controversial
-# logging.addLevelName(TRACE, "TRACE")
-logger = logging.getLogger("charset_normalizer")
-explain_handler = logging.StreamHandler()
-explain_handler.setFormatter(
- logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")
-)
-
-
-def from_bytes(
- sequences: bytes,
- steps: int = 5,
- chunk_size: int = 512,
- threshold: float = 0.2,
- cp_isolation: Optional[List[str]] = None,
- cp_exclusion: Optional[List[str]] = None,
- preemptive_behaviour: bool = True,
- explain: bool = False,
- language_threshold: float = 0.1,
-) -> CharsetMatches:
- """
- Given a raw bytes sequence, return the best possible charsets usable to render str objects.
- If there are no results, it is a strong indicator that the source is binary / not text.
- By default, the process will extract 5 blocks of 512 bytes each to assess the mess and coherence of a given sequence,
- and it will give up on a particular code page after 20% of measured mess. Those criteria are customizable at will.
-
- The preemptive behaviour DOES NOT replace the traditional detection workflow; it prioritizes a particular code page
- but never takes it for granted. It can improve performance.
-
- You may want to focus your attention on some code pages and/or exclude others; use cp_isolation and cp_exclusion
- for that purpose.
-
- This function will strip the SIG in the payload/sequence every time except for UTF-16 and UTF-32.
- By default the library does not set up any handler other than the NullHandler; if you choose to set the 'explain'
- toggle to True, it will alter the logger configuration to add a StreamHandler that is suitable for debugging.
- Custom logging format and handler can be set manually.
- """
-
- if not isinstance(sequences, (bytearray, bytes)):
- raise TypeError(
- "Expected object of type bytes or bytearray, got: {0}".format(
- type(sequences)
- )
- )
-
- if explain:
- previous_logger_level: int = logger.level
- logger.addHandler(explain_handler)
- logger.setLevel(TRACE)
-
- length: int = len(sequences)
-
- if length == 0:
- logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.")
- if explain:
- logger.removeHandler(explain_handler)
- logger.setLevel(previous_logger_level or logging.WARNING)
- return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")])
-
- if cp_isolation is not None:
- logger.log(
- TRACE,
- "cp_isolation is set. use this flag for debugging purpose. "
- "limited list of encoding allowed : %s.",
- ", ".join(cp_isolation),
- )
- cp_isolation = [iana_name(cp, False) for cp in cp_isolation]
- else:
- cp_isolation = []
-
- if cp_exclusion is not None:
- logger.log(
- TRACE,
- "cp_exclusion is set. use this flag for debugging purpose. "
- "limited list of encoding excluded : %s.",
- ", ".join(cp_exclusion),
- )
- cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion]
- else:
- cp_exclusion = []
-
- if length <= (chunk_size * steps):
- logger.log(
- TRACE,
- "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.",
- steps,
- chunk_size,
- length,
- )
- steps = 1
- chunk_size = length
-
- if steps > 1 and length / steps < chunk_size:
- chunk_size = int(length / steps)
-
- is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE
- is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE
-
- if is_too_small_sequence:
- logger.log(
- TRACE,
- "Trying to detect encoding from a tiny portion of ({}) byte(s).".format(
- length
- ),
- )
- elif is_too_large_sequence:
- logger.log(
- TRACE,
- "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format(
- length
- ),
- )
-
- prioritized_encodings: List[str] = []
-
- specified_encoding: Optional[str] = (
- any_specified_encoding(sequences) if preemptive_behaviour else None
- )
-
- if specified_encoding is not None:
- prioritized_encodings.append(specified_encoding)
- logger.log(
- TRACE,
- "Detected declarative mark in sequence. Priority +1 given for %s.",
- specified_encoding,
- )
-
- tested: Set[str] = set()
- tested_but_hard_failure: List[str] = []
- tested_but_soft_failure: List[str] = []
-
- fallback_ascii: Optional[CharsetMatch] = None
- fallback_u8: Optional[CharsetMatch] = None
- fallback_specified: Optional[CharsetMatch] = None
-
- results: CharsetMatches = CharsetMatches()
-
- sig_encoding, sig_payload = identify_sig_or_bom(sequences)
-
- if sig_encoding is not None:
- prioritized_encodings.append(sig_encoding)
- logger.log(
- TRACE,
- "Detected a SIG or BOM mark on first %i byte(s). Priority +1 given for %s.",
- len(sig_payload),
- sig_encoding,
- )
-
- prioritized_encodings.append("ascii")
-
- if "utf_8" not in prioritized_encodings:
- prioritized_encodings.append("utf_8")
-
- for encoding_iana in prioritized_encodings + IANA_SUPPORTED:
- if cp_isolation and encoding_iana not in cp_isolation:
- continue
-
- if cp_exclusion and encoding_iana in cp_exclusion:
- continue
-
- if encoding_iana in tested:
- continue
-
- tested.add(encoding_iana)
-
- decoded_payload: Optional[str] = None
- bom_or_sig_available: bool = sig_encoding == encoding_iana
- strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom(
- encoding_iana
- )
-
- if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available:
- logger.log(
- TRACE,
- "Encoding %s won't be tested as-is because it require a BOM. Will try some sub-encoder LE/BE.",
- encoding_iana,
- )
- continue
- if encoding_iana in {"utf_7"} and not bom_or_sig_available:
- logger.log(
- TRACE,
- "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.",
- encoding_iana,
- )
- continue
-
- try:
- is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana)
- except (ModuleNotFoundError, ImportError):
- logger.log(
- TRACE,
- "Encoding %s does not provide an IncrementalDecoder",
- encoding_iana,
- )
- continue
-
- try:
- if is_too_large_sequence and is_multi_byte_decoder is False:
- str(
- sequences[: int(50e4)]
- if strip_sig_or_bom is False
- else sequences[len(sig_payload) : int(50e4)],
- encoding=encoding_iana,
- )
- else:
- decoded_payload = str(
- sequences
- if strip_sig_or_bom is False
- else sequences[len(sig_payload) :],
- encoding=encoding_iana,
- )
- except (UnicodeDecodeError, LookupError) as e:
- if not isinstance(e, LookupError):
- logger.log(
- TRACE,
- "Code page %s does not fit given bytes sequence at ALL. %s",
- encoding_iana,
- str(e),
- )
- tested_but_hard_failure.append(encoding_iana)
- continue
-
- similar_soft_failure_test: bool = False
-
- for encoding_soft_failed in tested_but_soft_failure:
- if is_cp_similar(encoding_iana, encoding_soft_failed):
- similar_soft_failure_test = True
- break
-
- if similar_soft_failure_test:
- logger.log(
- TRACE,
- "%s is deemed too similar to code page %s and was consider unsuited already. Continuing!",
- encoding_iana,
- encoding_soft_failed,
- )
- continue
-
- r_ = range(
- 0 if not bom_or_sig_available else len(sig_payload),
- length,
- int(length / steps),
- )
-
- multi_byte_bonus: bool = (
- is_multi_byte_decoder
- and decoded_payload is not None
- and len(decoded_payload) < length
- )
-
- if multi_byte_bonus:
- logger.log(
- TRACE,
- "Code page %s is a multi byte encoding table and it appear that at least one character "
- "was encoded using n-bytes.",
- encoding_iana,
- )
-
- max_chunk_gave_up: int = int(len(r_) / 4)
-
- max_chunk_gave_up = max(max_chunk_gave_up, 2)
- early_stop_count: int = 0
- lazy_str_hard_failure = False
-
- md_chunks: List[str] = []
- md_ratios = []
-
- try:
- for chunk in cut_sequence_chunks(
- sequences,
- encoding_iana,
- r_,
- chunk_size,
- bom_or_sig_available,
- strip_sig_or_bom,
- sig_payload,
- is_multi_byte_decoder,
- decoded_payload,
- ):
- md_chunks.append(chunk)
-
- md_ratios.append(
- mess_ratio(
- chunk,
- threshold,
- explain is True and 1 <= len(cp_isolation) <= 2,
- )
- )
-
- if md_ratios[-1] >= threshold:
- early_stop_count += 1
-
- if (early_stop_count >= max_chunk_gave_up) or (
- bom_or_sig_available and strip_sig_or_bom is False
- ):
- break
- except (
- UnicodeDecodeError
- ) as e: # Lazy str loading may have missed something there
- logger.log(
- TRACE,
- "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s",
- encoding_iana,
- str(e),
- )
- early_stop_count = max_chunk_gave_up
- lazy_str_hard_failure = True
-
- # We might want to check the sequence again with the whole content
- # Only if initial MD tests passes
- if (
- not lazy_str_hard_failure
- and is_too_large_sequence
- and not is_multi_byte_decoder
- ):
- try:
- sequences[int(50e3) :].decode(encoding_iana, errors="strict")
- except UnicodeDecodeError as e:
- logger.log(
- TRACE,
- "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. %s",
- encoding_iana,
- str(e),
- )
- tested_but_hard_failure.append(encoding_iana)
- continue
-
- mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0
- if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up:
- tested_but_soft_failure.append(encoding_iana)
- logger.log(
- TRACE,
- "%s was excluded because of initial chaos probing. Gave up %i time(s). "
- "Computed mean chaos is %f %%.",
- encoding_iana,
- early_stop_count,
- round(mean_mess_ratio * 100, ndigits=3),
- )
- # Preparing those fallbacks in case we got nothing.
- if (
- encoding_iana in ["ascii", "utf_8", specified_encoding]
- and not lazy_str_hard_failure
- ):
- fallback_entry = CharsetMatch(
- sequences, encoding_iana, threshold, False, [], decoded_payload
- )
- if encoding_iana == specified_encoding:
- fallback_specified = fallback_entry
- elif encoding_iana == "ascii":
- fallback_ascii = fallback_entry
- else:
- fallback_u8 = fallback_entry
- continue
-
- logger.log(
- TRACE,
- "%s passed initial chaos probing. Mean measured chaos is %f %%",
- encoding_iana,
- round(mean_mess_ratio * 100, ndigits=3),
- )
-
- if not is_multi_byte_decoder:
- target_languages: List[str] = encoding_languages(encoding_iana)
- else:
- target_languages = mb_encoding_languages(encoding_iana)
-
- if target_languages:
- logger.log(
- TRACE,
- "{} should target any language(s) of {}".format(
- encoding_iana, str(target_languages)
- ),
- )
-
- cd_ratios = []
-
- # We shall skip the CD when it's about ASCII.
- # Most of the time it's not relevant to run "language-detection" on it.
- if encoding_iana != "ascii":
- for chunk in md_chunks:
- chunk_languages = coherence_ratio(
- chunk,
- language_threshold,
- ",".join(target_languages) if target_languages else None,
- )
-
- cd_ratios.append(chunk_languages)
-
- cd_ratios_merged = merge_coherence_ratios(cd_ratios)
-
- if cd_ratios_merged:
- logger.log(
- TRACE,
- "We detected language {} using {}".format(
- cd_ratios_merged, encoding_iana
- ),
- )
-
- results.append(
- CharsetMatch(
- sequences,
- encoding_iana,
- mean_mess_ratio,
- bom_or_sig_available,
- cd_ratios_merged,
- decoded_payload,
- )
- )
-
- if (
- encoding_iana in [specified_encoding, "ascii", "utf_8"]
- and mean_mess_ratio < 0.1
- ):
- logger.debug(
- "Encoding detection: %s is most likely the one.", encoding_iana
- )
- if explain:
- logger.removeHandler(explain_handler)
- logger.setLevel(previous_logger_level)
- return CharsetMatches([results[encoding_iana]])
-
- if encoding_iana == sig_encoding:
- logger.debug(
- "Encoding detection: %s is most likely the one as we detected a BOM or SIG within "
- "the beginning of the sequence.",
- encoding_iana,
- )
- if explain:
- logger.removeHandler(explain_handler)
- logger.setLevel(previous_logger_level)
- return CharsetMatches([results[encoding_iana]])
-
- if len(results) == 0:
- if fallback_u8 or fallback_ascii or fallback_specified:
- logger.log(
- TRACE,
- "Nothing got out of the detection process. Using ASCII/UTF-8/Specified fallback.",
- )
-
- if fallback_specified:
- logger.debug(
- "Encoding detection: %s will be used as a fallback match",
- fallback_specified.encoding,
- )
- results.append(fallback_specified)
- # Any available utf_8 fallback wins over the ascii one.
- elif fallback_u8 is not None:
- logger.debug("Encoding detection: utf_8 will be used as a fallback match")
- results.append(fallback_u8)
- elif fallback_ascii:
- logger.debug("Encoding detection: ascii will be used as a fallback match")
- results.append(fallback_ascii)
-
- if results:
- logger.debug(
- "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.",
- results.best().encoding, # type: ignore
- len(results) - 1,
- )
- else:
- logger.debug("Encoding detection: Unable to determine any suitable charset.")
-
- if explain:
- logger.removeHandler(explain_handler)
- logger.setLevel(previous_logger_level)
-
- return results
-
-
-def from_fp(
- fp: BinaryIO,
- steps: int = 5,
- chunk_size: int = 512,
- threshold: float = 0.20,
- cp_isolation: Optional[List[str]] = None,
- cp_exclusion: Optional[List[str]] = None,
- preemptive_behaviour: bool = True,
- explain: bool = False,
- language_threshold: float = 0.1,
-) -> CharsetMatches:
- """
- Same thing as from_bytes, but using a file pointer that is already open.
- Will not close the file pointer.
- """
- return from_bytes(
- fp.read(),
- steps,
- chunk_size,
- threshold,
- cp_isolation,
- cp_exclusion,
- preemptive_behaviour,
- explain,
- language_threshold,
- )
-
-
-def from_path(
- path: "PathLike[Any]",
- steps: int = 5,
- chunk_size: int = 512,
- threshold: float = 0.20,
- cp_isolation: Optional[List[str]] = None,
- cp_exclusion: Optional[List[str]] = None,
- preemptive_behaviour: bool = True,
- explain: bool = False,
- language_threshold: float = 0.1,
-) -> CharsetMatches:
- """
- Same thing as from_bytes, but with one extra step: opening and reading the given file path in binary mode.
- Can raise IOError.
- """
- with open(path, "rb") as fp:
- return from_fp(
- fp,
- steps,
- chunk_size,
- threshold,
- cp_isolation,
- cp_exclusion,
- preemptive_behaviour,
- explain,
- language_threshold,
- )
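
Typical usage of the module above, shown with the standalone charset_normalizer package rather than the pip-vendored copy: from_bytes returns a CharsetMatches collection ordered by plausibility, and best() gives the most likely match. The sample payload and file name are placeholders.

from charset_normalizer import from_bytes, from_path

payload = "Déjà vu, très bien".encode("cp1252")
best = from_bytes(payload).best()
if best is not None:
    print(best.encoding)      # a plausible single-byte code page, e.g. cp1252
    print(str(best))          # payload decoded with the detected encoding

# The file-path variant wraps the same workflow:
# best = from_path("some_unknown_file.txt").best()
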
diff --git a/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py b/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py
deleted file mode 100644
index be111e59a9c0f40769c871659999c100caa38561..0000000000000000000000000000000000000000
--- a/spaces/TNR-5/Stable-Diffusion-Protogen-x3.4-webui/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import os
-from subprocess import getoutput
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://raw.githubusercontent.com/darkstorm2150/webui/main/OpenGen_header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- # Please duplicate this space and remove the leading # from any custom script you want to use, or add more custom scripts here with the same structure: os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- #os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and remove the leading # from any extension you want to use, or add more extensions here with the same structure: os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- #os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- #os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and remove the leading # from any model you want to use, or add more ckpts here with the same structure: os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- # ----------------------------Protogen Models----------------------------
- #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_V2.2.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Protogen_V2.2.safetensors")
- os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X3.4.safetensors")
- #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v5.3_Official_Release/resolve/main/ProtoGen_X5.3.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X5.3.safetensors")
- #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_v5.8_Official_Release/resolve/main/ProtoGen_X5.8.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_X5.8.safetensors")
- #os.system(f"wget -q https://huggingface.co/darkstorm2150/Protogen_Dragon_Official_Release/resolve/main/ProtoGen_Dragon.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/ProtoGen_Dragon.safetensors")
- # ----------------------------Protogen Models----------------------------
- #os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
- os.system(f"python launch.py --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
\ No newline at end of file
diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py
deleted file mode 100644
index 987adafc19dea7e003f43c3035402d12b51f5e61..0000000000000000000000000000000000000000
--- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/predictors/test_predictor.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import atexit
-
-from mmcv import Config
-import os
-import pytest
-from pytorch_lightning import seed_everything
-import shutil
-import torch
-
-from risk_biased.scene_dataset.scene import load_create_dataset
-from risk_biased.predictors.biased_predictor import (
- LitTrajectoryPredictor,
- LitTrajectoryPredictorParams,
-)
-from risk_biased.utils.cost import TTCCostParams
-from risk_biased.scene_dataset.loaders import SceneDataLoaders
-
-
-def clean_up_dataset_dir():
- """
- This function is designed to delete the directories
- that might have been created, even if the test fails early,
- by being called on exit.
- """
- current_dir = os.path.dirname(os.path.realpath(__file__))
- dataset_dir0 = os.path.join(current_dir, "scene_dataset_000")
- if os.path.exists(dataset_dir0):
- shutil.rmtree(dataset_dir0)
- dataset_dir1 = os.path.join(current_dir, "scene_dataset_001")
- if os.path.exists(dataset_dir1):
- shutil.rmtree(dataset_dir1)
-
-
-atexit.register(clean_up_dataset_dir)
-
-
-@pytest.fixture(scope="module")
-def params():
- seed_everything(0)
- current_dir = os.path.dirname(os.path.realpath(__file__))
- cfg = Config()
- cfg.batch_size = 4
- cfg.time_scene = 5.0
- cfg.dt = 0.1
- cfg.sample_times = [t * cfg.dt for t in range(0, int(cfg.time_scene / cfg.dt))]
- cfg.ego_ref_speed = 14
- cfg.ego_speed_init_low = 4.0
- cfg.ego_speed_init_high = 16.0
- cfg.ego_acceleration_mean_low = -1.5
- cfg.ego_acceleration_mean_high = 1.5
- cfg.ego_acceleration_std = 1.5
- cfg.ego_length = 4
- cfg.ego_width = 1.75
- cfg.fast_speed = 2.0
- cfg.slow_speed = 1.0
- cfg.p_change_pace = 0.2
- cfg.proportion_fast = 0.5
- cfg.perception_noise_std = 0.03
- cfg.state_dim = 2
- cfg.num_steps = 3
- cfg.num_steps_future = len(cfg.sample_times) - cfg.num_steps
- cfg.file_name = "test_scene_data"
- cfg.datasets_sizes = {"train": 100, "val": 10, "test": 30}
- cfg.datasets = list(cfg.datasets_sizes.keys())
- cfg.num_workers = 2
- cfg.dataset_parameters = {
- "dt": cfg.dt,
- "time_scene": cfg.time_scene,
- "sample_times": cfg.sample_times,
- "ego_ref_speed": cfg.ego_ref_speed,
- "ego_speed_init_low": cfg.ego_speed_init_low,
- "ego_speed_init_high": cfg.ego_speed_init_high,
- "ego_acceleration_mean_low": cfg.ego_acceleration_mean_low,
- "ego_acceleration_mean_high": cfg.ego_acceleration_mean_high,
- "ego_acceleration_std": cfg.ego_acceleration_std,
- "fast_speed": cfg.fast_speed,
- "slow_speed": cfg.slow_speed,
- "p_change_pace": cfg.p_change_pace,
- "proportion_fast": cfg.proportion_fast,
- "file_name": cfg.file_name,
- "datasets_sizes": cfg.datasets_sizes,
- "state_dim": cfg.state_dim,
- "num_steps": cfg.num_steps,
- "num_steps_future": cfg.num_steps_future,
- "perception_noise_std": cfg.perception_noise_std,
- }
- [data_train, data_val, data_test] = load_create_dataset(cfg, current_dir)
- loaders = SceneDataLoaders(
- cfg.state_dim,
- cfg.num_steps,
- cfg.num_steps_future,
- cfg.batch_size,
- data_train=data_train,
- data_val=data_val,
- data_test=data_test,
- num_workers=cfg.num_workers,
- )
- return cfg, loaders
-
-
-class TestPredictor:
- @pytest.fixture(autouse=True)
- def setup(self, params):
- cfg, loaders = params
- current_dir = os.path.dirname(os.path.realpath(__file__))
- # Should create directory and datasets
- [train_set, val_set, test_set] = load_create_dataset(cfg, base_dir=current_dir)
- params = LitTrajectoryPredictorParams.from_config(cfg)
- cost_params = TTCCostParams.from_config(cfg)
- self.predictor = LitTrajectoryPredictor(
- params, cost_params, loaders.unnormalize_trajectory
- )
- assert not os.path.exists(os.path.join(current_dir, "scene_dataset_001"))
- self.batch = torch.rand(
- cfg.batch_size,
- 1,
- cfg.num_steps + cfg.num_steps_future,
- cfg.state_dim,
- )
- self.normalized_batch, self.offset = loaders.normalize_trajectory(self.batch)
- (
- self.normalized_batch_past,
- self.normalized_batch_future,
- ) = loaders.split_trajectory(self.normalized_batch)
-
- # Remove after use
- dataset_dir = os.path.join(current_dir, "scene_dataset_000")
- shutil.rmtree(dataset_dir)
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py
deleted file mode 100644
index 5ea609ccedf18eb4ab70f8fc6990448eb6407237..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/util/response.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from __future__ import absolute_import
-
-from email.errors import MultipartInvariantViolationDefect, StartBoundaryNotFoundDefect
-
-from ..exceptions import HeaderParsingError
-from ..packages.six.moves import http_client as httplib
-
-
-def is_fp_closed(obj):
- """
- Checks whether a given file-like object is closed.
-
- :param obj:
- The file-like object to check.
- """
-
- try:
- # Check `isclosed()` first, in case Python3 doesn't set `closed`.
- # GH Issue #928
- return obj.isclosed()
- except AttributeError:
- pass
-
- try:
- # Check via the official file-like-object way.
- return obj.closed
- except AttributeError:
- pass
-
- try:
- # Check if the object is a container for another file-like object that
- # gets released on exhaustion (e.g. HTTPResponse).
- return obj.fp is None
- except AttributeError:
- pass
-
- raise ValueError("Unable to determine whether fp is closed.")
-
-
-def assert_header_parsing(headers):
- """
- Asserts whether all headers have been successfully parsed.
- Extracts encountered errors from the result of parsing headers.
-
- Only works on Python 3.
-
- :param http.client.HTTPMessage headers: Headers to verify.
-
- :raises urllib3.exceptions.HeaderParsingError:
- If parsing errors are found.
- """
-
- # This will fail silently if we pass in the wrong kind of parameter.
- # To make debugging easier add an explicit check.
- if not isinstance(headers, httplib.HTTPMessage):
- raise TypeError("expected httplib.Message, got {0}.".format(type(headers)))
-
- defects = getattr(headers, "defects", None)
- get_payload = getattr(headers, "get_payload", None)
-
- unparsed_data = None
- if get_payload:
- # get_payload is actually email.message.Message.get_payload;
- # we're only interested in the result if it's not a multipart message
- if not headers.is_multipart():
- payload = get_payload()
-
- if isinstance(payload, (bytes, str)):
- unparsed_data = payload
- if defects:
- # httplib is assuming a response body is available
- # when parsing headers even when httplib only sends
- # header data to parse_headers() This results in
- # defects on multipart responses in particular.
- # See: https://github.com/urllib3/urllib3/issues/800
-
- # So we ignore the following defects:
- # - StartBoundaryNotFoundDefect:
- # The claimed start boundary was never found.
- # - MultipartInvariantViolationDefect:
- # A message claimed to be a multipart but no subparts were found.
- defects = [
- defect
- for defect in defects
- if not isinstance(
- defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect)
- )
- ]
-
- if defects or unparsed_data:
- raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
-
-
-def is_response_to_head(response):
- """
- Checks whether the request of a response has been a HEAD-request.
- Handles the quirks of AppEngine.
-
- :param http.client.HTTPResponse response:
- Response to check if the originating request
- used 'HEAD' as a method.
- """
- # FIXME: Can we do this somehow without accessing private httplib _method?
- method = response._method
- if isinstance(method, int): # Platform-specific: Appengine
- return method == 3
- return method.upper() == "HEAD"
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py
deleted file mode 100644
index ffcc2cc0e6f49414b32c17d2fca54698cf9b3d60..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/editable_wheel.py
+++ /dev/null
@@ -1,857 +0,0 @@
-"""
-Create a wheel that, when installed, will make the source package 'editable'
-(add it to the interpreter's path, including metadata) per PEP 660. Replaces
-'setup.py develop'.
-
-.. note::
- One of the mechanisms briefly mentioned in PEP 660 to implement editable installs is
- to create a separated directory inside ``build`` and use a .pth file to point to that
- directory. In the context of this file such directory is referred as
- *auxiliary build directory* or ``auxiliary_dir``.
-"""
-
-import logging
-import os
-import shutil
-import sys
-import traceback
-from contextlib import suppress
-from enum import Enum
-from inspect import cleandoc
-from itertools import chain
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- Iterator,
- List,
- Mapping,
- Optional,
- Tuple,
- TypeVar,
- Union,
-)
-
-from .. import (
- Command,
- _normalization,
- _path,
- errors,
- namespaces,
-)
-from ..discovery import find_package_path
-from ..dist import Distribution
-from ..warnings import (
- InformationOnly,
- SetuptoolsDeprecationWarning,
- SetuptoolsWarning,
-)
-from .build_py import build_py as build_py_cls
-
-if TYPE_CHECKING:
- from wheel.wheelfile import WheelFile # noqa
-
-if sys.version_info >= (3, 8):
- from typing import Protocol
-elif TYPE_CHECKING:
- from typing_extensions import Protocol
-else:
- from abc import ABC as Protocol
-
-_Path = Union[str, Path]
-_P = TypeVar("_P", bound=_Path)
-_logger = logging.getLogger(__name__)
-
-
-class _EditableMode(Enum):
- """
- Possible editable installation modes:
- `lenient` (new files automatically added to the package - DEFAULT);
- `strict` (requires a new installation when files are added/removed); or
- `compat` (attempts to emulate `python setup.py develop` - DEPRECATED).
- """
-
- STRICT = "strict"
- LENIENT = "lenient"
- COMPAT = "compat" # TODO: Remove `compat` after Dec/2022.
-
- @classmethod
- def convert(cls, mode: Optional[str]) -> "_EditableMode":
- if not mode:
- return _EditableMode.LENIENT # default
-
- _mode = mode.upper()
- if _mode not in _EditableMode.__members__:
- raise errors.OptionError(f"Invalid editable mode: {mode!r}. Try: 'strict'.")
-
- if _mode == "COMPAT":
- SetuptoolsDeprecationWarning.emit(
- "Compat editable installs",
- """
- The 'compat' editable mode is transitional and will be removed
- in future versions of `setuptools`.
- Please adapt your code accordingly to use either the 'strict' or the
- 'lenient' modes.
- """,
- see_docs="userguide/development_mode.html",
- # TODO: define due_date
- # There is a series of shortcomings with the available editable install
- # methods, and they are very controversial. This is something that still
- # needs work.
- # Moreover, `pip` is still hiding this warning, so users are not aware.
- )
-
- return _EditableMode[_mode]
-
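-# Illustrative note (typical usage, not defined in this file): the mode handled by
-# `_EditableMode.convert` is normally chosen by the install frontend, e.g.
-#
-#   pip install -e . --config-settings editable_mode=strict
-#
-# which reaches this command as ``mode="strict"``; unrecognized values raise
-# `errors.OptionError` above.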
-
-_STRICT_WARNING = """
-New or renamed files may not be automatically picked up without a new installation.
-"""
-
-_LENIENT_WARNING = """
-Options like `package-data`, `include/exclude-package-data` or
-`packages.find.exclude/include` may have no effect.
-"""
-
-
-class editable_wheel(Command):
- """Build 'editable' wheel for development.
- This command is private and reserved for internal use of setuptools,
- users should rely on ``setuptools.build_meta`` APIs.
- """
-
- description = "DO NOT CALL DIRECTLY, INTERNAL ONLY: create PEP 660 editable wheel"
-
- user_options = [
- ("dist-dir=", "d", "directory to put final built distributions in"),
- ("dist-info-dir=", "I", "path to a pre-build .dist-info directory"),
- ("mode=", None, cleandoc(_EditableMode.__doc__ or "")),
- ]
-
- def initialize_options(self):
- self.dist_dir = None
- self.dist_info_dir = None
- self.project_dir = None
- self.mode = None
-
- def finalize_options(self):
- dist = self.distribution
- self.project_dir = dist.src_root or os.curdir
- self.package_dir = dist.package_dir or {}
- self.dist_dir = Path(self.dist_dir or os.path.join(self.project_dir, "dist"))
-
- def run(self):
- try:
- self.dist_dir.mkdir(exist_ok=True)
- self._ensure_dist_info()
-
- # Add missing dist_info files
- self.reinitialize_command("bdist_wheel")
- bdist_wheel = self.get_finalized_command("bdist_wheel")
- bdist_wheel.write_wheelfile(self.dist_info_dir)
-
- self._create_wheel_file(bdist_wheel)
- except Exception:
- traceback.print_exc()
- project = self.distribution.name or self.distribution.get_name()
- _DebuggingTips.emit(project=project)
- raise
-
- def _ensure_dist_info(self):
- if self.dist_info_dir is None:
- dist_info = self.reinitialize_command("dist_info")
- dist_info.output_dir = self.dist_dir
- dist_info.ensure_finalized()
- dist_info.run()
- self.dist_info_dir = dist_info.dist_info_dir
- else:
- assert str(self.dist_info_dir).endswith(".dist-info")
- assert Path(self.dist_info_dir, "METADATA").exists()
-
- def _install_namespaces(self, installation_dir, pth_prefix):
- # XXX: Only required to support the deprecated namespace practice
- dist = self.distribution
- if not dist.namespace_packages:
- return
-
- src_root = Path(self.project_dir, self.package_dir.get("", ".")).resolve()
- installer = _NamespaceInstaller(dist, installation_dir, pth_prefix, src_root)
- installer.install_namespaces()
-
- def _find_egg_info_dir(self) -> Optional[str]:
- parent_dir = Path(self.dist_info_dir).parent if self.dist_info_dir else Path()
- candidates = map(str, parent_dir.glob("*.egg-info"))
- return next(candidates, None)
-
- def _configure_build(
- self, name: str, unpacked_wheel: _Path, build_lib: _Path, tmp_dir: _Path
- ):
- """Configure commands to behave in the following ways:
-
- - Build commands can write to ``build_lib`` if they really want to...
- (but this folder is expected to be ignored and modules are expected to live
- in the project directory...)
- - Binary extensions should be built in-place (editable_mode = True)
- - Data/header/script files are not part of the "editable" specification
- so they are written directly to the unpacked_wheel directory.
- """
- # Non-editable files (data, headers, scripts) are written directly to the
- # unpacked_wheel
-
- dist = self.distribution
- wheel = str(unpacked_wheel)
- build_lib = str(build_lib)
- data = str(Path(unpacked_wheel, f"{name}.data", "data"))
- headers = str(Path(unpacked_wheel, f"{name}.data", "headers"))
- scripts = str(Path(unpacked_wheel, f"{name}.data", "scripts"))
-
- # egg-info may be generated again to create a manifest (used for package data)
- egg_info = dist.reinitialize_command("egg_info", reinit_subcommands=True)
- egg_info.egg_base = str(tmp_dir)
- egg_info.ignore_egg_info_in_manifest = True
-
- build = dist.reinitialize_command("build", reinit_subcommands=True)
- install = dist.reinitialize_command("install", reinit_subcommands=True)
-
- build.build_platlib = build.build_purelib = build.build_lib = build_lib
- install.install_purelib = install.install_platlib = install.install_lib = wheel
- install.install_scripts = build.build_scripts = scripts
- install.install_headers = headers
- install.install_data = data
-
- install_scripts = dist.get_command_obj("install_scripts")
- install_scripts.no_ep = True
-
- build.build_temp = str(tmp_dir)
-
- build_py = dist.get_command_obj("build_py")
- build_py.compile = False
- build_py.existing_egg_info_dir = self._find_egg_info_dir()
-
- self._set_editable_mode()
-
- build.ensure_finalized()
- install.ensure_finalized()
-
- def _set_editable_mode(self):
- """Set the ``editable_mode`` flag in the build sub-commands"""
- dist = self.distribution
- build = dist.get_command_obj("build")
- for cmd_name in build.get_sub_commands():
- cmd = dist.get_command_obj(cmd_name)
- if hasattr(cmd, "editable_mode"):
- cmd.editable_mode = True
- elif hasattr(cmd, "inplace"):
- cmd.inplace = True # backward compatibility with distutils
-
- def _collect_build_outputs(self) -> Tuple[List[str], Dict[str, str]]:
- files: List[str] = []
- mapping: Dict[str, str] = {}
- build = self.get_finalized_command("build")
-
- for cmd_name in build.get_sub_commands():
- cmd = self.get_finalized_command(cmd_name)
- if hasattr(cmd, "get_outputs"):
- files.extend(cmd.get_outputs() or [])
- if hasattr(cmd, "get_output_mapping"):
- mapping.update(cmd.get_output_mapping() or {})
-
- return files, mapping
-
- def _run_build_commands(
- self, dist_name: str, unpacked_wheel: _Path, build_lib: _Path, tmp_dir: _Path
- ) -> Tuple[List[str], Dict[str, str]]:
- self._configure_build(dist_name, unpacked_wheel, build_lib, tmp_dir)
- self._run_build_subcommands()
- files, mapping = self._collect_build_outputs()
- self._run_install("headers")
- self._run_install("scripts")
- self._run_install("data")
- return files, mapping
-
- def _run_build_subcommands(self):
- """
- Issue #3501 indicates that some plugins/customizations might rely on:
-
- 1. ``build_py`` not running
- 2. ``build_py`` always copying files to ``build_lib``
-
- However both these assumptions may be false in editable_wheel.
- This method implements a temporary workaround to support the ecosystem
- while the implementations catch up.
- """
- # TODO: Once plugins/customisations had the chance to catch up, replace
- # `self._run_build_subcommands()` with `self.run_command("build")`.
- # Also remove _safely_run, TestCustomBuildPy. Suggested date: Aug/2023.
- build: Command = self.get_finalized_command("build")
- for name in build.get_sub_commands():
- cmd = self.get_finalized_command(name)
- if name == "build_py" and type(cmd) != build_py_cls:
- self._safely_run(name)
- else:
- self.run_command(name)
-
- def _safely_run(self, cmd_name: str):
- try:
- return self.run_command(cmd_name)
- except Exception:
- SetuptoolsDeprecationWarning.emit(
- "Customization incompatible with editable install",
- f"""
- {traceback.format_exc()}
-
- If you are seeing this warning it is very likely that a setuptools
- plugin or customization overrides the `{cmd_name}` command, without
- taking into consideration how editable installs run build steps
- starting from setuptools v64.0.0.
-
- Plugin authors and developers relying on custom build steps are
- encouraged to update their `{cmd_name}` implementation considering the
- information about editable installs in
- https://setuptools.pypa.io/en/latest/userguide/extension.html.
-
- For the time being `setuptools` will silence this error and ignore
- the faulty command, but this behaviour will change in future versions.
- """,
- # TODO: define due_date
- # There is a series of shortcomings with the available editable install
- # methods, and they are very controversial. This is something that still
- # needs work.
- )
-
- def _create_wheel_file(self, bdist_wheel):
- from wheel.wheelfile import WheelFile
-
- dist_info = self.get_finalized_command("dist_info")
- dist_name = dist_info.name
- tag = "-".join(bdist_wheel.get_tag())
- build_tag = "0.editable" # According to PEP 427 needs to start with digit
- archive_name = f"{dist_name}-{build_tag}-{tag}.whl"
- wheel_path = Path(self.dist_dir, archive_name)
- if wheel_path.exists():
- wheel_path.unlink()
-
- unpacked_wheel = TemporaryDirectory(suffix=archive_name)
- build_lib = TemporaryDirectory(suffix=".build-lib")
- build_tmp = TemporaryDirectory(suffix=".build-temp")
-
- with unpacked_wheel as unpacked, build_lib as lib, build_tmp as tmp:
- unpacked_dist_info = Path(unpacked, Path(self.dist_info_dir).name)
- shutil.copytree(self.dist_info_dir, unpacked_dist_info)
- self._install_namespaces(unpacked, dist_info.name)
- files, mapping = self._run_build_commands(dist_name, unpacked, lib, tmp)
- strategy = self._select_strategy(dist_name, tag, lib)
- with strategy, WheelFile(wheel_path, "w") as wheel_obj:
- strategy(wheel_obj, files, mapping)
- wheel_obj.write_files(unpacked)
-
- return wheel_path
-
- def _run_install(self, category: str):
- has_category = getattr(self.distribution, f"has_{category}", None)
- if has_category and has_category():
- _logger.info(f"Installing {category} as non editable")
- self.run_command(f"install_{category}")
-
- def _select_strategy(
- self,
- name: str,
- tag: str,
- build_lib: _Path,
- ) -> "EditableStrategy":
- """Decides which strategy to use to implement an editable installation."""
- build_name = f"__editable__.{name}-{tag}"
- project_dir = Path(self.project_dir)
- mode = _EditableMode.convert(self.mode)
-
- if mode is _EditableMode.STRICT:
- auxiliary_dir = _empty_dir(Path(self.project_dir, "build", build_name))
- return _LinkTree(self.distribution, name, auxiliary_dir, build_lib)
-
- packages = _find_packages(self.distribution)
- has_simple_layout = _simple_layout(packages, self.package_dir, project_dir)
- is_compat_mode = mode is _EditableMode.COMPAT
- if set(self.package_dir) == {""} and has_simple_layout or is_compat_mode:
- # src-layout(ish) is relatively safe for a simple pth file
- src_dir = self.package_dir.get("", ".")
- return _StaticPth(self.distribution, name, [Path(project_dir, src_dir)])
-
- # Use a MetaPathFinder to avoid adding accidental top-level packages/modules
- return _TopLevelFinder(self.distribution, name)
-
-
-class EditableStrategy(Protocol):
- def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]):
- ...
-
- def __enter__(self):
- ...
-
- def __exit__(self, _exc_type, _exc_value, _traceback):
- ...
-
-
-class _StaticPth:
- def __init__(self, dist: Distribution, name: str, path_entries: List[Path]):
- self.dist = dist
- self.name = name
- self.path_entries = path_entries
-
- def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]):
- entries = "\n".join((str(p.resolve()) for p in self.path_entries))
- contents = bytes(f"{entries}\n", "utf-8")
- wheel.writestr(f"__editable__.{self.name}.pth", contents)
-
- def __enter__(self):
- msg = f"""
- Editable install will be performed using .pth file to extend `sys.path` with:
- {list(map(os.fspath, self.path_entries))!r}
- """
- _logger.warning(msg + _LENIENT_WARNING)
- return self
-
- def __exit__(self, _exc_type, _exc_value, _traceback):
- ...
-
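-# Sketch of the artifact produced by `_StaticPth` (hypothetical paths, for
-# illustration only): for a distribution named "mypkg" whose sources live in
-# /home/user/mypkg/src, the wheel would contain a one-line file
-# `__editable__.mypkg.pth` with roughly this content:
-#
-#   /home/user/mypkg/src
-#
-# Python's `site` module appends each line of such a .pth file to `sys.path`,
-# which is what makes the editable sources importable.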
-
-class _LinkTree(_StaticPth):
- """
- Creates a ``.pth`` file that points to a link tree in the ``auxiliary_dir``.
-
- This strategy will only link files (not dirs), so it can be implemented in
- any OS, even if that means using hardlinks instead of symlinks.
-
- By collocating ``auxiliary_dir`` and the original source code, limitations
- with hardlinks should be avoided.
- """
- def __init__(
- self, dist: Distribution,
- name: str,
- auxiliary_dir: _Path,
- build_lib: _Path,
- ):
- self.auxiliary_dir = Path(auxiliary_dir)
- self.build_lib = Path(build_lib).resolve()
- self._file = dist.get_command_obj("build_py").copy_file
- super().__init__(dist, name, [self.auxiliary_dir])
-
- def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]):
- self._create_links(files, mapping)
- super().__call__(wheel, files, mapping)
-
- def _normalize_output(self, file: str) -> Optional[str]:
- # Files relative to build_lib will be normalized to None
- with suppress(ValueError):
- path = Path(file).resolve().relative_to(self.build_lib)
- return str(path).replace(os.sep, '/')
- return None
-
- def _create_file(self, relative_output: str, src_file: str, link=None):
- dest = self.auxiliary_dir / relative_output
- if not dest.parent.is_dir():
- dest.parent.mkdir(parents=True)
- self._file(src_file, dest, link=link)
-
- def _create_links(self, outputs, output_mapping):
- self.auxiliary_dir.mkdir(parents=True, exist_ok=True)
- link_type = "sym" if _can_symlink_files(self.auxiliary_dir) else "hard"
- mappings = {
- self._normalize_output(k): v
- for k, v in output_mapping.items()
- }
- mappings.pop(None, None) # remove files that are not relative to build_lib
-
- for output in outputs:
- relative = self._normalize_output(output)
- if relative and relative not in mappings:
- self._create_file(relative, output)
-
- for relative, src in mappings.items():
- self._create_file(relative, src, link=link_type)
-
- def __enter__(self):
- msg = "Strict editable install will be performed using a link tree.\n"
- _logger.warning(msg + _STRICT_WARNING)
- return self
-
- def __exit__(self, _exc_type, _exc_value, _traceback):
- msg = f"""\n
- Strict editable installation performed using the auxiliary directory:
- {self.auxiliary_dir}
-
- Please be careful to not remove this directory, otherwise you might not be able
- to import/use your package.
- """
- InformationOnly.emit("Editable installation.", msg)
-
-
-class _TopLevelFinder:
- def __init__(self, dist: Distribution, name: str):
- self.dist = dist
- self.name = name
-
- def __call__(self, wheel: "WheelFile", files: List[str], mapping: Dict[str, str]):
- src_root = self.dist.src_root or os.curdir
- top_level = chain(_find_packages(self.dist), _find_top_level_modules(self.dist))
- package_dir = self.dist.package_dir or {}
- roots = _find_package_roots(top_level, package_dir, src_root)
-
- namespaces_: Dict[str, List[str]] = dict(chain(
- _find_namespaces(self.dist.packages or [], roots),
- ((ns, []) for ns in _find_virtual_namespaces(roots)),
- ))
-
- name = f"__editable__.{self.name}.finder"
- finder = _normalization.safe_identifier(name)
- content = bytes(_finder_template(name, roots, namespaces_), "utf-8")
- wheel.writestr(f"{finder}.py", content)
-
- content = bytes(f"import {finder}; {finder}.install()", "utf-8")
- wheel.writestr(f"__editable__.{self.name}.pth", content)
-
- def __enter__(self):
- msg = "Editable install will be performed using a meta path finder.\n"
- _logger.warning(msg + _LENIENT_WARNING)
- return self
-
- def __exit__(self, _exc_type, _exc_value, _traceback):
- msg = """\n
- Please be careful with folders in your working directory with the same
- name as your package as they may take precedence during imports.
- """
- InformationOnly.emit("Editable installation.", msg)
-
-
-def _can_symlink_files(base_dir: Path) -> bool:
- with TemporaryDirectory(dir=str(base_dir.resolve())) as tmp:
- path1, path2 = Path(tmp, "file1.txt"), Path(tmp, "file2.txt")
- path1.write_text("file1", encoding="utf-8")
- with suppress(AttributeError, NotImplementedError, OSError):
- os.symlink(path1, path2)
- if path2.is_symlink() and path2.read_text(encoding="utf-8") == "file1":
- return True
-
- try:
- os.link(path1, path2) # Ensure hard links can be created
- except Exception as ex:
- msg = (
- "File system does not seem to support either symlinks or hard links. "
- "Strict editable installs require one of them to be supported."
- )
- raise LinksNotSupported(msg) from ex
- return False
-
-
-def _simple_layout(
- packages: Iterable[str], package_dir: Dict[str, str], project_dir: Path
-) -> bool:
- """Return ``True`` if:
- - all packages are contained by the same parent directory, **and**
- - all packages become importable if the parent directory is added to ``sys.path``.
-
- >>> _simple_layout(['a'], {"": "src"}, "/tmp/myproj")
- True
- >>> _simple_layout(['a', 'a.b'], {"": "src"}, "/tmp/myproj")
- True
- >>> _simple_layout(['a', 'a.b'], {}, "/tmp/myproj")
- True
- >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"": "src"}, "/tmp/myproj")
- True
- >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "a", "b": "b"}, ".")
- True
- >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "_a", "b": "_b"}, ".")
- False
- >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a": "_a"}, "/tmp/myproj")
- False
- >>> _simple_layout(['a', 'a.a1', 'a.a1.a2', 'b'], {"a.a1.a2": "_a2"}, ".")
- False
- >>> _simple_layout(['a', 'a.b'], {"": "src", "a.b": "_ab"}, "/tmp/myproj")
- False
- >>> # Special cases, no packages yet:
- >>> _simple_layout([], {"": "src"}, "/tmp/myproj")
- True
- >>> _simple_layout([], {"a": "_a", "": "src"}, "/tmp/myproj")
- False
- """
- layout = {
- pkg: find_package_path(pkg, package_dir, project_dir)
- for pkg in packages
- }
- if not layout:
- return set(package_dir) in ({}, {""})
- parent = os.path.commonpath([_parent_path(k, v) for k, v in layout.items()])
- return all(
- _path.same_path(Path(parent, *key.split('.')), value)
- for key, value in layout.items()
- )
-
-
-def _parent_path(pkg, pkg_path):
- """Infer the parent path containing a package, that if added to ``sys.path`` would
- allow importing that package.
- When ``pkg`` is directly mapped into a directory with a different name, return its
- own path.
- >>> _parent_path("a", "src/a")
- 'src'
- >>> _parent_path("b", "src/c")
- 'src/c'
- """
- parent = pkg_path[:-len(pkg)] if pkg_path.endswith(pkg) else pkg_path
- return parent.rstrip("/" + os.sep)
-
-
-def _find_packages(dist: Distribution) -> Iterator[str]:
- yield from iter(dist.packages or [])
-
- py_modules = dist.py_modules or []
- nested_modules = [mod for mod in py_modules if "." in mod]
- if dist.ext_package:
- yield dist.ext_package
- else:
- ext_modules = dist.ext_modules or []
- nested_modules += [x.name for x in ext_modules if "." in x.name]
-
- for module in nested_modules:
- package, _, _ = module.rpartition(".")
- yield package
-
-
-def _find_top_level_modules(dist: Distribution) -> Iterator[str]:
- py_modules = dist.py_modules or []
- yield from (mod for mod in py_modules if "." not in mod)
-
- if not dist.ext_package:
- ext_modules = dist.ext_modules or []
- yield from (x.name for x in ext_modules if "." not in x.name)
-
-
-def _find_package_roots(
- packages: Iterable[str],
- package_dir: Mapping[str, str],
- src_root: _Path,
-) -> Dict[str, str]:
- pkg_roots: Dict[str, str] = {
- pkg: _absolute_root(find_package_path(pkg, package_dir, src_root))
- for pkg in sorted(packages)
- }
-
- return _remove_nested(pkg_roots)
-
-
-def _absolute_root(path: _Path) -> str:
- """Works for packages and top-level modules"""
- path_ = Path(path)
- parent = path_.parent
-
- if path_.exists():
- return str(path_.resolve())
- else:
- return str(parent.resolve() / path_.name)
-
-
-def _find_virtual_namespaces(pkg_roots: Dict[str, str]) -> Iterator[str]:
- """By carefully designing ``package_dir``, it is possible to implement the logical
- structure of PEP 420 in a package without the corresponding directories.
-
- Moreover a parent package can be purposefully/accidentally skipped in the discovery
- phase (e.g. ``find_packages(include=["mypkg.*"])``, when ``mypkg.foo`` is included
- but ``mypkg`` itself is not).
- We consider this case to also be a virtual namespace (ignoring the original
- directory) to emulate a non-editable installation.
-
- This function will try to find these kinds of namespaces.
- """
- for pkg in pkg_roots:
- if "." not in pkg:
- continue
- parts = pkg.split(".")
- for i in range(len(parts) - 1, 0, -1):
- partial_name = ".".join(parts[:i])
- path = Path(find_package_path(partial_name, pkg_roots, ""))
- if not path.exists() or partial_name not in pkg_roots:
- # partial_name not in pkg_roots ==> purposefully/accidentally skipped
- yield partial_name
-
-
-def _find_namespaces(
- packages: List[str], pkg_roots: Dict[str, str]
-) -> Iterator[Tuple[str, List[str]]]:
- for pkg in packages:
- path = find_package_path(pkg, pkg_roots, "")
- if Path(path).exists() and not Path(path, "__init__.py").exists():
- yield (pkg, [path])
-
-
-def _remove_nested(pkg_roots: Dict[str, str]) -> Dict[str, str]:
- output = dict(pkg_roots.copy())
-
- for pkg, path in reversed(list(pkg_roots.items())):
- if any(
- pkg != other and _is_nested(pkg, path, other, other_path)
- for other, other_path in pkg_roots.items()
- ):
- output.pop(pkg)
-
- return output
-
-
-def _is_nested(pkg: str, pkg_path: str, parent: str, parent_path: str) -> bool:
- """
- Return ``True`` if ``pkg`` is nested inside ``parent`` both logically and in the
- file system.
- >>> _is_nested("a.b", "path/a/b", "a", "path/a")
- True
- >>> _is_nested("a.b", "path/a/b", "a", "otherpath/a")
- False
- >>> _is_nested("a.b", "path/a/b", "c", "path/c")
- False
- >>> _is_nested("a.a", "path/a/a", "a", "path/a")
- True
- >>> _is_nested("b.a", "path/b/a", "a", "path/a")
- False
- """
- norm_pkg_path = _path.normpath(pkg_path)
- rest = pkg.replace(parent, "", 1).strip(".").split(".")
- return (
- pkg.startswith(parent)
- and norm_pkg_path == _path.normpath(Path(parent_path, *rest))
- )
-
-
-def _empty_dir(dir_: _P) -> _P:
- """Create a directory ensured to be empty. Existing files may be removed."""
- shutil.rmtree(dir_, ignore_errors=True)
- os.makedirs(dir_)
- return dir_
-
-
-class _NamespaceInstaller(namespaces.Installer):
- def __init__(self, distribution, installation_dir, editable_name, src_root):
- self.distribution = distribution
- self.src_root = src_root
- self.installation_dir = installation_dir
- self.editable_name = editable_name
- self.outputs = []
- self.dry_run = False
-
- def _get_target(self):
- """Installation target."""
- return os.path.join(self.installation_dir, self.editable_name)
-
- def _get_root(self):
- """Where the modules/packages should be loaded from."""
- return repr(str(self.src_root))
-
-
-_FINDER_TEMPLATE = """\
-import sys
-from importlib.machinery import ModuleSpec
-from importlib.machinery import all_suffixes as module_suffixes
-from importlib.util import spec_from_file_location
-from itertools import chain
-from pathlib import Path
-
-MAPPING = {mapping!r}
-NAMESPACES = {namespaces!r}
-PATH_PLACEHOLDER = {name!r} + ".__path_hook__"
-
-
-class _EditableFinder: # MetaPathFinder
- @classmethod
- def find_spec(cls, fullname, path=None, target=None):
- for pkg, pkg_path in reversed(list(MAPPING.items())):
- if fullname == pkg or fullname.startswith(f"{{pkg}}."):
- rest = fullname.replace(pkg, "", 1).strip(".").split(".")
- return cls._find_spec(fullname, Path(pkg_path, *rest))
-
- return None
-
- @classmethod
- def _find_spec(cls, fullname, candidate_path):
- init = candidate_path / "__init__.py"
- candidates = (candidate_path.with_suffix(x) for x in module_suffixes())
- for candidate in chain([init], candidates):
- if candidate.exists():
- return spec_from_file_location(fullname, candidate)
-
-
-class _EditableNamespaceFinder: # PathEntryFinder
- @classmethod
- def _path_hook(cls, path):
- if path == PATH_PLACEHOLDER:
- return cls
- raise ImportError
-
- @classmethod
- def _paths(cls, fullname):
- # Ensure __path__ is not empty for the spec to be considered a namespace.
- return NAMESPACES[fullname] or MAPPING.get(fullname) or [PATH_PLACEHOLDER]
-
- @classmethod
- def find_spec(cls, fullname, target=None):
- if fullname in NAMESPACES:
- spec = ModuleSpec(fullname, None, is_package=True)
- spec.submodule_search_locations = cls._paths(fullname)
- return spec
- return None
-
- @classmethod
- def find_module(cls, fullname):
- return None
-
-
-def install():
- if not any(finder == _EditableFinder for finder in sys.meta_path):
- sys.meta_path.append(_EditableFinder)
-
- if not NAMESPACES:
- return
-
- if not any(hook == _EditableNamespaceFinder._path_hook for hook in sys.path_hooks):
- # PathEntryFinder is needed to create NamespaceSpec without private APIs
- sys.path_hooks.append(_EditableNamespaceFinder._path_hook)
- if PATH_PLACEHOLDER not in sys.path:
- sys.path.append(PATH_PLACEHOLDER) # Used just to trigger the path hook
-"""
-
-
-def _finder_template(
- name: str, mapping: Mapping[str, str], namespaces: Dict[str, List[str]]
-) -> str:
- """Create a string containing the code for the``MetaPathFinder`` and
- ``PathEntryFinder``.
- """
- mapping = dict(sorted(mapping.items(), key=lambda p: p[0]))
- return _FINDER_TEMPLATE.format(name=name, mapping=mapping, namespaces=namespaces)
-
-
-class LinksNotSupported(errors.FileError):
- """File system does not seem to support either symlinks or hard links."""
-
-
-class _DebuggingTips(SetuptoolsWarning):
- _SUMMARY = "Problem in editable installation."
- _DETAILS = """
- An error happened while installing `{project}` in editable mode.
-
- The following steps are recommended to help debug this problem:
-
- - Try to install the project normally, without using the editable mode.
- Does the error still persist?
- (If it does, try fixing the problem before attempting the editable mode).
- - If you are using binary extensions, make sure you have all OS-level
- dependencies installed (e.g. compilers, toolchains, binary libraries, ...).
- - Try the latest version of setuptools (maybe the error was already fixed).
- - If you (or your project dependencies) are using any setuptools extension
- or customization, make sure they support the editable mode.
-
- After following the steps above, if the problem still persists and
- you think this is related to how setuptools handles editable installations,
- please submit a reproducible example
- (see https://stackoverflow.com/help/minimal-reproducible-example) to:
-
- https://github.com/pypa/setuptools/issues
- """
- _SEE_DOCS = "userguide/development_mode.html"
diff --git a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py b/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py
deleted file mode 100644
index a21359ab7a6f6de693f996693348ea8859a6ff09..0000000000000000000000000000000000000000
--- a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/inference.py
+++ /dev/null
@@ -1,534 +0,0 @@
-
-from typing import Any, Union, Optional, Tuple, List, Dict
-import os
-import gc
-from functools import partial
-
-import jax
-import jax.numpy as jnp
-import numpy as np
-
-from flax.core.frozen_dict import FrozenDict
-from flax import jax_utils
-from flax.training.common_utils import shard
-from PIL import Image
-import einops
-
-from diffusers import FlaxAutoencoderKL, FlaxUNet2DConditionModel
-from diffusers import (
- FlaxDDIMScheduler,
- FlaxPNDMScheduler,
- FlaxLMSDiscreteScheduler,
- FlaxDPMSolverMultistepScheduler,
-)
-from diffusers.schedulers.scheduling_ddim_flax import DDIMSchedulerState
-from diffusers.schedulers.scheduling_pndm_flax import PNDMSchedulerState
-from diffusers.schedulers.scheduling_lms_discrete_flax import LMSDiscreteSchedulerState
-from diffusers.schedulers.scheduling_dpmsolver_multistep_flax import DPMSolverMultistepSchedulerState
-
-from transformers import FlaxCLIPTextModel, CLIPTokenizer
-
-from .flax_impl.flax_unet_pseudo3d_condition import UNetPseudo3DConditionModel
-
-SchedulerType = Union[
- FlaxDDIMScheduler,
- FlaxPNDMScheduler,
- FlaxLMSDiscreteScheduler,
- FlaxDPMSolverMultistepScheduler,
-]
-
-SchedulerStateType = Union[
- DDIMSchedulerState,
- PNDMSchedulerState,
- LMSDiscreteSchedulerState,
- DPMSolverMultistepSchedulerState,
-]
-
-SCHEDULERS: Dict[str, SchedulerType] = {
- 'dpm': FlaxDPMSolverMultistepScheduler, # husbando
- 'ddim': FlaxDDIMScheduler,
- #'PLMS': FlaxPNDMScheduler, # it's not correctly implemented in diffusers, output is bad, but at least it "works"
- #'LMS': FlaxLMSDiscreteScheduler, # borked
- # image_latents, image_scheduler_state = scheduler.step(
- # File "/mnt/work1/make_a_vid/makeavid-space/.venv/lib/python3.10/site-packages/diffusers/schedulers/scheduling_lms_discrete_flax.py", line 255, in step
- # order = min(timestep + 1, order)
- # jax._src.errors.ConcretizationTypeError: Abstract tracer value encountered where concrete value is expected: Tracedwith
- # The problem arose with the `bool` function.
- # The error occurred while tracing the function scanned_fun at /mnt/work1/make_a_vid/makeavid-space/.venv/lib/python3.10/site-packages/jax/_src/lax/control_flow/loops.py:1668 for scan. This concrete value was not available in Python because it depends on the values of the arguments loop_carry[0] and loop_carry[1][1].timesteps
-}
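-# Note (illustrative): the keys of SCHEDULERS above are the values accepted by the
-# `scheduler_type` argument of `InferenceUNetPseudo3D.generate` below, e.g.
-#
-#   pipeline.generate(prompt='a cat', num_frames=24, scheduler_type='dpm')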
-
-def dtypestr(x: jnp.dtype):
- if x == jnp.float32: return 'float32'
- elif x == jnp.float16: return 'float16'
- elif x == jnp.bfloat16: return 'bfloat16'
- else: raise ValueError(f'unsupported dtype: {x}')
-def castto(dtype, m, x):
- if dtype == jnp.float32: return m.to_fp32(x)
- elif dtype == jnp.float16: return m.to_fp16(x)
- elif dtype == jnp.bfloat16: return m.to_bf16(x)
- else: raise ValueError(f'unsupported dtype: {dtype}')
-
-class InferenceUNetPseudo3D:
- def __init__(self,
- model_path: str,
- dtype: jnp.dtype = jnp.float16,
- hf_auth_token: Union[str, None] = None
- ) -> None:
- self.dtype = dtype
- self.model_path = model_path
- self.hf_auth_token = hf_auth_token
-
- self.params: Dict[str, FrozenDict[str, Any]] = {}
- try:
- import traceback
- print('initializing unet')
- unet, unet_params = UNetPseudo3DConditionModel.from_pretrained(
- self.model_path,
- subfolder = 'unet',
- from_pt = False,
- sample_size = (64, 64),
- dtype = self.dtype,
- param_dtype = dtypestr(self.dtype),
- use_memory_efficient_attention = True,
- use_auth_token = self.hf_auth_token
- )
- self.unet: UNetPseudo3DConditionModel = unet
- print('casting unet params')
- unet_params = castto(self.dtype, self.unet, unet_params)
- print('storing unet params')
- self.params['unet'] = FrozenDict(unet_params)
- print('deleting unet params')
- del unet_params
- except Exception as e:
- print(e)
- self.failed = ''.join(traceback.format_exception(None, e, e.__traceback__))
- traceback.print_exc()
- return
- self.failed = False
- vae, vae_params = FlaxAutoencoderKL.from_pretrained(
- self.model_path,
- subfolder = 'vae',
- from_pt = True,
- dtype = self.dtype,
- use_auth_token = self.hf_auth_token
- )
- self.vae: FlaxAutoencoderKL = vae
- vae_params = castto(self.dtype, self.vae, vae_params)
- self.params['vae'] = FrozenDict(vae_params)
- del vae_params
- text_encoder = FlaxCLIPTextModel.from_pretrained(
- self.model_path,
- subfolder = 'text_encoder',
- from_pt = True,
- dtype = self.dtype,
- use_auth_token = self.hf_auth_token
- )
- text_encoder_params = text_encoder.params
- del text_encoder._params
- text_encoder_params = castto(self.dtype, text_encoder, text_encoder_params)
- self.text_encoder: FlaxCLIPTextModel = text_encoder
- self.params['text_encoder'] = FrozenDict(text_encoder_params)
- del text_encoder_params
- imunet, imunet_params = FlaxUNet2DConditionModel.from_pretrained(
- 'runwayml/stable-diffusion-v1-5',
- subfolder = 'unet',
- from_pt = True,
- dtype = self.dtype,
- use_memory_efficient_attention = True,
- use_auth_token = self.hf_auth_token
- )
- imunet_params = castto(self.dtype, imunet, imunet_params)
- self.imunet: FlaxUNet2DConditionModel = imunet
- self.params['imunet'] = FrozenDict(imunet_params)
- del imunet_params
- self.tokenizer: CLIPTokenizer = CLIPTokenizer.from_pretrained(
- self.model_path,
- subfolder = 'tokenizer',
- use_auth_token = self.hf_auth_token
- )
- self.schedulers: Dict[str, Dict[str, SchedulerType]] = {}
- for scheduler_name in SCHEDULERS:
- if scheduler_name not in ['KarrasVe', 'SDEVe']:
- scheduler, scheduler_state = SCHEDULERS[scheduler_name].from_pretrained(
- self.model_path,
- subfolder = 'scheduler',
- dtype = jnp.float32,
- use_auth_token = self.hf_auth_token
- )
- else:
- scheduler, scheduler_state = SCHEDULERS[scheduler_name].from_pretrained(
- self.model_path,
- subfolder = 'scheduler',
- use_auth_token = self.hf_auth_token
- )
- self.schedulers[scheduler_name] = scheduler
- self.params[scheduler_name] = scheduler_state
- self.vae_scale_factor: int = int(2 ** (len(self.vae.config.block_out_channels) - 1))
- self.device_count = jax.device_count()
- gc.collect()
-
- def prepare_inputs(self,
- prompt: List[str],
- neg_prompt: List[str],
- hint_image: List[Image.Image],
- mask_image: List[Image.Image],
- width: int,
- height: int
- ) -> Tuple[jnp.ndarray, jnp.ndarray, jnp.ndarray, jnp.ndarray]: # prompt, neg_prompt, hint_image, mask_image
- tokens = self.tokenizer(
- prompt,
- truncation = True,
- return_overflowing_tokens = False,
- max_length = 77, # self.text_encoder.config.max_length defaults to 20 if it's not set in the config
- padding = 'max_length',
- return_tensors = 'np'
- ).input_ids
- tokens = jnp.array(tokens, dtype = jnp.int32)
- neg_tokens = self.tokenizer(
- neg_prompt,
- truncation = True,
- return_overflowing_tokens = False,
- max_length = 77,
- padding = 'max_length',
- return_tensors = 'np'
- ).input_ids
- neg_tokens = jnp.array(neg_tokens, dtype = jnp.int32)
- for i,im in enumerate(hint_image):
- if im.size != (width, height):
- hint_image[i] = hint_image[i].resize((width, height), resample = Image.Resampling.LANCZOS)
- for i,im in enumerate(mask_image):
- if im.size != (width, height):
- mask_image[i] = mask_image[i].resize((width, height), resample = Image.Resampling.LANCZOS)
- # b,h,w,c | c == 3
- hint = jnp.concatenate(
- [ jnp.expand_dims(np.asarray(x.convert('RGB')), axis = 0) for x in hint_image ],
- axis = 0
- ).astype(jnp.float32)
- # scale -1,1
- hint = (hint / 255) * 2 - 1
- # b,h,w,c | c == 1
- mask = jnp.concatenate(
- [ jnp.expand_dims(np.asarray(x.convert('L')), axis = (0, -1)) for x in mask_image ],
- axis = 0
- ).astype(jnp.float32)
- # scale -1,1
- mask = (mask / 255) * 2 - 1
- # binarize mask
- mask = mask.at[mask < 0.5].set(0)
- mask = mask.at[mask >= 0.5].set(1)
- # mask
- hint = hint * (mask < 0.5)
- # b,h,w,c -> b,c,h,w
- hint = hint.transpose((0,3,1,2))
- mask = mask.transpose((0,3,1,2))
- return tokens, neg_tokens, hint, mask
-
- def generate(self,
- prompt: Union[str, List[str]] = '',
- inference_steps: int = 20,
- hint_image: Union[Image.Image, List[Image.Image], None] = None,
- mask_image: Union[Image.Image, List[Image.Image], None] = None,
- neg_prompt: Union[str, List[str]] = '',
- cfg: float = 15.0,
- cfg_image: Optional[float] = None,
- num_frames: int = 24,
- width: int = 512,
- height: int = 512,
- seed: int = 0,
- scheduler_type: str = 'dpm'
- ) -> List[List[Image.Image]]:
- assert inference_steps > 0, f'number of inference steps must be > 0 but is {inference_steps}'
- assert num_frames > 0, f'number of frames must be > 0 but is {num_frames}'
- assert width % 32 == 0, f'width must be divisible by 32 but is {width}'
- assert height % 32 == 0, f'height must be divisible by 32 but is {height}'
- if isinstance(prompt, str):
- prompt = [ prompt ]
- batch_size = len(prompt)
- assert batch_size % self.device_count == 0, f'batch size must be multiple of {self.device_count}'
- if hint_image is None:
- hint_image = Image.new('RGB', (width, height), color = (0,0,0))
- use_imagegen = True
- else:
- use_imagegen = False
- if isinstance(hint_image, Image.Image):
- hint_image = [ hint_image ] * batch_size
- assert len(hint_image) == batch_size, f'number of hint images must be equal to batch size {batch_size} but is {len(hint_image)}'
- if mask_image is None:
- mask_image = Image.new('L', hint_image[0].size, color = 0)
- if isinstance(mask_image, Image.Image):
- mask_image = [ mask_image ] * batch_size
- assert len(mask_image) == batch_size, f'number of mask images must be equal to batch size {batch_size} but is {len(mask_image)}'
- if isinstance(neg_prompt, str):
- neg_prompt = [ neg_prompt ] * batch_size
- assert len(neg_prompt) == batch_size, f'number of negative prompts must be equal to batch size {batch_size} but is {len(neg_prompt)}'
- assert scheduler_type in SCHEDULERS, f'unknown type of noise scheduler: {scheduler_type}, must be one of {list(SCHEDULERS.keys())}'
- tokens, neg_tokens, hint, mask = self.prepare_inputs(
- prompt = prompt,
- neg_prompt = neg_prompt,
- hint_image = hint_image,
- mask_image = mask_image,
- width = width,
- height = height
- )
- if cfg_image is None:
- cfg_image = cfg
- #params['scheduler'] = scheduler_state
- # NOTE splitting rngs is not deterministic,
- # running on different device counts gives different seeds
- #rng = jax.random.PRNGKey(seed)
- #rngs = jax.random.split(rng, self.device_count)
- # manually assign seeded RNGs to devices for reproducibility
- rngs = jnp.array([ jax.random.PRNGKey(seed + i) for i in range(self.device_count) ])
- params = jax_utils.replicate(self.params)
- tokens = shard(tokens)
- neg_tokens = shard(neg_tokens)
- hint = shard(hint)
- mask = shard(mask)
- images = _p_generate(self,
- tokens,
- neg_tokens,
- hint,
- mask,
- inference_steps,
- num_frames,
- height,
- width,
- cfg,
- cfg_image,
- rngs,
- params,
- use_imagegen,
- scheduler_type,
- )
- if images.ndim == 5:
- images = einops.rearrange(images, 'd f c h w -> (d f) h w c')
- else:
- images = einops.rearrange(images, 'f c h w -> f h w c')
- # to cpu
- images = np.array(images)
- images = [ Image.fromarray(x) for x in images ]
- return images
-
- def _generate(self,
- tokens: jnp.ndarray,
- neg_tokens: jnp.ndarray,
- hint: jnp.ndarray,
- mask: jnp.ndarray,
- inference_steps: int,
- num_frames,
- height,
- width,
- cfg: float,
- cfg_image: float,
- rng: jax.random.KeyArray,
- params: Union[Dict[str, Any], FrozenDict[str, Any]],
- use_imagegen: bool,
- scheduler_type: str
- ) -> List[Image.Image]:
- batch_size = tokens.shape[0]
- latent_h = height // self.vae_scale_factor
- latent_w = width // self.vae_scale_factor
- latent_shape = (
- batch_size,
- self.vae.config.latent_channels,
- num_frames,
- latent_h,
- latent_w
- )
- encoded_prompt = self.text_encoder(tokens, params = params['text_encoder'])[0]
- encoded_neg_prompt = self.text_encoder(neg_tokens, params = params['text_encoder'])[0]
-
- scheduler = self.schedulers[scheduler_type]
- scheduler_state = params[scheduler_type]
-
- if use_imagegen:
- image_latent_shape = (batch_size, self.vae.config.latent_channels, latent_h, latent_w)
- image_latents = jax.random.normal(
- rng,
- shape = image_latent_shape,
- dtype = jnp.float32
- ) * scheduler_state.init_noise_sigma
- image_scheduler_state = scheduler.set_timesteps(
- scheduler_state,
- num_inference_steps = inference_steps,
- shape = image_latents.shape
- )
- def image_sample_loop(step, args):
- image_latents, image_scheduler_state = args
- t = image_scheduler_state.timesteps[step]
- tt = jnp.broadcast_to(t, image_latents.shape[0])
- latents_input = scheduler.scale_model_input(image_scheduler_state, image_latents, t)
- noise_pred = self.imunet.apply(
- { 'params': params['imunet']} ,
- latents_input,
- tt,
- encoder_hidden_states = encoded_prompt
- ).sample
- noise_pred_uncond = self.imunet.apply(
- { 'params': params['imunet'] },
- latents_input,
- tt,
- encoder_hidden_states = encoded_neg_prompt
- ).sample
- noise_pred = noise_pred_uncond + cfg_image * (noise_pred - noise_pred_uncond)
- image_latents, image_scheduler_state = scheduler.step(
- image_scheduler_state,
- noise_pred.astype(jnp.float32),
- t,
- image_latents
- ).to_tuple()
- return image_latents, image_scheduler_state
- image_latents, _ = jax.lax.fori_loop(
- 0, inference_steps,
- image_sample_loop,
- (image_latents, image_scheduler_state)
- )
- hint = image_latents
- else:
- hint = self.vae.apply(
- { 'params': params['vae'] },
- hint,
- method = self.vae.encode
- ).latent_dist.mean * self.vae.config.scaling_factor
- # NOTE vae keeps channels last for encode, but rearranges to channels first for decode
- # b0 h1 w2 c3 -> b0 c3 h1 w2
- hint = hint.transpose((0, 3, 1, 2))
-
- hint = jnp.expand_dims(hint, axis = 2).repeat(num_frames, axis = 2)
- mask = jax.image.resize(mask, (*mask.shape[:-2], *hint.shape[-2:]), method = 'nearest')
- mask = jnp.expand_dims(mask, axis = 2).repeat(num_frames, axis = 2)
- # NOTE jax normal distribution is unreliable with float16 + bfloat16
- # SEE https://github.com/google/jax/discussions/13798
- # generate random at float32
- latents = jax.random.normal(
- rng,
- shape = latent_shape,
- dtype = jnp.float32
- ) * scheduler_state.init_noise_sigma
- scheduler_state = scheduler.set_timesteps(
- scheduler_state,
- num_inference_steps = inference_steps,
- shape = latents.shape
- )
-
- def sample_loop(step, args):
- latents, scheduler_state = args
- t = scheduler_state.timesteps[step] # jnp.array(scheduler_state.timesteps, dtype = jnp.int32)[step]
- tt = jnp.broadcast_to(t, latents.shape[0])
- latents_input = scheduler.scale_model_input(scheduler_state, latents, t)
- latents_input = jnp.concatenate([latents_input, mask, hint], axis = 1)
- noise_pred = self.unet.apply(
- { 'params': params['unet'] },
- latents_input,
- tt,
- encoded_prompt
- ).sample
- noise_pred_uncond = self.unet.apply(
- { 'params': params['unet'] },
- latents_input,
- tt,
- encoded_neg_prompt
- ).sample
- noise_pred = noise_pred_uncond + cfg * (noise_pred - noise_pred_uncond)
- latents, scheduler_state = scheduler.step(
- scheduler_state,
- noise_pred.astype(jnp.float32),
- t,
- latents
- ).to_tuple()
- return latents, scheduler_state
-
- latents, _ = jax.lax.fori_loop(
- 0, inference_steps,
- sample_loop,
- (latents, scheduler_state)
- )
- latents = 1 / self.vae.config.scaling_factor * latents
- latents = einops.rearrange(latents, 'b c f h w -> (b f) c h w')
- num_images = len(latents)
- images_out = jnp.zeros(
- (
- num_images,
- self.vae.config.out_channels,
- height,
- width
- ),
- dtype = self.dtype
- )
- def decode_loop(step, images_out):
- # NOTE vae keeps channels last for encode, but rearranges to channels first for decode
- im = self.vae.apply(
- { 'params': params['vae'] },
- jnp.expand_dims(latents[step], axis = 0),
- method = self.vae.decode
- ).sample
- images_out = images_out.at[step].set(im[0])
- return images_out
- images_out = jax.lax.fori_loop(0, num_images, decode_loop, images_out)
- images_out = ((images_out / 2 + 0.5) * 255).round().clip(0, 255).astype(jnp.uint8)
- return images_out
-
-
-@partial(
- jax.pmap,
- in_axes = ( # 0 -> split across batch dim, None -> duplicate
- None, # 0 inference_class
- 0, # 1 tokens
- 0, # 2 neg_tokens
- 0, # 3 hint
- 0, # 4 mask
- None, # 5 inference_steps
- None, # 6 num_frames
- None, # 7 height
- None, # 8 width
- None, # 9 cfg
- None, # 10 cfg_image
- 0, # 11 rng
- 0, # 12 params
- None, # 13 use_imagegen
- None, # 14 scheduler_type
- ),
- static_broadcasted_argnums = ( # trigger recompilation on change
- 0, # inference_class
- 5, # inference_steps
- 6, # num_frames
- 7, # height
- 8, # width
- 13, # use_imagegen
- 14, # scheduler_type
- )
-)
-def _p_generate(
- inference_class: InferenceUNetPseudo3D,
- tokens,
- neg_tokens,
- hint,
- mask,
- inference_steps: int,
- num_frames: int,
- height: int,
- width: int,
- cfg: float,
- cfg_image: float,
- rng,
- params,
- use_imagegen: bool,
- scheduler_type: str
-):
- return inference_class._generate(
- tokens,
- neg_tokens,
- hint,
- mask,
- inference_steps,
- num_frames,
- height,
- width,
- cfg,
- cfg_image,
- rng,
- params,
- use_imagegen,
- scheduler_type
- )
-
diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py b/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py
deleted file mode 100644
index 6c188ea2ab5fac232554d4eaaf2fb073670a70e4..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/grit/predictor.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Jialian Wu from https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/visualizer.py
-import torch
-
-from detectron2.engine.defaults import DefaultPredictor
-from detectron2.utils.visualizer import ColorMode, Visualizer
-
-
-class Visualizer_GRiT(Visualizer):
- def __init__(self, image, instance_mode=None):
- super().__init__(image, instance_mode=instance_mode)
-
- def draw_instance_predictions(self, predictions):
- boxes = predictions.pred_boxes if predictions.has("pred_boxes") else None
- scores = predictions.scores if predictions.has("scores") else None
- classes = predictions.pred_classes.tolist() if predictions.has("pred_classes") else None
- object_description = predictions.pred_object_descriptions.data
- # uncomment to output scores in visualized images
- # object_description = [c + '|' + str(round(s.item(), 1)) for c, s in zip(object_description, scores)]
-
- if self._instance_mode == ColorMode.SEGMENTATION and self.metadata.get("thing_colors"):
- colors = [
- self._jitter([x / 255 for x in self.metadata.thing_colors[c]]) for c in classes
- ]
- alpha = 0.8
- else:
- colors = None
- alpha = 0.5
-
- if self._instance_mode == ColorMode.IMAGE_BW:
- self.output.reset_image(
- self._create_grayscale_image(
- (predictions.pred_masks.any(dim=0) > 0).numpy()
- if predictions.has("pred_masks")
- else None
- )
- )
- alpha = 0.3
-
- self.overlay_instances(
- masks=None,
- boxes=boxes,
- labels=object_description,
- keypoints=None,
- assigned_colors=colors,
- alpha=alpha,
- )
- return self.output
-
-
-class VisualizationDemo(object):
- def __init__(self, cfg, instance_mode=ColorMode.IMAGE):
- self.cpu_device = torch.device("cpu")
- self.instance_mode = instance_mode
-
- self.predictor = DefaultPredictor(cfg)
-
- def run_on_image(self, image):
- predictions = self.predictor(image)
- # Convert image from OpenCV BGR format to Matplotlib RGB format.
- image = image[:, :, ::-1]
- visualizer = Visualizer_GRiT(image, instance_mode=self.instance_mode)
- instances = predictions["instances"].to(self.cpu_device)
- vis_output = visualizer.draw_instance_predictions(predictions=instances)
-
- return predictions, vis_output
\ No newline at end of file
diff --git a/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py b/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py
deleted file mode 100644
index aed873a153c770dd96596a39dc03be35aede5c74..0000000000000000000000000000000000000000
--- a/spaces/Thanarit/GPT-Detection-Demo/ModelDriver.py
+++ /dev/null
@@ -1,105 +0,0 @@
-from transformers import RobertaTokenizer, RobertaForSequenceClassification, RobertaModel
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import TensorDataset, DataLoader
-
-
-device = torch.device("cpu")
-class MLP(nn.Module):
- def __init__(self, input_dim):
- super(MLP, self).__init__()
- self.fc1 = nn.Linear(input_dim, 256)
- self.fc2 = nn.Linear(256, 2)
- self.gelu = nn.GELU()
-
- def forward(self, x):
- x = self.gelu(self.fc1(x))
- x = self.fc2(x)
- return x
-def extract_features(text):
-
- tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
- model = RobertaModel.from_pretrained("roberta-base").to(device)
- tokenized_text = tokenizer.encode(text, truncation=True, max_length=512, return_tensors="pt")
- outputs = model(tokenized_text)
- last_hidden_states = outputs.last_hidden_state
- TClassification = last_hidden_states[:, 0, :].squeeze().detach().numpy()
- return TClassification
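-
-# Note (illustrative): extract_features returns the 768-dim [CLS] embedding from
-# roberta-base as a NumPy array, e.g.
-#
-#   feats = extract_features("Sample text")
-#   feats.shape  # (768,)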
-
-def RobertaSentinelOpenGPTInference(input_text):
- features = extract_features(input_text)
- loaded_model = MLP(768).to(device)
- loaded_model.load_state_dict(torch.load("SentinelCheckpoint/RobertaSentinelOpenGPT.pth", map_location=device))
-
- # Run the classifier head on the extracted features
- with torch.no_grad():
- inputs = torch.tensor(features).to(device)
- outputs = loaded_model(inputs.float())
- _, predicted = torch.max(outputs, 0)
-
- Probs = (F.softmax(outputs, dim=0).cpu().numpy())
-
- return Probs
-
-def RobertaSentinelCSAbstractInference(input_text):
- features = extract_features(input_text)
- loaded_model = MLP(768).to(device)
- loaded_model.load_state_dict(torch.load("SentinelCheckpoint/RobertaSentinelCSAbstract.pth", map_location=device))
-
- # Run the classifier head on the extracted features
- with torch.no_grad():
- inputs = torch.tensor(features).to(device)
- outputs = loaded_model(inputs.float())
- _, predicted = torch.max(outputs, 0)
-
- Probs = (F.softmax(outputs, dim=0).cpu().numpy())
-
- return Probs
-
-
-def RobertaClassifierOpenGPTInference(input_text):
- tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
- model_path = "ClassifierCheckpoint/RobertaClassifierOpenGPT.pth"
- model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2)
- model.load_state_dict(torch.load(model_path, map_location=device), strict=False)
- model = model.to(device)
- model.eval()
-
-
- tokenized_input = tokenizer(input_text, truncation=True, padding=True, max_length=512, return_tensors='pt')
- input_ids = tokenized_input['input_ids'].to(device)
- attention_mask = tokenized_input['attention_mask'].to(device)
-
- # Make a prediction
- with torch.no_grad():
- outputs = model(input_ids, attention_mask=attention_mask)
- logits = outputs.logits
- Probs = F.softmax(logits, dim=1).cpu().numpy()[0]
-
- return Probs
-
-
-def RobertaClassifierCSAbstractInference(input_text):
- tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
- model_path = "ClassifierCheckpoint/RobertaClassifierCSAbstract.pth"
- model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2)
- model.load_state_dict(torch.load(model_path, map_location=device), strict=False)
- model = model.to(device)
- model.eval()
-
-
- tokenized_input = tokenizer(input_text, truncation=True, padding=True, max_length=512, return_tensors='pt')
- input_ids = tokenized_input['input_ids'].to(device)
- attention_mask = tokenized_input['attention_mask'].to(device)
-
- # Make a prediction
- with torch.no_grad():
- outputs = model(input_ids, attention_mask=attention_mask)
- logits = outputs.logits
- Probs = F.softmax(logits, dim=1).cpu().numpy()[0]
-
- return Probs
-
-
-
diff --git a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py b/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py
deleted file mode 100644
index 7c3cf7a388b4ac81004524e64125e383bdd455bd..0000000000000000000000000000000000000000
--- a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/body.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import cv2
-import numpy as np
-import math
-import time
-from scipy.ndimage.filters import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
-import torch
-from torchvision import transforms
-
-from . import util
-from .model import bodypose_model
-
-class Body(object):
- def __init__(self, model_path):
- self.model = bodypose_model()
- if torch.cuda.is_available():
- self.model = self.model.cuda()
- print('cuda')
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- # scale_search = [0.5, 1.0, 1.5, 2.0]
- scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre1 = 0.1
- thre2 = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
- paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
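- # run the network at every search scale and average the resulting heatmaps and part affinity fields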
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
- Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy()
- Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy()
-
- # extract outputs, resize, and remove padding
- # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps
- heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps
- heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs
- paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs
- paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- heatmap_avg = heatmap_avg + heatmap / len(multiplier)
- paf_avg = paf_avg + paf / len(multiplier)
-
- all_peaks = []
- peak_counter = 0
-
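- # non-maximum suppression: smooth each of the 18 keypoint heatmaps and keep local maxima above thre1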
- for part in range(18):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
-
- map_left = np.zeros(one_heatmap.shape)
- map_left[1:, :] = one_heatmap[:-1, :]
- map_right = np.zeros(one_heatmap.shape)
- map_right[:-1, :] = one_heatmap[1:, :]
- map_up = np.zeros(one_heatmap.shape)
- map_up[:, 1:] = one_heatmap[:, :-1]
- map_down = np.zeros(one_heatmap.shape)
- map_down[:, :-1] = one_heatmap[:, 1:]
-
- peaks_binary = np.logical_and.reduce(
- (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1))
- peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse
- peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
- peak_id = range(peak_counter, peak_counter + len(peaks))
- peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))]
-
- all_peaks.append(peaks_with_score_and_id)
- peak_counter += len(peaks)
-
- # find connection in the specified sequence, center 29 is in the position 15
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
- # the middle joints heatmap correspondence
- mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
- [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
- [55, 56], [37, 38], [45, 46]]
-
- connection_all = []
- special_k = []
- mid_num = 10
-
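- # score each candidate limb by sampling the part affinity field along the segment between the two candidate keypoints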
- for k in range(len(mapIdx)):
- score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
- candA = all_peaks[limbSeq[k][0] - 1]
- candB = all_peaks[limbSeq[k][1] - 1]
- nA = len(candA)
- nB = len(candB)
- indexA, indexB = limbSeq[k]
- if (nA != 0 and nB != 0):
- connection_candidate = []
- for i in range(nA):
- for j in range(nB):
- vec = np.subtract(candB[j][:2], candA[i][:2])
- norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
- norm = max(0.001, norm)
- vec = np.divide(vec, norm)
-
- startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \
- np.linspace(candA[i][1], candB[j][1], num=mid_num)))
-
- vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \
- for I in range(len(startend))])
- vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \
- for I in range(len(startend))])
-
- score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
- score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
- 0.5 * oriImg.shape[0] / norm - 1, 0)
- criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts)
- criterion2 = score_with_dist_prior > 0
- if criterion1 and criterion2:
- connection_candidate.append(
- [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]])
-
- connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
- connection = np.zeros((0, 5))
- for c in range(len(connection_candidate)):
- i, j, s = connection_candidate[c][0:3]
- if (i not in connection[:, 3] and j not in connection[:, 4]):
- connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
- if (len(connection) >= min(nA, nB)):
- break
-
- connection_all.append(connection)
- else:
- special_k.append(k)
- connection_all.append([])
-
- # the last number in each row is the total number of parts for that person
- # the second-to-last number in each row is the score of the overall configuration
- subset = -1 * np.ones((0, 20))
- candidate = np.array([item for sublist in all_peaks for item in sublist])
-
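- # greedily merge the scored connections into per-person keypoint subsets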
- for k in range(len(mapIdx)):
- if k not in special_k:
- partAs = connection_all[k][:, 0]
- partBs = connection_all[k][:, 1]
- indexA, indexB = np.array(limbSeq[k]) - 1
-
- for i in range(len(connection_all[k])): # = 1:size(temp,1)
- found = 0
- subset_idx = [-1, -1]
- for j in range(len(subset)): # 1:size(subset,1):
- if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
- subset_idx[found] = j
- found += 1
-
- if found == 1:
- j = subset_idx[0]
- if subset[j][indexB] != partBs[i]:
- subset[j][indexB] = partBs[i]
- subset[j][-1] += 1
- subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
- elif found == 2: # if found 2 and disjoint, merge them
- j1, j2 = subset_idx
- membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
- if len(np.nonzero(membership == 2)[0]) == 0: # merge
- subset[j1][:-2] += (subset[j2][:-2] + 1)
- subset[j1][-2:] += subset[j2][-2:]
- subset[j1][-2] += connection_all[k][i][2]
- subset = np.delete(subset, j2, 0)
- else: # as like found == 1
- subset[j1][indexB] = partBs[i]
- subset[j1][-1] += 1
- subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
-
- # if find no partA in the subset, create a new subset
- elif not found and k < 17:
- row = -1 * np.ones(20)
- row[indexA] = partAs[i]
- row[indexB] = partBs[i]
- row[-1] = 2
- row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2]
- subset = np.vstack([subset, row])
- # delete rows of subset that have too few parts or a low average score
- deleteIdx = []
- for i in range(len(subset)):
- if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
- deleteIdx.append(i)
- subset = np.delete(subset, deleteIdx, axis=0)
-
- # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts
- # candidate: x, y, score, id
- return candidate, subset
-
-if __name__ == "__main__":
- body_estimation = Body('../model/body_pose_model.pth')
-
- test_image = '../images/ski.jpg'
- oriImg = cv2.imread(test_image) # B,G,R order
- candidate, subset = body_estimation(oriImg)
- canvas = util.draw_bodypose(oriImg, candidate, subset)
- plt.imshow(canvas[:, :, [2, 1, 0]])
- plt.show()
diff --git a/spaces/UmairMirza/Face-Attendance/README.md b/spaces/UmairMirza/Face-Attendance/README.md
deleted file mode 100644
index d188f38b3b755592a899c80b9d2231e68e847a63..0000000000000000000000000000000000000000
--- a/spaces/UmairMirza/Face-Attendance/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Face Attendance
-emoji: 📈
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py b/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py
deleted file mode 100644
index 590cb5db9213b22d00ce0e650a3e632725213a67..0000000000000000000000000000000000000000
--- a/spaces/VincentZB/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_hed.py
+++ /dev/null
@@ -1,223 +0,0 @@
-import gradio as gr
-import numpy as np
-import torch
-from controlnet_aux import HEDdetector
-from diffusers import ControlNetModel
-from PIL import Image
-
-from diffusion_webui.diffusion_models.controlnet.controlnet_inpaint.pipeline_stable_diffusion_controlnet_inpaint import (
- StableDiffusionControlNetInpaintPipeline,
-)
-from diffusion_webui.utils.model_list import (
- controlnet_hed_model_list,
- stable_inpiant_model_list,
-)
-from diffusion_webui.utils.scheduler_list import (
- SCHEDULER_LIST,
- get_scheduler_list,
-)
-
-# https://github.com/mikonvergence/ControlNetInpaint
-
-
-class StableDiffusionControlNetInpaintHedGenerator:
- def __init__(self):
- self.pipe = None
-
- def load_model(self, stable_model_path, controlnet_model_path, scheduler):
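- # build the ControlNet inpaint pipeline only on the first call; scheduler and device setup are (re)applied every call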
- if self.pipe is None:
- controlnet = ControlNetModel.from_pretrained(
- controlnet_model_path, torch_dtype=torch.float16
- )
- self.pipe = (
- StableDiffusionControlNetInpaintPipeline.from_pretrained(
- pretrained_model_name_or_path=stable_model_path,
- controlnet=controlnet,
- safety_checker=None,
- torch_dtype=torch.float16,
- )
- )
-
- self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler)
- self.pipe.to("cuda")
- self.pipe.enable_xformers_memory_efficient_attention()
-
- return self.pipe
-
- def load_image(self, image_path):
- image = np.array(image_path)
- image = Image.fromarray(image)
- return image
-
- def controlnet_inpaint_hed(self, image_path: str):
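- # run HED edge detection on the uploaded image to produce the ControlNet conditioning image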
- hed = HEDdetector.from_pretrained("lllyasviel/ControlNet")
- image = image_path["image"].convert("RGB").resize((512, 512))
- image = np.array(image)
- image = hed(image)
-
- return image
-
- def generate_image(
- self,
- image_path: str,
- stable_model_path: str,
- controlnet_model_path: str,
- prompt: str,
- negative_prompt: str,
- num_images_per_prompt: int,
- guidance_scale: int,
- num_inference_step: int,
- controlnet_conditioning_scale: int,
- scheduler: str,
- seed_generator: int,
- ):
- normal_image = image_path["image"].convert("RGB").resize((512, 512))
- mask_image = image_path["mask"].convert("RGB").resize((512, 512))
-
- normal_image = self.load_image(image_path=normal_image)
- mask_image = self.load_image(image_path=mask_image)
-
- control_image = self.controlnet_inpaint_hed(image_path=image_path)
-
- pipe = self.load_model(
- stable_model_path=stable_model_path,
- controlnet_model_path=controlnet_model_path,
- scheduler=scheduler,
- )
-
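- # a seed of 0 means "random": draw a fresh seed, otherwise use the given value as a fixed seed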
- if seed_generator == 0:
- random_seed = torch.randint(0, 1000000, (1,))
- generator = torch.manual_seed(random_seed)
- else:
- generator = torch.manual_seed(seed_generator)
-
- output = pipe(
- prompt=prompt,
- image=normal_image,
- mask_image=mask_image,
- control_image=control_image,
- negative_prompt=negative_prompt,
- num_images_per_prompt=num_images_per_prompt,
- num_inference_steps=num_inference_step,
- guidance_scale=guidance_scale,
- controlnet_conditioning_scale=controlnet_conditioning_scale,
- generator=generator,
- ).images
-
- return output
-
- def app():
- with gr.Blocks():
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_image_file = gr.Image(
- source="upload",
- tool="sketch",
- elem_id="image_upload",
- type="pil",
- label="Upload",
- )
-
- controlnet_hed_inpaint_prompt = gr.Textbox(
- lines=1, placeholder="Prompt", show_label=False
- )
-
- controlnet_hed_inpaint_negative_prompt = gr.Textbox(
- lines=1,
- show_label=False,
- placeholder="Negative Prompt",
- )
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_stable_model_id = (
- gr.Dropdown(
- choices=stable_inpiant_model_list,
- value=stable_inpiant_model_list[0],
- label="Stable Model Id",
- )
- )
-
- controlnet_hed_inpaint_guidance_scale = gr.Slider(
- minimum=0.1,
- maximum=15,
- step=0.1,
- value=7.5,
- label="Guidance Scale",
- )
-
- controlnet_hed_inpaint_num_inference_step = (
- gr.Slider(
- minimum=1,
- maximum=100,
- step=1,
- value=50,
- label="Num Inference Step",
- )
- )
- controlnet_hed_inpaint_num_images_per_prompt = (
- gr.Slider(
- minimum=1,
- maximum=10,
- step=1,
- value=1,
- label="Number Of Images",
- )
- )
- with gr.Row():
- with gr.Column():
- controlnet_hed_inpaint_model_id = gr.Dropdown(
- choices=controlnet_hed_model_list,
- value=controlnet_hed_model_list[0],
- label="Controlnet Model Id",
- )
- controlnet_hed_inpaint_scheduler = gr.Dropdown(
- choices=SCHEDULER_LIST,
- value=SCHEDULER_LIST[0],
- label="Scheduler",
- )
- controlnet_hed_inpaint_controlnet_conditioning_scale = gr.Slider(
- minimum=0.1,
- maximum=1.0,
- step=0.1,
- value=0.5,
- label="Controlnet Conditioning Scale",
- )
-
- controlnet_hed_inpaint_seed_generator = (
- gr.Slider(
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- label="Seed Generator",
- )
- )
-
- controlnet_hed_inpaint_predict = gr.Button(
- value="Generator"
- )
-
- with gr.Column():
- output_image = gr.Gallery(
- label="Generated images",
- show_label=False,
- elem_id="gallery",
- ).style(grid=(1, 2))
-
- controlnet_hed_inpaint_predict.click(
- fn=StableDiffusionControlNetInpaintHedGenerator().generate_image,
- inputs=[
- controlnet_hed_inpaint_image_file,
- controlnet_hed_inpaint_stable_model_id,
- controlnet_hed_inpaint_model_id,
- controlnet_hed_inpaint_prompt,
- controlnet_hed_inpaint_negative_prompt,
- controlnet_hed_inpaint_num_images_per_prompt,
- controlnet_hed_inpaint_guidance_scale,
- controlnet_hed_inpaint_num_inference_step,
- controlnet_hed_inpaint_controlnet_conditioning_scale,
- controlnet_hed_inpaint_scheduler,
- controlnet_hed_inpaint_seed_generator,
- ],
- outputs=[output_image],
- )
diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
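- # gated activation: tanh over the first n_channels, sigmoid over the remaining channels, multiplied elementwise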
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
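- # clamp each gradient to [-clip_value, clip_value] and return the total (pre-clipping) gradient norm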
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py
deleted file mode 100644
index 0aedb2e4d61bfbe7571dca9d50053f0fedaa1359..0000000000000000000000000000000000000000
--- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/util/time_counter.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import json
-import time
-
-
-class TimeCounter:
- def __init__(self) -> None:
- # initialise timedict/basetime so timeit() works without an explicit clear()
- self.clear()
-
- def clear(self):
- self.timedict = {}
- self.basetime = time.perf_counter()
-
- def timeit(self, name):
- nowtime = time.perf_counter() - self.basetime
- self.timedict[name] = nowtime
- self.basetime = time.perf_counter()
-
-
-class TimeHolder:
- def __init__(self) -> None:
- self.timedict = {}
-
- def update(self, _timedict: dict):
- for k, v in _timedict.items():
- if k not in self.timedict:
- self.timedict[k] = AverageMeter(name=k, val_only=True)
- self.timedict[k].update(val=v)
-
- def final_res(self):
- return {k: v.avg for k, v in self.timedict.items()}
-
- def __str__(self):
- return json.dumps(self.final_res(), indent=2)
-
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
-
- def __init__(self, name, fmt=":f", val_only=False):
- self.name = name
- self.fmt = fmt
- self.val_only = val_only
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- if self.val_only:
- fmtstr = "{name} {val" + self.fmt + "}"
- else:
- fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})"
- return fmtstr.format(**self.__dict__)
diff --git a/spaces/Xenova/the-tokenizer-playground/assets/index-e9ad67fa.js b/spaces/Xenova/the-tokenizer-playground/assets/index-e9ad67fa.js
deleted file mode 100644
index 385150151c99093c57cd7ddc0fab05d315297c56..0000000000000000000000000000000000000000
--- a/spaces/Xenova/the-tokenizer-playground/assets/index-e9ad67fa.js
+++ /dev/null
@@ -1,41 +0,0 @@
-(function(){const n=document.createElement("link").relList;if(n&&n.supports&&n.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const o of l)if(o.type==="childList")for(const u of o.addedNodes)u.tagName==="LINK"&&u.rel==="modulepreload"&&r(u)}).observe(document,{childList:!0,subtree:!0});function t(l){const o={};return l.integrity&&(o.integrity=l.integrity),l.referrerPolicy&&(o.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?o.credentials="include":l.crossOrigin==="anonymous"?o.credentials="omit":o.credentials="same-origin",o}function r(l){if(l.ep)return;l.ep=!0;const o=t(l);fetch(l.href,o)}})();function lc(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Wi={exports:{}},el={},Qi={exports:{}},T={};/**
- * @license React
- * react.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var Xt=Symbol.for("react.element"),oc=Symbol.for("react.portal"),uc=Symbol.for("react.fragment"),ic=Symbol.for("react.strict_mode"),sc=Symbol.for("react.profiler"),ac=Symbol.for("react.provider"),cc=Symbol.for("react.context"),fc=Symbol.for("react.forward_ref"),dc=Symbol.for("react.suspense"),pc=Symbol.for("react.memo"),mc=Symbol.for("react.lazy"),Mu=Symbol.iterator;function hc(e){return e===null||typeof e!="object"?null:(e=Mu&&e[Mu]||e["@@iterator"],typeof e=="function"?e:null)}var Ki={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},Yi=Object.assign,Xi={};function ot(e,n,t){this.props=e,this.context=n,this.refs=Xi,this.updater=t||Ki}ot.prototype.isReactComponent={};ot.prototype.setState=function(e,n){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,n,"setState")};ot.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function Gi(){}Gi.prototype=ot.prototype;function $o(e,n,t){this.props=e,this.context=n,this.refs=Xi,this.updater=t||Ki}var Ao=$o.prototype=new Gi;Ao.constructor=$o;Yi(Ao,ot.prototype);Ao.isPureReactComponent=!0;var Du=Array.isArray,Zi=Object.prototype.hasOwnProperty,Vo={current:null},Ji={key:!0,ref:!0,__self:!0,__source:!0};function qi(e,n,t){var r,l={},o=null,u=null;if(n!=null)for(r in n.ref!==void 0&&(u=n.ref),n.key!==void 0&&(o=""+n.key),n)Zi.call(n,r)&&!Ji.hasOwnProperty(r)&&(l[r]=n[r]);var i=arguments.length-2;if(i===1)l.children=t;else if(1>>1,G=E[W];if(0>>1;Wl(gl,z))gnl(er,gl)?(E[W]=er,E[gn]=z,W=gn):(E[W]=gl,E[yn]=z,W=yn);else if(gnl(er,z))E[W]=er,E[gn]=z,W=gn;else break e}}return P}function l(E,P){var z=E.sortIndex-P.sortIndex;return z!==0?z:E.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var o=performance;e.unstable_now=function(){return o.now()}}else{var u=Date,i=u.now();e.unstable_now=function(){return u.now()-i}}var s=[],c=[],h=1,m=null,p=3,k=!1,g=!1,w=!1,M=typeof setTimeout=="function"?setTimeout:null,f=typeof clearTimeout=="function"?clearTimeout:null,a=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function d(E){for(var P=t(c);P!==null;){if(P.callback===null)r(c);else if(P.startTime<=E)r(c),P.sortIndex=P.expirationTime,n(s,P);else break;P=t(c)}}function v(E){if(w=!1,d(E),!g)if(t(s)!==null)g=!0,vl(x);else{var P=t(c);P!==null&&yl(v,P.startTime-E)}}function x(E,P){g=!1,w&&(w=!1,f(N),N=-1),k=!0;var z=p;try{for(d(P),m=t(s);m!==null&&(!(m.expirationTime>P)||E&&!Pe());){var W=m.callback;if(typeof W=="function"){m.callback=null,p=m.priorityLevel;var G=W(m.expirationTime<=P);P=e.unstable_now(),typeof G=="function"?m.callback=G:m===t(s)&&r(s),d(P)}else r(s);m=t(s)}if(m!==null)var bt=!0;else{var yn=t(c);yn!==null&&yl(v,yn.startTime-P),bt=!1}return bt}finally{m=null,p=z,k=!1}}var C=!1,_=null,N=-1,H=5,L=-1;function Pe(){return!(e.unstable_now()-LE||125W?(E.sortIndex=z,n(c,E),t(s)===null&&E===t(c)&&(w?(f(N),N=-1):w=!0,yl(v,z-W))):(E.sortIndex=G,n(s,E),g||k||(g=!0,vl(x))),E},e.unstable_shouldYield=Pe,e.unstable_wrapCallback=function(E){var P=p;return function(){var z=p;p=P;try{return E.apply(this,arguments)}finally{p=z}}}})(ts);ns.exports=ts;var Pc=ns.exports;/**
- * @license React
- * react-dom.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */var rs=me,ge=Pc;function y(e){for(var n="https://reactjs.org/docs/error-decoder.html?invariant="+e,t=1;t"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),Kl=Object.prototype.hasOwnProperty,zc=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Fu={},Uu={};function Tc(e){return Kl.call(Uu,e)?!0:Kl.call(Fu,e)?!1:zc.test(e)?Uu[e]=!0:(Fu[e]=!0,!1)}function Lc(e,n,t,r){if(t!==null&&t.type===0)return!1;switch(typeof n){case"function":case"symbol":return!0;case"boolean":return r?!1:t!==null?!t.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function Rc(e,n,t,r){if(n===null||typeof n>"u"||Lc(e,n,t,r))return!0;if(r)return!1;if(t!==null)switch(t.type){case 3:return!n;case 4:return n===!1;case 5:return isNaN(n);case 6:return isNaN(n)||1>n}return!1}function se(e,n,t,r,l,o,u){this.acceptsBooleans=n===2||n===3||n===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=t,this.propertyName=e,this.type=n,this.sanitizeURL=o,this.removeEmptyString=u}var ee={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){ee[e]=new se(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var n=e[0];ee[n]=new se(n,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){ee[e]=new se(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){ee[e]=new se(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){ee[e]=new se(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){ee[e]=new se(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){ee[e]=new se(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){ee[e]=new se(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){ee[e]=new se(e,5,!1,e.toLowerCase(),null,!1,!1)});var Ho=/[\-:]([a-z])/g;function Wo(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering 
underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var n=e.replace(Ho,Wo);ee[n]=new se(n,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!1,!1)});ee.xlinkHref=new se("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){ee[e]=new se(e,1,!1,e.toLowerCase(),null,!0,!0)});function Qo(e,n,t,r){var l=ee.hasOwnProperty(n)?ee[n]:null;(l!==null?l.type!==0:r||!(2i||l[u]!==o[i]){var s=`
-`+l[u].replace(" at new "," at ");return e.displayName&&s.includes("")&&(s=s.replace("",e.displayName)),s}while(1<=u&&0<=i);break}}}finally{Sl=!1,Error.prepareStackTrace=t}return(e=e?e.displayName||e.name:"")?gt(e):""}function jc(e){switch(e.tag){case 5:return gt(e.type);case 16:return gt("Lazy");case 13:return gt("Suspense");case 19:return gt("SuspenseList");case 0:case 2:case 15:return e=xl(e.type,!1),e;case 11:return e=xl(e.type.render,!1),e;case 1:return e=xl(e.type,!0),e;default:return""}}function Zl(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case Dn:return"Fragment";case Mn:return"Portal";case Yl:return"Profiler";case Ko:return"StrictMode";case Xl:return"Suspense";case Gl:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case us:return(e.displayName||"Context")+".Consumer";case os:return(e._context.displayName||"Context")+".Provider";case Yo:var n=e.render;return e=e.displayName,e||(e=n.displayName||n.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case Xo:return n=e.displayName||null,n!==null?n:Zl(e.type)||"Memo";case Je:n=e._payload,e=e._init;try{return Zl(e(n))}catch{}}return null}function Oc(e){var n=e.type;switch(e.tag){case 24:return"Cache";case 9:return(n.displayName||"Context")+".Consumer";case 10:return(n._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=n.render,e=e.displayName||e.name||"",n.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return n;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return Zl(n);case 8:return n===Ko?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof n=="function")return n.displayName||n.name||null;if(typeof n=="string")return n}return null}function dn(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function ss(e){var n=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(n==="checkbox"||n==="radio")}function Mc(e){var n=ss(e)?"checked":"value",t=Object.getOwnPropertyDescriptor(e.constructor.prototype,n),r=""+e[n];if(!e.hasOwnProperty(n)&&typeof t<"u"&&typeof t.get=="function"&&typeof t.set=="function"){var l=t.get,o=t.set;return Object.defineProperty(e,n,{configurable:!0,get:function(){return l.call(this)},set:function(u){r=""+u,o.call(this,u)}}),Object.defineProperty(e,n,{enumerable:t.enumerable}),{getValue:function(){return r},setValue:function(u){r=""+u},stopTracking:function(){e._valueTracker=null,delete e[n]}}}}function rr(e){e._valueTracker||(e._valueTracker=Mc(e))}function as(e){if(!e)return!1;var n=e._valueTracker;if(!n)return!0;var t=n.getValue(),r="";return e&&(r=ss(e)?e.checked?"true":"false":e.value),e=r,e!==t?(n.setValue(e),!0):!1}function Lr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function Jl(e,n){var t=n.checked;return V({},n,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:t??e._wrapperState.initialChecked})}function Au(e,n){var 
t=n.defaultValue==null?"":n.defaultValue,r=n.checked!=null?n.checked:n.defaultChecked;t=dn(n.value!=null?n.value:t),e._wrapperState={initialChecked:r,initialValue:t,controlled:n.type==="checkbox"||n.type==="radio"?n.checked!=null:n.value!=null}}function cs(e,n){n=n.checked,n!=null&&Qo(e,"checked",n,!1)}function ql(e,n){cs(e,n);var t=dn(n.value),r=n.type;if(t!=null)r==="number"?(t===0&&e.value===""||e.value!=t)&&(e.value=""+t):e.value!==""+t&&(e.value=""+t);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}n.hasOwnProperty("value")?bl(e,n.type,t):n.hasOwnProperty("defaultValue")&&bl(e,n.type,dn(n.defaultValue)),n.checked==null&&n.defaultChecked!=null&&(e.defaultChecked=!!n.defaultChecked)}function Vu(e,n,t){if(n.hasOwnProperty("value")||n.hasOwnProperty("defaultValue")){var r=n.type;if(!(r!=="submit"&&r!=="reset"||n.value!==void 0&&n.value!==null))return;n=""+e._wrapperState.initialValue,t||n===e.value||(e.value=n),e.defaultValue=n}t=e.name,t!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,t!==""&&(e.name=t)}function bl(e,n,t){(n!=="number"||Lr(e.ownerDocument)!==e)&&(t==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+t&&(e.defaultValue=""+t))}var wt=Array.isArray;function Kn(e,n,t,r){if(e=e.options,n){n={};for(var l=0;l"+n.valueOf().toString()+"",n=lr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;n.firstChild;)e.appendChild(n.firstChild)}});function jt(e,n){if(n){var t=e.firstChild;if(t&&t===e.lastChild&&t.nodeType===3){t.nodeValue=n;return}}e.textContent=n}var xt={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},Dc=["Webkit","ms","Moz","O"];Object.keys(xt).forEach(function(e){Dc.forEach(function(n){n=n+e.charAt(0).toUpperCase()+e.substring(1),xt[n]=xt[e]})});function ms(e,n,t){return n==null||typeof n=="boolean"||n===""?"":t||typeof n!="number"||n===0||xt.hasOwnProperty(e)&&xt[e]?(""+n).trim():n+"px"}function hs(e,n){e=e.style;for(var t in n)if(n.hasOwnProperty(t)){var r=t.indexOf("--")===0,l=ms(t,n[t],r);t==="float"&&(t="cssFloat"),r?e.setProperty(t,l):e[t]=l}}var Ic=V({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function to(e,n){if(n){if(Ic[e]&&(n.children!=null||n.dangerouslySetInnerHTML!=null))throw Error(y(137,e));if(n.dangerouslySetInnerHTML!=null){if(n.children!=null)throw Error(y(60));if(typeof n.dangerouslySetInnerHTML!="object"||!("__html"in n.dangerouslySetInnerHTML))throw Error(y(61))}if(n.style!=null&&typeof n.style!="object")throw Error(y(62))}}function ro(e,n){if(e.indexOf("-")===-1)return typeof n.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var lo=null;function Go(e){return 
e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var oo=null,Yn=null,Xn=null;function Wu(e){if(e=Jt(e)){if(typeof oo!="function")throw Error(y(280));var n=e.stateNode;n&&(n=ol(n),oo(e.stateNode,e.type,n))}}function vs(e){Yn?Xn?Xn.push(e):Xn=[e]:Yn=e}function ys(){if(Yn){var e=Yn,n=Xn;if(Xn=Yn=null,Wu(e),n)for(e=0;e>>=0,e===0?32:31-(Yc(e)/Xc|0)|0}var or=64,ur=4194304;function kt(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Mr(e,n){var t=e.pendingLanes;if(t===0)return 0;var r=0,l=e.suspendedLanes,o=e.pingedLanes,u=t&268435455;if(u!==0){var i=u&~l;i!==0?r=kt(i):(o&=u,o!==0&&(r=kt(o)))}else u=t&~l,u!==0?r=kt(u):o!==0&&(r=kt(o));if(r===0)return 0;if(n!==0&&n!==r&&!(n&l)&&(l=r&-r,o=n&-n,l>=o||l===16&&(o&4194240)!==0))return n;if(r&4&&(r|=t&16),n=e.entangledLanes,n!==0)for(e=e.entanglements,n&=r;0t;t++)n.push(e);return n}function Gt(e,n,t){e.pendingLanes|=n,n!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,n=31-je(n),e[n]=t}function qc(e,n){var t=e.pendingLanes&~n;e.pendingLanes=n,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=n,e.mutableReadLanes&=n,e.entangledLanes&=n,n=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0=Ct),bu=String.fromCharCode(32),ei=!1;function Fs(e,n){switch(e){case"keyup":return Pf.indexOf(n.keyCode)!==-1;case"keydown":return n.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Us(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var In=!1;function Tf(e,n){switch(e){case"compositionend":return Us(n);case"keypress":return n.which!==32?null:(ei=!0,bu);case"textInput":return e=n.data,e===bu&&ei?null:e;default:return null}}function Lf(e,n){if(In)return e==="compositionend"||!ru&&Fs(e,n)?(e=Ds(),Sr=eu=nn=null,In=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(n.ctrlKey||n.altKey||n.metaKey)||n.ctrlKey&&n.altKey){if(n.char&&1=n)return{node:t,offset:n-e};e=r}e:{for(;t;){if(t.nextSibling){t=t.nextSibling;break e}t=t.parentNode}t=void 0}t=li(t)}}function Bs(e,n){return e&&n?e===n?!0:e&&e.nodeType===3?!1:n&&n.nodeType===3?Bs(e,n.parentNode):"contains"in e?e.contains(n):e.compareDocumentPosition?!!(e.compareDocumentPosition(n)&16):!1:!1}function Hs(){for(var e=window,n=Lr();n instanceof e.HTMLIFrameElement;){try{var t=typeof n.contentWindow.location.href=="string"}catch{t=!1}if(t)e=n.contentWindow;else break;n=Lr(e.document)}return n}function lu(e){var n=e&&e.nodeName&&e.nodeName.toLowerCase();return n&&(n==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||n==="textarea"||e.contentEditable==="true")}function $f(e){var n=Hs(),t=e.focusedElem,r=e.selectionRange;if(n!==t&&t&&t.ownerDocument&&Bs(t.ownerDocument.documentElement,t)){if(r!==null&&lu(t)){if(n=r.start,e=r.end,e===void 0&&(e=n),"selectionStart"in t)t.selectionStart=n,t.selectionEnd=Math.min(e,t.value.length);else if(e=(n=t.ownerDocument||document)&&n.defaultView||window,e.getSelection){e=e.getSelection();var 
l=t.textContent.length,o=Math.min(r.start,l);r=r.end===void 0?o:Math.min(r.end,l),!e.extend&&o>r&&(l=r,r=o,o=l),l=oi(t,o);var u=oi(t,r);l&&u&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==u.node||e.focusOffset!==u.offset)&&(n=n.createRange(),n.setStart(l.node,l.offset),e.removeAllRanges(),o>r?(e.addRange(n),e.extend(u.node,u.offset)):(n.setEnd(u.node,u.offset),e.addRange(n)))}}for(n=[],e=t;e=e.parentNode;)e.nodeType===1&&n.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof t.focus=="function"&&t.focus(),t=0;t=document.documentMode,Fn=null,fo=null,Nt=null,po=!1;function ui(e,n,t){var r=t.window===t?t.document:t.nodeType===9?t:t.ownerDocument;po||Fn==null||Fn!==Lr(r)||(r=Fn,"selectionStart"in r&&lu(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),Nt&&Ut(Nt,r)||(Nt=r,r=Fr(fo,"onSelect"),0An||(e.current=wo[An],wo[An]=null,An--)}function D(e,n){An++,wo[An]=e.current,e.current=n}var pn={},le=hn(pn),fe=hn(!1),Nn=pn;function bn(e,n){var t=e.type.contextTypes;if(!t)return pn;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===n)return r.__reactInternalMemoizedMaskedChildContext;var l={},o;for(o in t)l[o]=n[o];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=n,e.__reactInternalMemoizedMaskedChildContext=l),l}function de(e){return e=e.childContextTypes,e!=null}function $r(){F(fe),F(le)}function pi(e,n,t){if(le.current!==pn)throw Error(y(168));D(le,n),D(fe,t)}function qs(e,n,t){var r=e.stateNode;if(n=n.childContextTypes,typeof r.getChildContext!="function")return t;r=r.getChildContext();for(var l in r)if(!(l in n))throw Error(y(108,Oc(e)||"Unknown",l));return V({},t,r)}function Ar(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||pn,Nn=le.current,D(le,e),D(fe,fe.current),!0}function mi(e,n,t){var r=e.stateNode;if(!r)throw Error(y(169));t?(e=qs(e,n,Nn),r.__reactInternalMemoizedMergedChildContext=e,F(fe),F(le),D(le,e)):F(fe),D(fe,t)}var Ve=null,ul=!1,Il=!1;function bs(e){Ve===null?Ve=[e]:Ve.push(e)}function Jf(e){ul=!0,bs(e)}function vn(){if(!Il&&Ve!==null){Il=!0;var e=0,n=O;try{var t=Ve;for(O=1;e>=u,l-=u,Be=1<<32-je(n)+l|t<N?(H=_,_=null):H=_.sibling;var L=p(f,_,d[N],v);if(L===null){_===null&&(_=H);break}e&&_&&L.alternate===null&&n(f,_),a=o(L,a,N),C===null?x=L:C.sibling=L,C=L,_=H}if(N===d.length)return t(f,_),U&&wn(f,N),x;if(_===null){for(;NN?(H=_,_=null):H=_.sibling;var Pe=p(f,_,L.value,v);if(Pe===null){_===null&&(_=H);break}e&&_&&Pe.alternate===null&&n(f,_),a=o(Pe,a,N),C===null?x=Pe:C.sibling=Pe,C=Pe,_=H}if(L.done)return t(f,_),U&&wn(f,N),x;if(_===null){for(;!L.done;N++,L=d.next())L=m(f,L.value,v),L!==null&&(a=o(L,a,N),C===null?x=L:C.sibling=L,C=L);return U&&wn(f,N),x}for(_=r(f,_);!L.done;N++,L=d.next())L=k(_,f,N,L.value,v),L!==null&&(e&&L.alternate!==null&&_.delete(L.key===null?N:L.key),a=o(L,a,N),C===null?x=L:C.sibling=L,C=L);return e&&_.forEach(function(st){return n(f,st)}),U&&wn(f,N),x}function M(f,a,d,v){if(typeof d=="object"&&d!==null&&d.type===Dn&&d.key===null&&(d=d.props.children),typeof d=="object"&&d!==null){switch(d.$$typeof){case tr:e:{for(var x=d.key,C=a;C!==null;){if(C.key===x){if(x=d.type,x===Dn){if(C.tag===7){t(f,C.sibling),a=l(C,d.props.children),a.return=f,f=a;break e}}else if(C.elementType===x||typeof 
x=="object"&&x!==null&&x.$$typeof===Je&&Si(x)===C.type){t(f,C.sibling),a=l(C,d.props),a.ref=ht(f,C,d),a.return=f,f=a;break e}t(f,C);break}else n(f,C);C=C.sibling}d.type===Dn?(a=_n(d.props.children,f.mode,v,d.key),a.return=f,f=a):(v=Tr(d.type,d.key,d.props,null,f.mode,v),v.ref=ht(f,a,d),v.return=f,f=v)}return u(f);case Mn:e:{for(C=d.key;a!==null;){if(a.key===C)if(a.tag===4&&a.stateNode.containerInfo===d.containerInfo&&a.stateNode.implementation===d.implementation){t(f,a.sibling),a=l(a,d.children||[]),a.return=f,f=a;break e}else{t(f,a);break}else n(f,a);a=a.sibling}a=Wl(d,f.mode,v),a.return=f,f=a}return u(f);case Je:return C=d._init,M(f,a,C(d._payload),v)}if(wt(d))return g(f,a,d,v);if(ct(d))return w(f,a,d,v);pr(f,d)}return typeof d=="string"&&d!==""||typeof d=="number"?(d=""+d,a!==null&&a.tag===6?(t(f,a.sibling),a=l(a,d),a.return=f,f=a):(t(f,a),a=Hl(d,f.mode,v),a.return=f,f=a),u(f)):t(f,a)}return M}var nt=ia(!0),sa=ia(!1),qt={},$e=hn(qt),Bt=hn(qt),Ht=hn(qt);function En(e){if(e===qt)throw Error(y(174));return e}function pu(e,n){switch(D(Ht,n),D(Bt,e),D($e,qt),e=n.nodeType,e){case 9:case 11:n=(n=n.documentElement)?n.namespaceURI:no(null,"");break;default:e=e===8?n.parentNode:n,n=e.namespaceURI||null,e=e.tagName,n=no(n,e)}F($e),D($e,n)}function tt(){F($e),F(Bt),F(Ht)}function aa(e){En(Ht.current);var n=En($e.current),t=no(n,e.type);n!==t&&(D(Bt,e),D($e,t))}function mu(e){Bt.current===e&&(F($e),F(Bt))}var $=hn(0);function Kr(e){for(var n=e;n!==null;){if(n.tag===13){var t=n.memoizedState;if(t!==null&&(t=t.dehydrated,t===null||t.data==="$?"||t.data==="$!"))return n}else if(n.tag===19&&n.memoizedProps.revealOrder!==void 0){if(n.flags&128)return n}else if(n.child!==null){n.child.return=n,n=n.child;continue}if(n===e)break;for(;n.sibling===null;){if(n.return===null||n.return===e)return null;n=n.return}n.sibling.return=n.return,n=n.sibling}return null}var Fl=[];function hu(){for(var e=0;et?t:4,e(!0);var r=Ul.transition;Ul.transition={};try{e(!1),n()}finally{O=t,Ul.transition=r}}function _a(){return Ne().memoizedState}function nd(e,n,t){var r=cn(e);if(t={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null},Na(e))Pa(n,t);else if(t=ra(e,n,t,r),t!==null){var l=ue();Oe(t,e,r,l),za(t,n,r)}}function td(e,n,t){var r=cn(e),l={lane:r,action:t,hasEagerState:!1,eagerState:null,next:null};if(Na(e))Pa(n,l);else{var o=e.alternate;if(e.lanes===0&&(o===null||o.lanes===0)&&(o=n.lastRenderedReducer,o!==null))try{var u=n.lastRenderedState,i=o(u,t);if(l.hasEagerState=!0,l.eagerState=i,Me(i,u)){var s=n.interleaved;s===null?(l.next=l,fu(n)):(l.next=s.next,s.next=l),n.interleaved=l;return}}catch{}finally{}t=ra(e,n,l,r),t!==null&&(l=ue(),Oe(t,e,r,l),za(t,n,r))}}function Na(e){var n=e.alternate;return e===A||n!==null&&n===A}function Pa(e,n){Pt=Yr=!0;var t=e.pending;t===null?n.next=n:(n.next=t.next,t.next=n),e.pending=n}function za(e,n,t){if(t&4194240){var r=n.lanes;r&=e.pendingLanes,t|=r,n.lanes=t,Jo(e,t)}}var Xr={readContext:_e,useCallback:ne,useContext:ne,useEffect:ne,useImperativeHandle:ne,useInsertionEffect:ne,useLayoutEffect:ne,useMemo:ne,useReducer:ne,useRef:ne,useState:ne,useDebugValue:ne,useDeferredValue:ne,useTransition:ne,useMutableSource:ne,useSyncExternalStore:ne,useId:ne,unstable_isNewReconciler:!1},rd={readContext:_e,useCallback:function(e,n){return Ie().memoizedState=[e,n===void 0?null:n],e},useContext:_e,useEffect:Ei,useImperativeHandle:function(e,n,t){return t=t!=null?t.concat([e]):null,_r(4194308,4,ka.bind(null,n,e),t)},useLayoutEffect:function(e,n){return 
_r(4194308,4,e,n)},useInsertionEffect:function(e,n){return _r(4,2,e,n)},useMemo:function(e,n){var t=Ie();return n=n===void 0?null:n,e=e(),t.memoizedState=[e,n],e},useReducer:function(e,n,t){var r=Ie();return n=t!==void 0?t(n):n,r.memoizedState=r.baseState=n,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:n},r.queue=e,e=e.dispatch=nd.bind(null,A,e),[r.memoizedState,e]},useRef:function(e){var n=Ie();return e={current:e},n.memoizedState=e},useState:xi,useDebugValue:ku,useDeferredValue:function(e){return Ie().memoizedState=e},useTransition:function(){var e=xi(!1),n=e[0];return e=ed.bind(null,e[1]),Ie().memoizedState=e,[n,e]},useMutableSource:function(){},useSyncExternalStore:function(e,n,t){var r=A,l=Ie();if(U){if(t===void 0)throw Error(y(407));t=t()}else{if(t=n(),J===null)throw Error(y(349));zn&30||da(r,n,t)}l.memoizedState=t;var o={value:t,getSnapshot:n};return l.queue=o,Ei(ma.bind(null,r,o,e),[e]),r.flags|=2048,Kt(9,pa.bind(null,r,o,t,n),void 0,null),t},useId:function(){var e=Ie(),n=J.identifierPrefix;if(U){var t=He,r=Be;t=(r&~(1<<32-je(r)-1)).toString(32)+t,n=":"+n+"R"+t,t=Wt++,0<\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=u.createElement(t,{is:r.is}):(e=u.createElement(t),t==="select"&&(u=e,r.multiple?u.multiple=!0:r.size&&(u.size=r.size))):e=u.createElementNS(e,t),e[Fe]=n,e[Vt]=r,Fa(e,n,!1,!1),n.stateNode=e;e:{switch(u=ro(t,r),t){case"dialog":I("cancel",e),I("close",e),l=r;break;case"iframe":case"object":case"embed":I("load",e),l=r;break;case"video":case"audio":for(l=0;llt&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304)}else{if(!r)if(e=Kr(u),e!==null){if(n.flags|=128,r=!0,t=e.updateQueue,t!==null&&(n.updateQueue=t,n.flags|=4),vt(o,!0),o.tail===null&&o.tailMode==="hidden"&&!u.alternate&&!U)return te(n),null}else 2*Q()-o.renderingStartTime>lt&&t!==1073741824&&(n.flags|=128,r=!0,vt(o,!1),n.lanes=4194304);o.isBackwards?(u.sibling=n.child,n.child=u):(t=o.last,t!==null?t.sibling=u:n.child=u,o.last=u)}return o.tail!==null?(n=o.tail,o.rendering=n,o.tail=n.sibling,o.renderingStartTime=Q(),n.sibling=null,t=$.current,D($,r?t&1|2:t&1),n):(te(n),null);case 22:case 23:return Nu(),r=n.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(n.flags|=8192),r&&n.mode&1?he&1073741824&&(te(n),n.subtreeFlags&6&&(n.flags|=8192)):te(n),null;case 24:return null;case 25:return null}throw Error(y(156,n.tag))}function fd(e,n){switch(uu(n),n.tag){case 1:return de(n.type)&&$r(),e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 3:return tt(),F(fe),F(le),hu(),e=n.flags,e&65536&&!(e&128)?(n.flags=e&-65537|128,n):null;case 5:return mu(n),null;case 13:if(F($),e=n.memoizedState,e!==null&&e.dehydrated!==null){if(n.alternate===null)throw Error(y(340));et()}return e=n.flags,e&65536?(n.flags=e&-65537|128,n):null;case 19:return F($),null;case 4:return tt(),null;case 10:return cu(n.type._context),null;case 22:case 23:return Nu(),null;case 24:return null;default:return null}}var hr=!1,re=!1,dd=typeof WeakSet=="function"?WeakSet:Set,S=null;function Wn(e,n){var t=e.ref;if(t!==null)if(typeof t=="function")try{t(null)}catch(r){B(e,n,r)}else t.current=null}function Ro(e,n,t){try{t()}catch(r){B(e,n,r)}}var ji=!1;function pd(e,n){if(mo=Dr,e=Hs(),lu(e)){if("selectionStart"in e)var t={start:e.selectionStart,end:e.selectionEnd};else e:{t=(t=e.ownerDocument)&&t.defaultView||window;var r=t.getSelection&&t.getSelection();if(r&&r.rangeCount!==0){t=r.anchorNode;var l=r.anchorOffset,o=r.focusNode;r=r.focusOffset;try{t.nodeType,o.nodeType}catch{t=null;break e}var 
u=0,i=-1,s=-1,c=0,h=0,m=e,p=null;n:for(;;){for(var k;m!==t||l!==0&&m.nodeType!==3||(i=u+l),m!==o||r!==0&&m.nodeType!==3||(s=u+r),m.nodeType===3&&(u+=m.nodeValue.length),(k=m.firstChild)!==null;)p=m,m=k;for(;;){if(m===e)break n;if(p===t&&++c===l&&(i=u),p===o&&++h===r&&(s=u),(k=m.nextSibling)!==null)break;m=p,p=m.parentNode}m=k}t=i===-1||s===-1?null:{start:i,end:s}}else t=null}t=t||{start:0,end:0}}else t=null;for(ho={focusedElem:e,selectionRange:t},Dr=!1,S=n;S!==null;)if(n=S,e=n.child,(n.subtreeFlags&1028)!==0&&e!==null)e.return=n,S=e;else for(;S!==null;){n=S;try{var g=n.alternate;if(n.flags&1024)switch(n.tag){case 0:case 11:case 15:break;case 1:if(g!==null){var w=g.memoizedProps,M=g.memoizedState,f=n.stateNode,a=f.getSnapshotBeforeUpdate(n.elementType===n.type?w:Te(n.type,w),M);f.__reactInternalSnapshotBeforeUpdate=a}break;case 3:var d=n.stateNode.containerInfo;d.nodeType===1?d.textContent="":d.nodeType===9&&d.documentElement&&d.removeChild(d.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(y(163))}}catch(v){B(n,n.return,v)}if(e=n.sibling,e!==null){e.return=n.return,S=e;break}S=n.return}return g=ji,ji=!1,g}function zt(e,n,t){var r=n.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var o=l.destroy;l.destroy=void 0,o!==void 0&&Ro(n,t,o)}l=l.next}while(l!==r)}}function al(e,n){if(n=n.updateQueue,n=n!==null?n.lastEffect:null,n!==null){var t=n=n.next;do{if((t.tag&e)===e){var r=t.create;t.destroy=r()}t=t.next}while(t!==n)}}function jo(e){var n=e.ref;if(n!==null){var t=e.stateNode;switch(e.tag){case 5:e=t;break;default:e=t}typeof n=="function"?n(e):n.current=e}}function Aa(e){var n=e.alternate;n!==null&&(e.alternate=null,Aa(n)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(n=e.stateNode,n!==null&&(delete n[Fe],delete n[Vt],delete n[go],delete n[Gf],delete n[Zf])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Va(e){return e.tag===5||e.tag===3||e.tag===4}function Oi(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Va(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function Oo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.nodeType===8?t.parentNode.insertBefore(e,n):t.insertBefore(e,n):(t.nodeType===8?(n=t.parentNode,n.insertBefore(e,t)):(n=t,n.appendChild(e)),t=t._reactRootContainer,t!=null||n.onclick!==null||(n.onclick=Ur));else if(r!==4&&(e=e.child,e!==null))for(Oo(e,n,t),e=e.sibling;e!==null;)Oo(e,n,t),e=e.sibling}function Mo(e,n,t){var r=e.tag;if(r===5||r===6)e=e.stateNode,n?t.insertBefore(e,n):t.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Mo(e,n,t),e=e.sibling;e!==null;)Mo(e,n,t),e=e.sibling}var q=null,Le=!1;function Ze(e,n,t){for(t=t.child;t!==null;)Ba(e,n,t),t=t.sibling}function Ba(e,n,t){if(Ue&&typeof Ue.onCommitFiberUnmount=="function")try{Ue.onCommitFiberUnmount(nl,t)}catch{}switch(t.tag){case 5:re||Wn(t,n);case 6:var r=q,l=Le;q=null,Ze(e,n,t),q=r,Le=l,q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?e.parentNode.removeChild(t):e.removeChild(t)):q.removeChild(t.stateNode));break;case 18:q!==null&&(Le?(e=q,t=t.stateNode,e.nodeType===8?Dl(e.parentNode,t):e.nodeType===1&&Dl(e,t),It(e)):Dl(q,t.stateNode));break;case 4:r=q,l=Le,q=t.stateNode.containerInfo,Le=!0,Ze(e,n,t),q=r,Le=l;break;case 0:case 11:case 
14:case 15:if(!re&&(r=t.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var o=l,u=o.destroy;o=o.tag,u!==void 0&&(o&2||o&4)&&Ro(t,n,u),l=l.next}while(l!==r)}Ze(e,n,t);break;case 1:if(!re&&(Wn(t,n),r=t.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=t.memoizedProps,r.state=t.memoizedState,r.componentWillUnmount()}catch(i){B(t,n,i)}Ze(e,n,t);break;case 21:Ze(e,n,t);break;case 22:t.mode&1?(re=(r=re)||t.memoizedState!==null,Ze(e,n,t),re=r):Ze(e,n,t);break;default:Ze(e,n,t)}}function Mi(e){var n=e.updateQueue;if(n!==null){e.updateQueue=null;var t=e.stateNode;t===null&&(t=e.stateNode=new dd),n.forEach(function(r){var l=xd.bind(null,e,r);t.has(r)||(t.add(r),r.then(l,l))})}}function ze(e,n){var t=n.deletions;if(t!==null)for(var r=0;rl&&(l=u),r&=~o}if(r=l,r=Q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*hd(r/1960))-r,10e?16:e,tn===null)var r=!1;else{if(e=tn,tn=null,Jr=0,j&6)throw Error(y(331));var l=j;for(j|=4,S=e.current;S!==null;){var o=S,u=o.child;if(S.flags&16){var i=o.deletions;if(i!==null){for(var s=0;sQ()-Cu?Cn(e,0):Eu|=t),pe(e,n)}function Za(e,n){n===0&&(e.mode&1?(n=ur,ur<<=1,!(ur&130023424)&&(ur=4194304)):n=1);var t=ue();e=Ye(e,n),e!==null&&(Gt(e,n,t),pe(e,t))}function Sd(e){var n=e.memoizedState,t=0;n!==null&&(t=n.retryLane),Za(e,t)}function xd(e,n){var t=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(t=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(y(314))}r!==null&&r.delete(n),Za(e,t)}var Ja;Ja=function(e,n,t){if(e!==null)if(e.memoizedProps!==n.pendingProps||fe.current)ce=!0;else{if(!(e.lanes&t)&&!(n.flags&128))return ce=!1,ad(e,n,t);ce=!!(e.flags&131072)}else ce=!1,U&&n.flags&1048576&&ea(n,Br,n.index);switch(n.lanes=0,n.tag){case 2:var r=n.type;Nr(e,n),e=n.pendingProps;var l=bn(n,le.current);Zn(n,t),l=yu(null,n,r,e,l,t);var o=gu();return n.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(n.tag=1,n.memoizedState=null,n.updateQueue=null,de(r)?(o=!0,Ar(n)):o=!1,n.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,du(n),l.updater=il,n.stateNode=l,l._reactInternals=n,Co(n,r,e,t),n=Po(null,n,r,!0,o,t)):(n.tag=0,U&&o&&ou(n),oe(null,n,l,t),n=n.child),n;case 16:r=n.elementType;e:{switch(Nr(e,n),e=n.pendingProps,l=r._init,r=l(r._payload),n.type=r,l=n.tag=Cd(r),e=Te(r,e),l){case 0:n=No(null,n,r,e,t);break e;case 1:n=Ti(null,n,r,e,t);break e;case 11:n=Pi(null,n,r,e,t);break e;case 14:n=zi(null,n,r,Te(r.type,e),t);break e}throw Error(y(306,r,""))}return n;case 0:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),No(e,n,r,l,t);case 1:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Ti(e,n,r,l,t);case 3:e:{if(Ma(n),e===null)throw Error(y(387));r=n.pendingProps,o=n.memoizedState,l=o.element,la(e,n),Qr(n,r,null,t);var u=n.memoizedState;if(r=u.element,o.isDehydrated)if(o={element:r,isDehydrated:!1,cache:u.cache,pendingSuspenseBoundaries:u.pendingSuspenseBoundaries,transitions:u.transitions},n.updateQueue.baseState=o,n.memoizedState=o,n.flags&256){l=rt(Error(y(423)),n),n=Li(e,n,r,t,l);break e}else if(r!==l){l=rt(Error(y(424)),n),n=Li(e,n,r,t,l);break e}else for(ve=un(n.stateNode.containerInfo.firstChild),ye=n,U=!0,Re=null,t=sa(n,null,r,t),n.child=t;t;)t.flags=t.flags&-3|4096,t=t.sibling;else{if(et(),r===l){n=Xe(e,n,t);break e}oe(e,n,r,t)}n=n.child}return n;case 5:return 
aa(n),e===null&&So(n),r=n.type,l=n.pendingProps,o=e!==null?e.memoizedProps:null,u=l.children,vo(r,l)?u=null:o!==null&&vo(r,o)&&(n.flags|=32),Oa(e,n),oe(e,n,u,t),n.child;case 6:return e===null&&So(n),null;case 13:return Da(e,n,t);case 4:return pu(n,n.stateNode.containerInfo),r=n.pendingProps,e===null?n.child=nt(n,null,r,t):oe(e,n,r,t),n.child;case 11:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Pi(e,n,r,l,t);case 7:return oe(e,n,n.pendingProps,t),n.child;case 8:return oe(e,n,n.pendingProps.children,t),n.child;case 12:return oe(e,n,n.pendingProps.children,t),n.child;case 10:e:{if(r=n.type._context,l=n.pendingProps,o=n.memoizedProps,u=l.value,D(Hr,r._currentValue),r._currentValue=u,o!==null)if(Me(o.value,u)){if(o.children===l.children&&!fe.current){n=Xe(e,n,t);break e}}else for(o=n.child,o!==null&&(o.return=n);o!==null;){var i=o.dependencies;if(i!==null){u=o.child;for(var s=i.firstContext;s!==null;){if(s.context===r){if(o.tag===1){s=We(-1,t&-t),s.tag=2;var c=o.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}o.lanes|=t,s=o.alternate,s!==null&&(s.lanes|=t),xo(o.return,t,n),i.lanes|=t;break}s=s.next}}else if(o.tag===10)u=o.type===n.type?null:o.child;else if(o.tag===18){if(u=o.return,u===null)throw Error(y(341));u.lanes|=t,i=u.alternate,i!==null&&(i.lanes|=t),xo(u,t,n),u=o.sibling}else u=o.child;if(u!==null)u.return=o;else for(u=o;u!==null;){if(u===n){u=null;break}if(o=u.sibling,o!==null){o.return=u.return,u=o;break}u=u.return}o=u}oe(e,n,l.children,t),n=n.child}return n;case 9:return l=n.type,r=n.pendingProps.children,Zn(n,t),l=_e(l),r=r(l),n.flags|=1,oe(e,n,r,t),n.child;case 14:return r=n.type,l=Te(r,n.pendingProps),l=Te(r.type,l),zi(e,n,r,l,t);case 15:return Ra(e,n,n.type,n.pendingProps,t);case 17:return r=n.type,l=n.pendingProps,l=n.elementType===r?l:Te(r,l),Nr(e,n),n.tag=1,de(r)?(e=!0,Ar(n)):e=!1,Zn(n,t),ua(n,r,l),Co(n,r,l,t),Po(null,n,r,!0,e,t);case 19:return Ia(e,n,t);case 22:return ja(e,n,t)}throw Error(y(156,n.tag))};function qa(e,n){return Cs(e,n)}function Ed(e,n,t,r){this.tag=e,this.key=t,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=n,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Ee(e,n,t,r){return new Ed(e,n,t,r)}function zu(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Cd(e){if(typeof e=="function")return zu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Yo)return 11;if(e===Xo)return 14}return 2}function fn(e,n){var t=e.alternate;return t===null?(t=Ee(e.tag,n,e.key,e.mode),t.elementType=e.elementType,t.type=e.type,t.stateNode=e.stateNode,t.alternate=e,e.alternate=t):(t.pendingProps=n,t.type=e.type,t.flags=0,t.subtreeFlags=0,t.deletions=null),t.flags=e.flags&14680064,t.childLanes=e.childLanes,t.lanes=e.lanes,t.child=e.child,t.memoizedProps=e.memoizedProps,t.memoizedState=e.memoizedState,t.updateQueue=e.updateQueue,n=e.dependencies,t.dependencies=n===null?null:{lanes:n.lanes,firstContext:n.firstContext},t.sibling=e.sibling,t.index=e.index,t.ref=e.ref,t}function Tr(e,n,t,r,l,o){var u=2;if(r=e,typeof e=="function")zu(e)&&(u=1);else if(typeof e=="string")u=5;else e:switch(e){case Dn:return _n(t.children,l,o,n);case Ko:u=8,l|=8;break;case Yl:return e=Ee(12,t,n,l|2),e.elementType=Yl,e.lanes=o,e;case Xl:return e=Ee(13,t,n,l),e.elementType=Xl,e.lanes=o,e;case Gl:return 
e=Ee(19,t,n,l),e.elementType=Gl,e.lanes=o,e;case is:return fl(t,l,o,n);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case os:u=10;break e;case us:u=9;break e;case Yo:u=11;break e;case Xo:u=14;break e;case Je:u=16,r=null;break e}throw Error(y(130,e==null?e:typeof e,""))}return n=Ee(u,t,n,l),n.elementType=e,n.type=r,n.lanes=o,n}function _n(e,n,t,r){return e=Ee(7,e,r,n),e.lanes=t,e}function fl(e,n,t,r){return e=Ee(22,e,r,n),e.elementType=is,e.lanes=t,e.stateNode={isHidden:!1},e}function Hl(e,n,t){return e=Ee(6,e,null,n),e.lanes=t,e}function Wl(e,n,t){return n=Ee(4,e.children!==null?e.children:[],e.key,n),n.lanes=t,n.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},n}function _d(e,n,t,r,l){this.tag=n,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=Cl(0),this.expirationTimes=Cl(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=Cl(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Tu(e,n,t,r,l,o,u,i,s){return e=new _d(e,n,t,i,s),n===1?(n=1,o===!0&&(n|=8)):n=0,o=Ee(3,null,null,n),e.current=o,o.stateNode=e,o.memoizedState={element:r,isDehydrated:t,cache:null,transitions:null,pendingSuspenseBoundaries:null},du(o),e}function Nd(e,n,t){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(tc)}catch(e){console.error(e)}}tc(),es.exports=we;var Rd=es.exports,Bi=Rd;Ql.createRoot=Bi.createRoot,Ql.hydrateRoot=Bi.hydrateRoot;const Hi=["bg-purple-300","bg-green-300","bg-yellow-300","bg-red-300","bg-blue-300"];function jd({text:e,position:n}){return e!==`
-`?R.jsx("span",{className:`leading-5 inline-block ${Hi[n%Hi.length]}`,children:e}):R.jsx("br",{})}function Od(){const[e,n]=me.useState(""),[t,r]=me.useState([]),[l,o]=me.useState([]),[u,i]=me.useState("text"),[s,c]=me.useState("Xenova/gpt-4"),h=me.useRef(null),m=me.useRef(null);me.useEffect(()=>{m.current||(m.current=new Worker(new URL("/assets/worker-6c002022.js",self.location),{type:"module"}));const g=w=>{r(w.data.token_ids),o(w.data.decoded)};return m.current.addEventListener("message",g),()=>m.current.removeEventListener("message",g)},[]);const p=me.useCallback(g=>{const w=s,M=g.target.value;M.length>1e4&&(i(null),console.log("User most likely pasted in a large body of text (> 10k chars), so we hide the output (until specifically requested by the user).")),n(M),m.current.postMessage({model_id:w,text:M})},[s]),k=me.useCallback(g=>{const w=g.target.value;c(w),m.current.postMessage({model_id:w,text:e})},[e]);return R.jsxs("div",{className:"w-full max-w-[720px] flex flex-col gap-4 items-center",children:[R.jsxs("div",{children:[R.jsx("h1",{className:"text-5xl font-bold mb-2",children:"The Tokenizer Playground"}),R.jsxs("h2",{className:"text-lg font-normal",children:["Experiment with different tokenizers (running ",R.jsx("a",{className:"text-gray-900 underline",href:"https://github.com/xenova/transformers.js",children:"locally"})," in your browser)."]})]}),R.jsx("div",{children:R.jsxs("select",{value:s,onChange:k,className:"bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2",children:[R.jsx("option",{value:"Xenova/gpt-4",children:"gpt-4 / gpt-3.5-turbo / text-embedding-ada-002"}),R.jsx("option",{value:"Xenova/text-davinci-003",children:"text-davinci-003 / text-davinci-002"}),R.jsx("option",{value:"Xenova/gpt-3",children:"gpt-3"})]})}),R.jsx("textarea",{onChange:p,rows:"8",className:"font-mono text-lg block w-full p-2.5 text-gray-900 bg-gray-50 rounded-lg border border-gray-200",placeholder:"Enter some text"}),R.jsxs("div",{className:"flex justify-center gap-5",children:[R.jsxs("div",{className:"flex flex-col",children:[R.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Tokens"}),R.jsx("h3",{className:"font-semibold text-3xl",children:t.length.toLocaleString()})]}),R.jsxs("div",{className:"flex flex-col",children:[R.jsx("h2",{className:"font-semibold uppercase leading-4",children:"Characters"}),R.jsx("h3",{className:"font-semibold text-3xl",children:e.length.toLocaleString()})]})]}),R.jsx("div",{ref:h,className:"font-mono text-lg p-2.5 w-full bg-gray-100 rounded-lg border border-gray-200 whitespace-pre-wrap text-left h-[200px] overflow-y-auto",children:u==="text"?l.map((g,w)=>R.jsx(jd,{text:g,position:w},w)):u==="token_ids"?`[${t.join(", ")}]`:null}),R.jsxs("div",{className:"flex items-center gap-2 self-end",children:[R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u==="text",onChange:()=>i("text"),id:"output-radio-1",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-1",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Text"})]}),R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u==="token_ids",onChange:()=>i("token_ids"),id:"output-radio-2",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 
focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-2",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Token IDs"})]}),R.jsxs("div",{className:"flex items-center",children:[R.jsx("input",{checked:u===null,onChange:()=>i(null),id:"output-radio-3",type:"radio",value:"",name:"output-radio",className:"w-4 h-4 text-blue-600 bg-gray-100 border-gray-300 focus:ring-blue-500"}),R.jsx("label",{htmlFor:"output-radio-3",className:"ml-1 text-sm font-medium text-gray-900 dark:text-gray-300",children:"Hide"})]})]})]})}Ql.createRoot(document.getElementById("root")).render(R.jsx(kc.StrictMode,{children:R.jsx(Od,{})}));
diff --git a/spaces/Xeraphinite/Coursera-GPT/app.py b/spaces/Xeraphinite/Coursera-GPT/app.py
deleted file mode 100644
index 281860209371e5f9460f6bdc9a762ed4d978d6f6..0000000000000000000000000000000000000000
--- a/spaces/Xeraphinite/Coursera-GPT/app.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import gradio as gr
-import openai
-import os
-
-def predict(education_level, annual_income, employment_status, course_name, openai_api_key):
- # 0. preparation
- os.environ['OPENAI_API_KEY'] = openai_api_key
- openai.api_key = openai_api_key
-
- # System prompt (in Chinese): "You will act as a professional applicant assistant and complete a Financial Aid task on Coursera, the world's largest MOOC platform; the course information and the task follow."
- overall_prompt = '''你将作为一名专业的申请人助手,在世界上最大的 MOOC 平台 Coursera 上完成一份 Financial Aid 相关的任务,之后将会给你相关的课程信息,任务如下:'''
- role = f'''个人信息: a {education_level} with {annual_income} annual income and {employment_status}.'''
-
- # 1. Reasons for aid
- # Task prompt (in Chinese): write the Financial Aid application in 150-300 words; output only the body of "Reasons for Financial Aid Application", with no heading, surrounding text, or explanations.
- task = '请你完成一份 Financial Aid 申请表,字数在 150-300 words 之间,内容需要包括,请注意,输出仅仅包括 Reasons for Financial Aid ' \
- 'Application 的内容即可,前后均不需要添加任何东西(包括 "Reasons for Financial Aid Application:"),也不需要输出任何解释性语句.'
-
- response = openai.ChatCompletion.create(
- model='gpt-3.5-turbo',
- temperature=0.0,
- messages=[
- {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
- {'role': 'user', 'name': 'task', 'content': task},
- {'role': 'user', 'name': 'role', 'content': role},
- {'role': 'user', 'name': 'course_name', 'content': f'Course name: {course_name}'},
- {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
- {'role': 'user', 'name': 'task', 'content': task},
- ]
- )
-
- reasons_for_aid = response.choices[0].message.content
- reasons_for_aid = reasons_for_aid.replace('Reasons for Financial Aid Application:\n', '')
-
- while reasons_for_aid.startswith('\n'):
- reasons_for_aid = reasons_for_aid[1:]
-
- # 2. How will your selected course help with your goals?
- # Task prompt (in Chinese): answer "How will your selected course help with your goals?" in 150-300 words; output only the answer, with no surrounding text or explanations.
- task = '请你根据给出的信息回答:How will your selected course help with your goals? 答案字数在 150-300 words ' \
- '之间,请注意,输出仅仅包括问题的答案即可,前后均不需要添加任何东西,也不需要输出任何解释性语句.'
-
- response = openai.ChatCompletion.create(
- model='gpt-3.5-turbo',
- temperature=0.0,
- messages=[
- {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
- {'role': 'user', 'name': 'task', 'content': task},
- {'role': 'user', 'name': 'role', 'content': role},
- {'role': 'user', 'name': 'course_name', 'content': f'Course name: {course_name}'},
- {'role': 'system', 'name': 'overall_prompt', 'content': overall_prompt},
- {'role': 'user', 'name': 'task', 'content': task},
- ]
- )
-
- how_will_course_help = response.choices[0].message.content
-
- return reasons_for_aid, how_will_course_help
-
-
-params = {
- 'education_level': 'College Degree',
- 'annual_income': 0,
- 'employment_status': 'Student',
-}
-
-if __name__ == '__main__':
- gr.Interface(
- fn=predict,
- inputs=[
- gr.components.Dropdown(['High School', 'Some College', 'College Degree', 'Master’s/Advanced degree', 'Other'], value=params['education_level'], label='Education'),
- gr.components.Slider(0, 100, params['annual_income'], label='Annual Income($ USD)'),
- gr.components.Dropdown(['Full-time', 'Part-time', 'Unemployed', 'Student', 'Other'], value=params['employment_status'], label='Employment Status'),
- gr.Textbox(label="Course Name"),
- gr.Textbox(label="OpenAI API Key")
- ],
- outputs=[
- gr.Textbox(label="Reason you applied for aid", show_copy_button=True),
- gr.Textbox(label="How will your selected course help with your goals?", show_copy_button=True)
- ],
- ).launch()
diff --git a/spaces/Yan233th/so-vits-svc-models/spec_gen.py b/spaces/Yan233th/so-vits-svc-models/spec_gen.py
deleted file mode 100644
index 9476395adab6fa841fde10c05fbb92902310ebd4..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/spec_gen.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from data_utils import TextAudioSpeakerLoader
-import json
-from tqdm import tqdm
-
-from utils import HParams
-
-config_path = 'configs/config.json'
-with open(config_path, "r") as f:
- data = f.read()
-config = json.loads(data)
-hps = HParams(**config)
-
-train_dataset = TextAudioSpeakerLoader("filelists/train.txt", hps)
-test_dataset = TextAudioSpeakerLoader("filelists/test.txt", hps)
-eval_dataset = TextAudioSpeakerLoader("filelists/val.txt", hps)
-
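-# Iterate each split once so the loader computes and caches its spectrograms up front (assumed purpose of the otherwise empty loops below).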
-for _ in tqdm(train_dataset):
- pass
-for _ in tqdm(eval_dataset):
- pass
-for _ in tqdm(test_dataset):
- pass
\ No newline at end of file
diff --git a/spaces/YangHao520/testCreateFile/app.py b/spaces/YangHao520/testCreateFile/app.py
deleted file mode 100644
index 272c97f72e2de1d6f8c96aa96a88a4b219548de0..0000000000000000000000000000000000000000
--- a/spaces/YangHao520/testCreateFile/app.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-
-import gradio as gr
-import tempfile
-import shutil
-def generate_file(file_obj):
- global tmpdir
- try:
- print('Temporary directory: {}'.format(tmpdir))
- print('Uploaded file path: {}'.format(file_obj.name)) # the absolute path where Gradio saved the uploaded file
-
- # Once we have the absolute path of the uploaded file, the rest works as usual
-
- # Copy the file into the temporary directory
- shutil.copy(file_obj.name, tmpdir)
-
- # Get the name of the file uploaded to Gradio
- FileName=os.path.basename(file_obj.name)
-
- # Path of the copy inside the temporary directory
- NewfilePath=os.path.join(tmpdir,FileName)
- print(NewfilePath)
-
- # Open the copied file
- with open(NewfilePath, 'rb') as file_obj:
-
- # Create a new local file and write the uploaded content into it
- outputPath=os.path.join(tmpdir,"New"+FileName)
- with open(outputPath,'wb') as w:
- w.write(file_obj.read())
-
- # Return the path of the new file (this is what Gradio offers for download)
- return outputPath
- except Exception:
- return ''
-def main():
- global tmpdir
- with tempfile.TemporaryDirectory(dir='.') as tmpdir:
- # Define the input and output components
- inputs = gr.components.File(label="上传文件")
- outputs = gr.components.File(label="下载文件")
-
- # Create the Gradio app
- app = gr.Interface(fn=generate_file, inputs=inputs, outputs=outputs, title="File upload and downloadable-file demo",
- description="Upload any file, as long as it fits in your machine's memory"
- )
-
- # Launch the app
- app.launch()
-if __name__=="__main__":
- main()
\ No newline at end of file
diff --git a/spaces/Yuliang/ECON/lib/net/Discriminator.py b/spaces/Yuliang/ECON/lib/net/Discriminator.py
deleted file mode 100644
index b47ef9fd05ef645950be61111d417638a57ae3c6..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/net/Discriminator.py
+++ /dev/null
@@ -1,521 +0,0 @@
-""" The code is based on https://github.com/apple/ml-gsn/ with adaption. """
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from lib.torch_utils.ops.native_ops import (
- FusedLeakyReLU,
- fused_leaky_relu,
- upfirdn2d,
-)
-
-
-class DiscriminatorHead(nn.Module):
- def __init__(self, in_channel, disc_stddev=False):
- super().__init__()
-
- self.disc_stddev = disc_stddev
- stddev_dim = 1 if disc_stddev else 0
-
- self.conv_stddev = ConvLayer2d(
- in_channel=in_channel + stddev_dim,
- out_channel=in_channel,
- kernel_size=3,
- activate=True
- )
-
- self.final_linear = nn.Sequential(
- nn.Flatten(),
- EqualLinear(in_channel=in_channel * 4 * 4, out_channel=in_channel, activate=True),
- EqualLinear(in_channel=in_channel, out_channel=1),
- )
-
- def cat_stddev(self, x, stddev_group=4, stddev_feat=1):
- perm = torch.randperm(len(x))
- inv_perm = torch.argsort(perm)
-
- batch, channel, height, width = x.shape
- x = x[perm] # shuffle inputs so that all views in a single trajectory don't get put together
-
- group = min(batch, stddev_group)
- stddev = x.view(group, -1, stddev_feat, channel // stddev_feat, height, width)
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
-
- stddev = stddev[inv_perm] # reorder inputs
- x = x[inv_perm]
-
- out = torch.cat([x, stddev], 1)
- return out
-
- def forward(self, x):
- if self.disc_stddev:
- x = self.cat_stddev(x)
- x = self.conv_stddev(x)
- out = self.final_linear(x)
- return out
-
-
-class ConvDecoder(nn.Module):
- def __init__(self, in_channel, out_channel, in_res, out_res):
- super().__init__()
-
- log_size_in = int(math.log(in_res, 2))
- log_size_out = int(math.log(out_res, 2))
-
- self.layers = []
- in_ch = in_channel
- for i in range(log_size_in, log_size_out):
- out_ch = in_ch // 2
- self.layers.append(
- ConvLayer2d(
- in_channel=in_ch,
- out_channel=out_ch,
- kernel_size=3,
- upsample=True,
- bias=True,
- activate=True
- )
- )
- in_ch = out_ch
-
- self.layers.append(
- ConvLayer2d(
- in_channel=in_ch, out_channel=out_channel, kernel_size=3, bias=True, activate=False
- )
- )
- self.layers = nn.Sequential(*self.layers)
-
- def forward(self, x):
- return self.layers(x)
-
-
-class StyleDiscriminator(nn.Module):
- def __init__(self, in_channel, in_res, ch_mul=64, ch_max=512, **kwargs):
- super().__init__()
-
- log_size_in = int(math.log(in_res, 2))
- log_size_out = int(math.log(4, 2))
-
- self.conv_in = ConvLayer2d(in_channel=in_channel, out_channel=ch_mul, kernel_size=3)
-
- # each resblock will half the resolution and double the number of features (until a maximum of ch_max)
- self.layers = []
- in_channels = ch_mul
- for i in range(log_size_in, log_size_out, -1):
- out_channels = int(min(in_channels * 2, ch_max))
- self.layers.append(
- ConvResBlock2d(in_channel=in_channels, out_channel=out_channels, downsample=True)
- )
- in_channels = out_channels
- self.layers = nn.Sequential(*self.layers)
-
- self.disc_out = DiscriminatorHead(in_channel=in_channels, disc_stddev=True)
-
- def forward(self, x):
- x = self.conv_in(x)
- x = self.layers(x)
- out = self.disc_out(x)
- return out
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Blur(nn.Module):
- """Blur layer.
-
- Applies a blur kernel to the input image using a finite impulse response filter. Blurring feature maps after
- convolutional upsampling or before convolutional downsampling helps produce models that are more robust to
- shifting inputs (https://richzhang.github.io/antialiased-cnns/). In the context of GANs, this can provide
- cleaner gradients, and therefore more stable training.
-
- Args:
- ----
- kernel: list, int
- A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
- pad: tuple, int
- A tuple of integers representing the number of rows/columns of padding to be added to the top/left and
- the bottom/right respectively.
- upsample_factor: int
- Upsample factor.
-
- """
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor**2)
-
- self.register_buffer("kernel", kernel)
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
- return out
-
-
-class Upsample(nn.Module):
- """Upsampling layer.
-
- Perform upsampling using a blur kernel.
-
- Args:
- ----
- kernel: list, int
- A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
- factor: int
- Upsampling factor.
-
- """
- def __init__(self, kernel=[1, 3, 3, 1], factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor**2)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad)
- return out
-
-
-class Downsample(nn.Module):
- """Downsampling layer.
-
- Perform downsampling using a blur kernel.
-
- Args:
- ----
- kernel: list, int
- A list of integers representing a blur kernel. For example: [1, 3, 3, 1].
- factor: int
- Downsampling factor.
-
- """
- def __init__(self, kernel=[1, 3, 3, 1], factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
- pad0 = (p + 1) // 2
- pad1 = p // 2
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad)
- return out
-
-
-class EqualLinear(nn.Module):
- """Linear layer with equalized learning rate.
-
- During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to
- prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU
- activation functions.
-
- Args:
- ----
- in_channel: int
- Input channels.
- out_channel: int
- Output channels.
- bias: bool
- Use bias term.
- bias_init: float
- Initial value for the bias.
- lr_mul: float
- Learning rate multiplier. By scaling weights and the bias we can proportionally scale the magnitude of
- the gradients, effectively increasing/decreasing the learning rate for this layer.
- activate: bool
- Apply leakyReLU activation.
-
- """
- def __init__(self, in_channel, out_channel, bias=True, bias_init=0, lr_mul=1, activate=False):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_channel, in_channel).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel).fill_(bias_init))
- else:
- self.bias = None
-
- self.activate = activate
- self.scale = (1 / math.sqrt(in_channel)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activate:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
- else:
- out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul)
- return out
-
- def __repr__(self):
- return f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})"
-
-
-class EqualConv2d(nn.Module):
- """2D convolution layer with equalized learning rate.
-
- During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to
- prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU
- activation functions.
-
- Args:
- ----
- in_channel: int
- Input channels.
- out_channel: int
- Output channels.
- kernel_size: int
- Kernel size.
- stride: int
- Stride of convolutional kernel across the input.
- padding: int
- Amount of zero padding applied to both sides of the input.
- bias: bool
- Use bias term.
-
- """
- def __init__(self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_channel, in_channel, kernel_size, kernel_size))
- self.scale = 1 / math.sqrt(in_channel * kernel_size**2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding
- )
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})"
- )
-
-
-class EqualConvTranspose2d(nn.Module):
- """2D transpose convolution layer with equalized learning rate.
-
- During the forward pass the weights are scaled by the inverse of the He constant (i.e. sqrt(in_dim)) to
- prevent vanishing gradients and accelerate training. This constant only works for ReLU or LeakyReLU
- activation functions.
-
- Args:
- ----
- in_channel: int
- Input channels.
- out_channel: int
- Output channels.
- kernel_size: int
- Kernel size.
- stride: int
- Stride of convolutional kernel across the input.
- padding: int
- Amount of zero padding applied to both sides of the input.
- output_padding: int
- Extra padding added to input to achieve the desired output size.
- bias: bool
- Use bias term.
-
- """
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- stride=1,
- padding=0,
- output_padding=0,
- bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(in_channel, out_channel, kernel_size, kernel_size))
- self.scale = 1 / math.sqrt(in_channel * kernel_size**2)
-
- self.stride = stride
- self.padding = padding
- self.output_padding = output_padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv_transpose2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- output_padding=self.output_padding,
- )
- return out
-
- def __repr__(self):
- return (
- f'{self.__class__.__name__}({self.weight.shape[0]}, {self.weight.shape[1]},'
- f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})'
- )
-
-
-class ConvLayer2d(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size=3,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- assert not (upsample and downsample), 'Cannot upsample and downsample simultaneously'
- layers = []
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- layers.append(
- EqualConvTranspose2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=0,
- stride=2,
- bias=bias and not activate
- )
- )
- layers.append(Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor))
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=0,
- stride=2,
- bias=bias and not activate
- )
- )
-
- if (not downsample) and (not upsample):
- padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=padding,
- stride=1,
- bias=bias and not activate
- )
- )
-
- if activate:
- layers.append(FusedLeakyReLU(out_channel, bias=bias))
-
- super().__init__(*layers)
-
-
-class ConvResBlock2d(nn.Module):
- """2D convolutional residual block with equalized learning rate.
-
- Residual block composed of 3x3 convolutions and leaky ReLUs.
-
- Args:
- ----
- in_channel: int
- Input channels.
- out_channel: int
- Output channels.
- upsample: bool
- Apply upsampling via strided convolution in the first conv.
- downsample: bool
- Apply downsampling via strided convolution in the second conv.
-
- """
- def __init__(self, in_channel, out_channel, upsample=False, downsample=False):
- super().__init__()
-
- assert not (upsample and downsample), 'Cannot upsample and downsample simultaneously'
- mid_ch = in_channel if downsample else out_channel
-
- self.conv1 = ConvLayer2d(in_channel, mid_ch, upsample=upsample, kernel_size=3)
- self.conv2 = ConvLayer2d(mid_ch, out_channel, downsample=downsample, kernel_size=3)
-
- if (in_channel != out_channel) or upsample or downsample:
- self.skip = ConvLayer2d(
- in_channel,
- out_channel,
- upsample=upsample,
- downsample=downsample,
- kernel_size=1,
- activate=False,
- bias=False,
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- if hasattr(self, 'skip'):
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
- else:
- out = (out + input) / math.sqrt(2)
- return out
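The deleted module above documents two recurring ideas: normalized 2D blur kernels built from a 1D kernel (Blur/make_kernel) and equalized learning-rate scaling (EqualLinear/EqualConv2d). The snippet below is a small, self-contained sketch of both, written against the docstrings only; it does not import the deleted module and is illustrative rather than a drop-in replacement.

```python
import math

def make_kernel_2d(k):
    # 1D kernel -> normalized 2D kernel via outer product, as the Blur docstring describes.
    k2 = [[a * b for b in k] for a in k]
    s = sum(sum(row) for row in k2)
    return [[v / s for v in row] for row in k2]

print(make_kernel_2d([1, 3, 3, 1]))  # 16 weights that sum to 1, heaviest in the centre

# Equalized learning rate: weights are stored as N(0, 1) draws and rescaled on every forward
# pass by 1/sqrt(fan_in), which is the `self.scale` factor used by EqualLinear/EqualConv2d.
fan_in = 512
print(1.0 / math.sqrt(fan_in))  # ~0.0442 for a 512-channel input
```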
diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh
deleted file mode 100644
index a8e11f6c73445e2e7855d7b62c2b8ebbb7236e9d..0000000000000000000000000000000000000000
--- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v2_2/model_download/yolov5_model_p5_all.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-cd ./yolov5
-
-# Download the YOLOv5 models
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l.pt
-wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x.pt
\ No newline at end of file
diff --git a/spaces/aaronb/Anything2Image/anything2image/api.py b/spaces/aaronb/Anything2Image/anything2image/api.py
deleted file mode 100644
index 396001ea898e8c5cba3a7117c0cf2a7893049ec6..0000000000000000000000000000000000000000
--- a/spaces/aaronb/Anything2Image/anything2image/api.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import soundfile as sf
-import torch
-import numpy as np
-from diffusers import StableUnCLIPImg2ImgPipeline
-from PIL import Image
-
-from . import imagebind
-
-
-class Anything2Image:
- def __init__(
- self,
- device = "cuda:0" if torch.cuda.is_available() else "cpu",
- imagebind_download_dir="checkpoints"
- ):
- self.pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=None if device == 'cpu' else torch.float16,
- ).to(device)
- self.model = imagebind.imagebind_huge(pretrained=True, download_dir=imagebind_download_dir).eval().to(device)
- self.device = device
-
- @torch.no_grad()
- def __call__(self, prompt=None, audio=None, image=None, text=None):
- device, model, pipe = self.device, self.model, self.pipe
-
- if audio is not None:
- sr, waveform = audio
- sf.write('tmp.wav', waveform, sr)
- embeddings = model.forward({
- imagebind.ModalityType.AUDIO: imagebind.load_and_transform_audio_data(['tmp.wav'], device),
- })
- audio_embeddings = embeddings[imagebind.ModalityType.AUDIO]
- if image is not None:
- Image.fromarray(image).save('tmp.png')
- embeddings = model.forward({
- imagebind.ModalityType.VISION: imagebind.load_and_transform_vision_data(['tmp.png'], device),
- }, normalize=False)
- image_embeddings = embeddings[imagebind.ModalityType.VISION]
-
- if audio is not None and image is not None:
- embeddings = (audio_embeddings + image_embeddings) / 2
- elif image is not None:
- embeddings = image_embeddings
- elif audio is not None:
- embeddings = audio_embeddings
- else:
- embeddings = None
-
- if text is not None and text != "":
- embeddings = self.model.forward({
- imagebind.ModalityType.TEXT: imagebind.load_and_transform_text([text], device),
- }, normalize=False)
- embeddings = embeddings[imagebind.ModalityType.TEXT]
-
- if embeddings is not None and self.device != 'cpu':
- embeddings = embeddings.half()
-
- images = pipe(prompt=prompt, image_embeds=embeddings).images
- return images[0]
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py
deleted file mode 100644
index efc8b4bb20c981f3db6df7eb52b3dc0744c94cc0..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/ade20k.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# dataset settings
-dataset_type = 'ADE20KDataset'
-data_root = 'data/ade/ADEChallengeData2016'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-crop_size = (512, 512)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', reduce_zero_label=True),
- dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 512),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py
deleted file mode 100644
index 501e1f7befa086f0b2f818531807411fc383d7bd..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ld_head.py
+++ /dev/null
@@ -1,261 +0,0 @@
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import (bbox2distance, bbox_overlaps, distance2bbox,
- multi_apply, reduce_mean)
-from ..builder import HEADS, build_loss
-from .gfl_head import GFLHead
-
-
-@HEADS.register_module()
-class LDHead(GFLHead):
- """Localization distillation Head. (Short description)
-
- It utilizes the learned bbox distributions to transfer the localization
- dark knowledge from teacher to student. Original paper: `Localization
- Distillation for Object Detection. `_
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- loss_ld (dict): Config of Localization Distillation Loss (LD),
- T is the temperature for distillation.
- """
-
- def __init__(self,
- num_classes,
- in_channels,
- loss_ld=dict(
- type='LocalizationDistillationLoss',
- loss_weight=0.25,
- T=10),
- **kwargs):
-
- super(LDHead, self).__init__(num_classes, in_channels, **kwargs)
- self.loss_ld = build_loss(loss_ld)
-
- def loss_single(self, anchors, cls_score, bbox_pred, labels, label_weights,
- bbox_targets, stride, soft_targets, num_total_samples):
- """Compute loss of a single scale level.
-
- Args:
- anchors (Tensor): Box reference for each scale level with shape
- (N, num_total_anchors, 4).
- cls_score (Tensor): Cls and quality joint scores for each scale
- level has shape (N, num_classes, H, W).
- bbox_pred (Tensor): Box distribution logits for each scale
- level with shape (N, 4*(n+1), H, W), n is max value of integral
- set.
- labels (Tensor): Labels of each anchors with shape
- (N, num_total_anchors).
- label_weights (Tensor): Label weights of each anchor with shape
- (N, num_total_anchors)
- bbox_targets (Tensor): BBox regression targets of each anchor with
- shape (N, num_total_anchors, 4).
- stride (tuple): Stride in this scale level.
- soft_targets (Tensor): Teacher's box distribution logits for each scale
- level with shape (N, 4*(n+1), H, W).
- num_total_samples (int): Number of positive samples that is
- reduced over all GPUs.
-
- Returns:
- dict[tuple, Tensor]: Loss components and weight targets.
- """
- assert stride[0] == stride[1], 'h stride is not equal to w stride!'
- anchors = anchors.reshape(-1, 4)
- cls_score = cls_score.permute(0, 2, 3,
- 1).reshape(-1, self.cls_out_channels)
- bbox_pred = bbox_pred.permute(0, 2, 3,
- 1).reshape(-1, 4 * (self.reg_max + 1))
- soft_targets = soft_targets.permute(0, 2, 3,
- 1).reshape(-1,
- 4 * (self.reg_max + 1))
-
- bbox_targets = bbox_targets.reshape(-1, 4)
- labels = labels.reshape(-1)
- label_weights = label_weights.reshape(-1)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = self.num_classes
- pos_inds = ((labels >= 0)
- & (labels < bg_class_ind)).nonzero().squeeze(1)
- score = label_weights.new_zeros(labels.shape)
-
- if len(pos_inds) > 0:
- pos_bbox_targets = bbox_targets[pos_inds]
- pos_bbox_pred = bbox_pred[pos_inds]
- pos_anchors = anchors[pos_inds]
- pos_anchor_centers = self.anchor_center(pos_anchors) / stride[0]
-
- weight_targets = cls_score.detach().sigmoid()
- weight_targets = weight_targets.max(dim=1)[0][pos_inds]
- pos_bbox_pred_corners = self.integral(pos_bbox_pred)
- pos_decode_bbox_pred = distance2bbox(pos_anchor_centers,
- pos_bbox_pred_corners)
- pos_decode_bbox_targets = pos_bbox_targets / stride[0]
- score[pos_inds] = bbox_overlaps(
- pos_decode_bbox_pred.detach(),
- pos_decode_bbox_targets,
- is_aligned=True)
- pred_corners = pos_bbox_pred.reshape(-1, self.reg_max + 1)
- pos_soft_targets = soft_targets[pos_inds]
- soft_corners = pos_soft_targets.reshape(-1, self.reg_max + 1)
-
- target_corners = bbox2distance(pos_anchor_centers,
- pos_decode_bbox_targets,
- self.reg_max).reshape(-1)
-
- # regression loss
- loss_bbox = self.loss_bbox(
- pos_decode_bbox_pred,
- pos_decode_bbox_targets,
- weight=weight_targets,
- avg_factor=1.0)
-
- # dfl loss
- loss_dfl = self.loss_dfl(
- pred_corners,
- target_corners,
- weight=weight_targets[:, None].expand(-1, 4).reshape(-1),
- avg_factor=4.0)
-
- # ld loss
- loss_ld = self.loss_ld(
- pred_corners,
- soft_corners,
- weight=weight_targets[:, None].expand(-1, 4).reshape(-1),
- avg_factor=4.0)
-
- else:
- loss_ld = bbox_pred.sum() * 0
- loss_bbox = bbox_pred.sum() * 0
- loss_dfl = bbox_pred.sum() * 0
- weight_targets = bbox_pred.new_tensor(0)
-
- # cls (qfl) loss
- loss_cls = self.loss_cls(
- cls_score, (labels, score),
- weight=label_weights,
- avg_factor=num_total_samples)
-
- return loss_cls, loss_bbox, loss_dfl, loss_ld, weight_targets.sum()
-
- def forward_train(self,
- x,
- out_teacher,
- img_metas,
- gt_bboxes,
- gt_labels=None,
- gt_bboxes_ignore=None,
- proposal_cfg=None,
- **kwargs):
- """
- Args:
- x (list[Tensor]): Features from FPN.
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes (Tensor): Ground truth bboxes of the image,
- shape (num_gts, 4).
- gt_labels (Tensor): Ground truth labels of each box,
- shape (num_gts,).
- gt_bboxes_ignore (Tensor): Ground truth bboxes to be
- ignored, shape (num_ignored_gts, 4).
- proposal_cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used
-
- Returns:
- tuple[dict, list]: The loss components and proposals of each image.
-
- - losses (dict[str, Tensor]): A dictionary of loss components.
- - proposal_list (list[Tensor]): Proposals of each image.
- """
- outs = self(x)
- soft_target = out_teacher[1]
- if gt_labels is None:
- loss_inputs = outs + (gt_bboxes, soft_target, img_metas)
- else:
- loss_inputs = outs + (gt_bboxes, gt_labels, soft_target, img_metas)
- losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
- if proposal_cfg is None:
- return losses
- else:
- proposal_list = self.get_bboxes(*outs, img_metas, cfg=proposal_cfg)
- return losses, proposal_list
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- soft_target,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Cls and quality scores for each scale
- level has shape (N, num_classes, H, W).
- bbox_preds (list[Tensor]): Box distribution logits for each scale
- level with shape (N, 4*(n+1), H, W), n is max value of integral
- set.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None): specify which bounding
- boxes can be ignored when computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels)
- if cls_reg_targets is None:
- return None
-
- (anchor_list, labels_list, label_weights_list, bbox_targets_list,
- bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets
-
- num_total_samples = reduce_mean(
- torch.tensor(num_total_pos, dtype=torch.float,
- device=device)).item()
- num_total_samples = max(num_total_samples, 1.0)
-
- losses_cls, losses_bbox, losses_dfl, losses_ld, \
- avg_factor = multi_apply(
- self.loss_single,
- anchor_list,
- cls_scores,
- bbox_preds,
- labels_list,
- label_weights_list,
- bbox_targets_list,
- self.anchor_generator.strides,
- soft_target,
- num_total_samples=num_total_samples)
-
- avg_factor = sum(avg_factor) + 1e-6
- avg_factor = reduce_mean(avg_factor).item()
- losses_bbox = [x / avg_factor for x in losses_bbox]
- losses_dfl = [x / avg_factor for x in losses_dfl]
- return dict(
- loss_cls=losses_cls,
- loss_bbox=losses_bbox,
- loss_dfl=losses_dfl,
- loss_ld=losses_ld)
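The loss_ld docstring above describes distilling the teacher's box-corner distributions into the student with a temperature T. The toy sketch below illustrates that idea only; it is not the mmdet LocalizationDistillationLoss implementation, and the tensor shapes are made up for the example.

```python
import torch
import torch.nn.functional as F

reg_max, T = 16, 10.0                         # T matches the default temperature above
student_logits = torch.randn(4, reg_max + 1)  # one distribution per box side (l, t, r, b)
teacher_logits = torch.randn(4, reg_max + 1)  # soft targets from the teacher head

p_teacher = F.softmax(teacher_logits / T, dim=1)
log_p_student = F.log_softmax(student_logits / T, dim=1)
ld = F.kl_div(log_p_student, p_teacher, reduction='batchmean') * (T * T)
print(ld)  # scalar distillation loss; scaling by T^2 keeps gradient magnitudes comparable
```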
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py
deleted file mode 100644
index a1cfb2ce8631d51e5c465f9bbc4164a37acc4782..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/feature_relay_head.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import kaiming_init
-from mmcv.runner import auto_fp16
-
-from mmdet.models.builder import HEADS
-
-
-@HEADS.register_module()
-class FeatureRelayHead(nn.Module):
- """Feature Relay Head used in `SCNet `_.
-
- Args:
- in_channels (int, optional): number of input channels. Default: 1024.
- out_conv_channels (int, optional): number of output channels of the
- relayed feature map. Default: 256.
- roi_feat_size (int, optional): roi feat size at box head. Default: 7.
- scale_factor (int, optional): scale factor to match roi feat size
- at mask head. Default: 2.
- """
-
- def __init__(self,
- in_channels=1024,
- out_conv_channels=256,
- roi_feat_size=7,
- scale_factor=2):
- super(FeatureRelayHead, self).__init__()
- assert isinstance(roi_feat_size, int)
-
- self.in_channels = in_channels
- self.out_conv_channels = out_conv_channels
- self.roi_feat_size = roi_feat_size
- self.out_channels = (roi_feat_size**2) * out_conv_channels
- self.scale_factor = scale_factor
- self.fp16_enabled = False
-
- self.fc = nn.Linear(self.in_channels, self.out_channels)
- self.upsample = nn.Upsample(
- scale_factor=scale_factor, mode='bilinear', align_corners=True)
-
- def init_weights(self):
- """Init weights for the head."""
- kaiming_init(self.fc)
-
- @auto_fp16()
- def forward(self, x):
- """Forward function."""
- N, in_C = x.shape
- if N > 0:
- out_C = self.out_conv_channels
- out_HW = self.roi_feat_size
- x = self.fc(x)
- x = x.reshape(N, out_C, out_HW, out_HW)
- x = self.upsample(x)
- return x
- return None
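As the docstring above explains, the head relays a flattened box-head feature back into a spatial map for the mask head. Below is a shape-only sketch with plain torch modules, using the same default sizes as the code above; the input is hypothetical and this is not the mmdet module itself.

```python
import torch
import torch.nn as nn

N, in_channels, out_C, out_HW, scale = 8, 1024, 256, 7, 2
fc = nn.Linear(in_channels, out_C * out_HW * out_HW)
up = nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=True)

x = torch.randn(N, in_channels)              # flattened box-head feature
x = fc(x).reshape(N, out_C, out_HW, out_HW)  # relay it back into a spatial map
x = up(x)
print(x.shape)                               # torch.Size([8, 256, 14, 14])
```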
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py
deleted file mode 100644
index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/video/optflow.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import cv2
-import numpy as np
-
-from annotator.uniformer.mmcv.arraymisc import dequantize, quantize
-from annotator.uniformer.mmcv.image import imread, imwrite
-from annotator.uniformer.mmcv.utils import is_str
-
-
-def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs):
- """Read an optical flow map.
-
- Args:
- flow_or_path (ndarray or str): A flow map or filepath.
- quantize (bool): whether to read quantized pair, if set to True,
- remaining args will be passed to :func:`dequantize_flow`.
- concat_axis (int): The axis that dx and dy are concatenated,
- can be either 0 or 1. Ignored if quantize is False.
-
- Returns:
- ndarray: Optical flow represented as a (h, w, 2) numpy array
- """
- if isinstance(flow_or_path, np.ndarray):
- if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2):
- raise ValueError(f'Invalid flow with shape {flow_or_path.shape}')
- return flow_or_path
- elif not is_str(flow_or_path):
- raise TypeError(f'"flow_or_path" must be a filename or numpy array, '
- f'not {type(flow_or_path)}')
-
- if not quantize:
- with open(flow_or_path, 'rb') as f:
- try:
- header = f.read(4).decode('utf-8')
- except Exception:
- raise IOError(f'Invalid flow file: {flow_or_path}')
- else:
- if header != 'PIEH':
- raise IOError(f'Invalid flow file: {flow_or_path}, '
- 'header does not contain PIEH')
-
- w = np.fromfile(f, np.int32, 1).squeeze()
- h = np.fromfile(f, np.int32, 1).squeeze()
- flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2))
- else:
- assert concat_axis in [0, 1]
- cat_flow = imread(flow_or_path, flag='unchanged')
- if cat_flow.ndim != 2:
- raise IOError(
- f'{flow_or_path} is not a valid quantized flow file, '
- f'its dimension is {cat_flow.ndim}.')
- assert cat_flow.shape[concat_axis] % 2 == 0
- dx, dy = np.split(cat_flow, 2, axis=concat_axis)
- flow = dequantize_flow(dx, dy, *args, **kwargs)
-
- return flow.astype(np.float32)
-
-
-def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs):
- """Write optical flow to file.
-
- If the flow is not quantized, it will be saved as a .flo file losslessly,
- otherwise as a jpeg image, which is lossy but much smaller. (dx and dy
- will be concatenated along concat_axis into a single image if quantize is True.)
-
- Args:
- flow (ndarray): (h, w, 2) array of optical flow.
- filename (str): Output filepath.
- quantize (bool): Whether to quantize the flow and save it to 2 jpeg
- images. If set to True, remaining args will be passed to
- :func:`quantize_flow`.
- concat_axis (int): The axis that dx and dy are concatenated,
- can be either 0 or 1. Ignored if quantize is False.
- """
- if not quantize:
- with open(filename, 'wb') as f:
- f.write('PIEH'.encode('utf-8'))
- np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f)
- flow = flow.astype(np.float32)
- flow.tofile(f)
- f.flush()
- else:
- assert concat_axis in [0, 1]
- dx, dy = quantize_flow(flow, *args, **kwargs)
- dxdy = np.concatenate((dx, dy), axis=concat_axis)
- imwrite(dxdy, filename)
-
-
-def quantize_flow(flow, max_val=0.02, norm=True):
- """Quantize flow to [0, 255].
-
- After this step, the size of flow will be much smaller, and can be
- dumped as jpeg images.
-
- Args:
- flow (ndarray): (h, w, 2) array of optical flow.
- max_val (float): Maximum value of flow, values beyond
- [-max_val, max_val] will be truncated.
- norm (bool): Whether to divide flow values by image width/height.
-
- Returns:
- tuple[ndarray]: Quantized dx and dy.
- """
- h, w, _ = flow.shape
- dx = flow[..., 0]
- dy = flow[..., 1]
- if norm:
- dx = dx / w # avoid inplace operations
- dy = dy / h
- # use 255 levels instead of 256 to make sure 0 is 0 after dequantization.
- flow_comps = [
- quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy]
- ]
- return tuple(flow_comps)
-
-
-def dequantize_flow(dx, dy, max_val=0.02, denorm=True):
- """Recover from quantized flow.
-
- Args:
- dx (ndarray): Quantized dx.
- dy (ndarray): Quantized dy.
- max_val (float): Maximum value used when quantizing.
- denorm (bool): Whether to multiply flow values with width/height.
-
- Returns:
- ndarray: Dequantized flow.
- """
- assert dx.shape == dy.shape
- assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1)
-
- dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]]
-
- if denorm:
- dx *= dx.shape[1]
- dy *= dx.shape[0]
- flow = np.dstack((dx, dy))
- return flow
-
-
-def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'):
- """Use flow to warp img.
-
- Args:
- img (ndarray, float or uint8): Image to be warped.
- flow (ndarray, float): Optical Flow.
- filling_value (int): The missing pixels will be set with filling_value.
- interpolate_mode (str): bilinear -> Bilinear Interpolation;
- nearest -> Nearest Neighbor.
-
- Returns:
- ndarray: Warped image with the same shape of img
- """
- warnings.warn('This function is just for prototyping and cannot '
- 'guarantee the computational efficiency.')
- assert flow.ndim == 3, 'Flow must be in 3D arrays.'
- height = flow.shape[0]
- width = flow.shape[1]
- channels = img.shape[2]
-
- output = np.ones(
- (height, width, channels), dtype=img.dtype) * filling_value
-
- grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2)
- dx = grid[:, :, 0] + flow[:, :, 1]
- dy = grid[:, :, 1] + flow[:, :, 0]
- sx = np.floor(dx).astype(int)
- sy = np.floor(dy).astype(int)
- valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1)
-
- if interpolate_mode == 'nearest':
- output[valid, :] = img[dx[valid].round().astype(int),
- dy[valid].round().astype(int), :]
- elif interpolate_mode == 'bilinear':
- # dirty workaround for integer positions
- eps_ = 1e-6
- dx, dy = dx + eps_, dy + eps_
- left_top_ = img[np.floor(dx[valid]).astype(int),
- np.floor(dy[valid]).astype(int), :] * (
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
- np.ceil(dy[valid]) - dy[valid])[:, None]
- left_down_ = img[np.ceil(dx[valid]).astype(int),
- np.floor(dy[valid]).astype(int), :] * (
- dx[valid] - np.floor(dx[valid]))[:, None] * (
- np.ceil(dy[valid]) - dy[valid])[:, None]
- right_top_ = img[np.floor(dx[valid]).astype(int),
- np.ceil(dy[valid]).astype(int), :] * (
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
- dy[valid] - np.floor(dy[valid]))[:, None]
- right_down_ = img[np.ceil(dx[valid]).astype(int),
- np.ceil(dy[valid]).astype(int), :] * (
- dx[valid] - np.floor(dx[valid]))[:, None] * (
- dy[valid] - np.floor(dy[valid]))[:, None]
- output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_
- else:
- raise NotImplementedError(
- 'We only support interpolation modes of nearest and bilinear, '
- f'but got {interpolate_mode}.')
- return output.astype(img.dtype)
-
-
-def flow_from_bytes(content):
- """Read dense optical flow from bytes.
-
- .. note::
- This optical flow loader works for the FlyingChairs, FlyingThings3D,
- Sintel and FlyingChairsOcc datasets, but cannot load the data from
- ChairsSDHom.
-
- Args:
- content (bytes): Optical flow bytes got from files or other streams.
-
- Returns:
- ndarray: Loaded optical flow with the shape (H, W, 2).
- """
-
- # header in first 4 bytes
- header = content[:4]
- if header.decode('utf-8') != 'PIEH':
- raise Exception('Flow file header does not contain PIEH')
- # width in second 4 bytes
- width = np.frombuffer(content[4:], np.int32, 1).squeeze()
- # height in third 4 bytes
- height = np.frombuffer(content[8:], np.int32, 1).squeeze()
- # after first 12 bytes, all bytes are flow
- flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape(
- (height, width, 2))
-
- return flow
-
-
-def sparse_flow_from_bytes(content):
- """Read the optical flow in KITTI datasets from bytes.
-
- This function is modified from the RAFT loader for the `KITTI datasets
- `_.
-
- Args:
- content (bytes): Optical flow bytes got from files or other streams.
-
- Returns:
- Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2)
- and flow valid mask with the shape (H, W).
- """ # nopa
-
- content = np.frombuffer(content, np.uint8)
- flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
- flow = flow[:, :, ::-1].astype(np.float32)
- # flow shape (H, W, 2) valid shape (H, W)
- flow, valid = flow[:, :, :2], flow[:, :, 2]
- flow = (flow - 2**15) / 64.0
- return flow, valid
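quantize_flow/dequantize_flow above describe a lossy 255-level encoding of flow that has first been normalized by image width/height. The numpy-only sketch below walks through that round trip; the quantize/dequantize helpers from mmcv.arraymisc are re-implemented inline for the example, so the numbers are illustrative rather than bit-exact with mmcv.

```python
import numpy as np

def quantize(arr, vmin, vmax, levels):
    # clip, then map [vmin, vmax) onto integer bins 0..levels-1
    arr = np.clip(arr, vmin, vmax - 1e-9)
    return ((arr - vmin) / (vmax - vmin) * levels).astype(np.uint8)

def dequantize(q, vmin, vmax, levels):
    # map bin centres back into [vmin, vmax]
    return (q.astype(np.float32) + 0.5) / levels * (vmax - vmin) + vmin

h, w, max_val = 4, 6, 0.02
flow = np.random.uniform(-0.05, 0.05, size=(h, w, 2)).astype(np.float32)
dx, dy = flow[..., 0] / w, flow[..., 1] / h   # norm=True: divide by image width/height
qdx = quantize(dx, -max_val, max_val, 255)
qdy = quantize(dy, -max_val, max_val, 255)
rdx = dequantize(qdx, -max_val, max_val, 255) * w
rdy = dequantize(qdy, -max_val, max_val, 255) * h
print(np.abs(np.dstack((rdx, rdy)) - flow).max())  # small, bounded by one quantization step
```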
diff --git a/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py b/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py
deleted file mode 100644
index c1dd484084e36ddbdfd38baef27a08040b2d7893..0000000000000000000000000000000000000000
--- a/spaces/abidismail/22h-vintedois-diffusion-v0-1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/22h/vintedois-diffusion-v0-1").launch()
\ No newline at end of file
diff --git a/spaces/abidlabs/quickdraw2/app.py b/spaces/abidlabs/quickdraw2/app.py
deleted file mode 100644
index 8437ee8f4e64645c43e30a37d6ebee396d5f0791..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/quickdraw2/app.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from pathlib import Path
-
-import torch
-import gradio as gr
-from torch import nn
-
-
-LABELS = Path('class_names.txt').read_text().splitlines()
-
-model = nn.Sequential(
- nn.Conv2d(1, 32, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Conv2d(32, 64, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Conv2d(64, 128, 3, padding='same'),
- nn.ReLU(),
- nn.MaxPool2d(2),
- nn.Flatten(),
- nn.Linear(1152, 256),
- nn.ReLU(),
- nn.Linear(256, len(LABELS)),
-)
-state_dict = torch.load('pytorch_model.bin', map_location='cpu')
-model.load_state_dict(state_dict, strict=False)
-model.eval()
-
-def predict(im):
- x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.
-
- with torch.no_grad():
- out = model(x)
-
- probabilities = torch.nn.functional.softmax(out[0], dim=0)
-
- values, indices = torch.topk(probabilities, 5)
-
- return {LABELS[i]: v.item() for i, v in zip(indices, values)}
-
-
-interface = gr.Interface(predict, inputs='sketchpad', outputs='label', live=True)
-interface.launch(debug=True)
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py
deleted file mode 100644
index 8a5785b6fdb21910a174252c5af2f05b40ece4a5..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/constants.py
+++ /dev/null
@@ -1,149 +0,0 @@
-DEFAULT_Z_NEAR = 0.05 # Near clipping plane, in meters
-DEFAULT_Z_FAR = 100.0 # Far clipping plane, in meters
-DEFAULT_SCENE_SCALE = 2.0 # Default scene scale
-MAX_N_LIGHTS = 4 # Maximum number of lights of each type allowed
-TARGET_OPEN_GL_MAJOR = 4 # Target OpenGL Major Version
-TARGET_OPEN_GL_MINOR = 1 # Target OpenGL Minor Version
-MIN_OPEN_GL_MAJOR = 3 # Minimum OpenGL Major Version
-MIN_OPEN_GL_MINOR = 3 # Minimum OpenGL Minor Version
-FLOAT_SZ = 4 # Byte size of GL float32
-UINT_SZ = 4 # Byte size of GL uint32
-SHADOW_TEX_SZ = 2048 # Width and Height of Shadow Textures
-TEXT_PADDING = 20 # Width of padding for rendering text (px)
-
-
-# Flags for render type
-class RenderFlags(object):
- """Flags for rendering in the scene.
-
- Combine them with the bitwise or. For example,
-
- >>> flags = OFFSCREEN | SHADOWS_DIRECTIONAL | VERTEX_NORMALS
-
- would result in an offscreen render with directional shadows and
- vertex normals enabled.
- """
- NONE = 0
- """Normal PBR Render."""
- DEPTH_ONLY = 1
- """Only render the depth buffer."""
- OFFSCREEN = 2
- """Render offscreen and return the depth and (optionally) color buffers."""
- FLIP_WIREFRAME = 4
- """Invert the status of wireframe rendering for each mesh."""
- ALL_WIREFRAME = 8
- """Render all meshes as wireframes."""
- ALL_SOLID = 16
- """Render all meshes as solids."""
- SHADOWS_DIRECTIONAL = 32
- """Render shadows for directional lights."""
- SHADOWS_POINT = 64
- """Render shadows for point lights."""
- SHADOWS_SPOT = 128
- """Render shadows for spot lights."""
- SHADOWS_ALL = 32 | 64 | 128
- """Render shadows for all lights."""
- VERTEX_NORMALS = 256
- """Render vertex normals."""
- FACE_NORMALS = 512
- """Render face normals."""
- SKIP_CULL_FACES = 1024
- """Do not cull back faces."""
- RGBA = 2048
- """Render the color buffer with the alpha channel enabled."""
- FLAT = 4096
- """Render the color buffer flat, with no lighting computations."""
- SEG = 8192
-
-
-class TextAlign:
- """Text alignment options for captions.
-
- Only use one at a time.
- """
- CENTER = 0
- """Center the text by width and height."""
- CENTER_LEFT = 1
- """Center the text by height and left-align it."""
- CENTER_RIGHT = 2
- """Center the text by height and right-align it."""
- BOTTOM_LEFT = 3
- """Put the text in the bottom-left corner."""
- BOTTOM_RIGHT = 4
- """Put the text in the bottom-right corner."""
- BOTTOM_CENTER = 5
- """Center the text by width and fix it to the bottom."""
- TOP_LEFT = 6
- """Put the text in the top-left corner."""
- TOP_RIGHT = 7
- """Put the text in the top-right corner."""
- TOP_CENTER = 8
- """Center the text by width and fix it to the top."""
-
-
-class GLTF(object):
- """Options for GL objects."""
- NEAREST = 9728
- """Nearest neighbor interpolation."""
- LINEAR = 9729
- """Linear interpolation."""
- NEAREST_MIPMAP_NEAREST = 9984
- """Nearest mipmapping."""
- LINEAR_MIPMAP_NEAREST = 9985
- """Linear mipmapping."""
- NEAREST_MIPMAP_LINEAR = 9986
- """Nearest mipmapping."""
- LINEAR_MIPMAP_LINEAR = 9987
- """Linear mipmapping."""
- CLAMP_TO_EDGE = 33071
- """Clamp to the edge of the texture."""
- MIRRORED_REPEAT = 33648
- """Mirror the texture."""
- REPEAT = 10497
- """Repeat the texture."""
- POINTS = 0
- """Render as points."""
- LINES = 1
- """Render as lines."""
- LINE_LOOP = 2
- """Render as a line loop."""
- LINE_STRIP = 3
- """Render as a line strip."""
- TRIANGLES = 4
- """Render as triangles."""
- TRIANGLE_STRIP = 5
- """Render as a triangle strip."""
- TRIANGLE_FAN = 6
- """Render as a triangle fan."""
-
-
-class BufFlags(object):
- POSITION = 0
- NORMAL = 1
- TANGENT = 2
- TEXCOORD_0 = 4
- TEXCOORD_1 = 8
- COLOR_0 = 16
- JOINTS_0 = 32
- WEIGHTS_0 = 64
-
-
-class TexFlags(object):
- NONE = 0
- NORMAL = 1
- OCCLUSION = 2
- EMISSIVE = 4
- BASE_COLOR = 8
- METALLIC_ROUGHNESS = 16
- DIFFUSE = 32
- SPECULAR_GLOSSINESS = 64
-
-
-class ProgramFlags:
- NONE = 0
- USE_MATERIAL = 1
- VERTEX_NORMALS = 2
- FACE_NORMALS = 4
-
-
-__all__ = ['RenderFlags', 'TextAlign', 'GLTF']
diff --git a/spaces/aicg/Moxxie-Proxy/Dockerfile b/spaces/aicg/Moxxie-Proxy/Dockerfile
deleted file mode 100644
index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000
--- a/spaces/aicg/Moxxie-Proxy/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/aiditi/nvidia_denoiser/denoise.py b/spaces/aiditi/nvidia_denoiser/denoise.py
deleted file mode 100644
index 99244db8f70f4c5fc6fe7d16fb7ffea98fcec4f7..0000000000000000000000000000000000000000
--- a/spaces/aiditi/nvidia_denoiser/denoise.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import os
-import argparse
-import json
-from tqdm import tqdm
-from copy import deepcopy
-
-import numpy as np
-import torch
-
-import random
-random.seed(0)
-torch.manual_seed(0)
-np.random.seed(0)
-
-from scipy.io.wavfile import write as wavwrite
-
-from dataset import load_CleanNoisyPairDataset
-from util import find_max_epoch, print_size, sampling
-from network import CleanUNet
-
-
-def denoise(output_directory, ckpt_iter, subset, dump=False):
- """
- Denoise audio
-
- Parameters:
- output_directory (str): save generated speeches to this path
- ckpt_iter (int or 'max'): the pretrained checkpoint to be loaded;
- automatically selects the maximum iteration if 'max' is selected
- subset (str): training, testing, validation
- dump (bool): whether save enhanced (denoised) audio
- """
-
- # setup local experiment path
- exp_path = train_config["exp_path"]
- print('exp_path:', exp_path)
-
- # load data
- loader_config = deepcopy(trainset_config)
- loader_config["crop_length_sec"] = 0
- dataloader = load_CleanNoisyPairDataset(
- **loader_config,
- subset=subset,
- batch_size=1,
- num_gpus=1
- )
-
- # predefine model
- net = CleanUNet(**network_config).cuda()
- print_size(net)
-
- # load checkpoint
- ckpt_directory = os.path.join(train_config["log"]["directory"], exp_path, 'checkpoint')
- if ckpt_iter == 'max':
- ckpt_iter = find_max_epoch(ckpt_directory)
- if ckpt_iter != 'pretrained':
- ckpt_iter = int(ckpt_iter)
- model_path = os.path.join(ckpt_directory, '{}.pkl'.format(ckpt_iter))
- checkpoint = torch.load(model_path, map_location='cpu')
- net.load_state_dict(checkpoint['model_state_dict'])
- net.eval()
-
- # get output directory ready
- if ckpt_iter == "pretrained":
- speech_directory = os.path.join(output_directory, exp_path, 'speech', ckpt_iter)
- else:
- speech_directory = os.path.join(output_directory, exp_path, 'speech', '{}k'.format(ckpt_iter//1000))
- if dump and not os.path.isdir(speech_directory):
- os.makedirs(speech_directory)
- os.chmod(speech_directory, 0o775)
- print("speech_directory: ", speech_directory, flush=True)
-
- # inference
- all_generated_audio = []
- all_clean_audio = []
- sortkey = lambda name: '_'.join(name.split('/')[-1].split('_')[1:])
- for clean_audio, noisy_audio, fileid in tqdm(dataloader):
- filename = sortkey(fileid[0][0])
-
- noisy_audio = noisy_audio.cuda()
- LENGTH = len(noisy_audio[0].squeeze())
- generated_audio = sampling(net, noisy_audio)
-
- if dump:
- wavwrite(os.path.join(speech_directory, 'enhanced_{}'.format(filename)),
- trainset_config["sample_rate"],
- generated_audio[0].squeeze().cpu().numpy())
- else:
- all_clean_audio.append(clean_audio[0].squeeze().cpu().numpy())
- all_generated_audio.append(generated_audio[0].squeeze().cpu().numpy())
-
- return all_clean_audio, all_generated_audio
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument('-c', '--config', type=str, default='config.json',
- help='JSON file for configuration')
- parser.add_argument('-ckpt_iter', '--ckpt_iter', default='max',
- help='Which checkpoint to use; assign a number or "max" or "pretrained"')
- parser.add_argument('-subset', '--subset', type=str, choices=['training', 'testing', 'validation'],
- default='testing', help='subset for denoising')
- args = parser.parse_args()
-
- # Parse configs. Globals nicer in this case
- with open(args.config) as f:
- data = f.read()
- config = json.loads(data)
- gen_config = config["gen_config"]
- global network_config
- network_config = config["network_config"] # to define wavenet
- global train_config
- train_config = config["train_config"] # train config
- global trainset_config
- trainset_config = config["trainset_config"] # to read trainset configurations
-
- torch.backends.cudnn.enabled = True
- torch.backends.cudnn.benchmark = True
-
- if args.subset == "testing":
- denoise(gen_config["output_directory"],
- subset=args.subset,
- ckpt_iter=args.ckpt_iter,
- dump=True)
\ No newline at end of file
diff --git a/spaces/aijack/jojo/e4e/criteria/lpips/networks.py b/spaces/aijack/jojo/e4e/criteria/lpips/networks.py
deleted file mode 100644
index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000
--- a/spaces/aijack/jojo/e4e/criteria/lpips/networks.py
+++ /dev/null
@@ -1,96 +0,0 @@
-from typing import Sequence
-
-from itertools import chain
-
-import torch
-import torch.nn as nn
-from torchvision import models
-
-from criteria.lpips.utils import normalize_activation
-
-
-def get_network(net_type: str):
- if net_type == 'alex':
- return AlexNet()
- elif net_type == 'squeeze':
- return SqueezeNet()
- elif net_type == 'vgg':
- return VGG16()
- else:
- raise NotImplementedError('choose net_type from [alex, squeeze, vgg].')
-
-
-class LinLayers(nn.ModuleList):
- def __init__(self, n_channels_list: Sequence[int]):
- super(LinLayers, self).__init__([
- nn.Sequential(
- nn.Identity(),
- nn.Conv2d(nc, 1, 1, 1, 0, bias=False)
- ) for nc in n_channels_list
- ])
-
- for param in self.parameters():
- param.requires_grad = False
-
-
-class BaseNet(nn.Module):
- def __init__(self):
- super(BaseNet, self).__init__()
-
- # register buffer
- self.register_buffer(
- 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None])
- self.register_buffer(
- 'std', torch.Tensor([.458, .448, .450])[None, :, None, None])
-
- def set_requires_grad(self, state: bool):
- for param in chain(self.parameters(), self.buffers()):
- param.requires_grad = state
-
- def z_score(self, x: torch.Tensor):
- return (x - self.mean) / self.std
-
- def forward(self, x: torch.Tensor):
- x = self.z_score(x)
-
- output = []
- for i, (_, layer) in enumerate(self.layers._modules.items(), 1):
- x = layer(x)
- if i in self.target_layers:
- output.append(normalize_activation(x))
- if len(output) == len(self.target_layers):
- break
- return output
-
-
-class SqueezeNet(BaseNet):
- def __init__(self):
- super(SqueezeNet, self).__init__()
-
- self.layers = models.squeezenet1_1(True).features
- self.target_layers = [2, 5, 8, 10, 11, 12, 13]
- self.n_channels_list = [64, 128, 256, 384, 384, 512, 512]
-
- self.set_requires_grad(False)
-
-
-class AlexNet(BaseNet):
- def __init__(self):
- super(AlexNet, self).__init__()
-
- self.layers = models.alexnet(True).features
- self.target_layers = [2, 5, 8, 10, 12]
- self.n_channels_list = [64, 192, 384, 256, 256]
-
- self.set_requires_grad(False)
-
-
-class VGG16(BaseNet):
- def __init__(self):
- super(VGG16, self).__init__()
-
- self.layers = models.vgg16(True).features
- self.target_layers = [4, 9, 16, 23, 30]
- self.n_channels_list = [64, 128, 256, 512, 512]
-
- self.set_requires_grad(False)
\ No newline at end of file
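A hedged smoke-test sketch (not part of the deleted file) for the LPIPS feature extractors above; it assumes torchvision can download the pretrained VGG-16 weights.

import torch

# Build the VGG16 backbone via get_network() and run a random batch through it.
net = get_network('vgg')                   # or 'alex' / 'squeeze'
x = torch.rand(1, 3, 256, 256) * 2 - 1     # LPIPS-style inputs roughly in [-1, 1]
with torch.no_grad():
    feats = net(x)                         # one normalized activation per target layer
print([f.shape for f in feats])            # five feature maps for VGG16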
diff --git a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py b/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py
deleted file mode 100644
index f2cb8be6c8d009c2509a13b52437c7dec3b3ec0a..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py
+++ /dev/null
@@ -1,567 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/youtubevos/cocoapi
-
-__author__ = 'ychfan'
-
-import numpy as np
-import datetime
-import time
-from collections import defaultdict
-from pycocotools import mask as maskUtils
-import copy
-
-class YTVOSeval:
- # Interface for evaluating video instance segmentation on the YouTubeVIS dataset.
- #
- # The usage for YTVOSeval is as follows:
- # cocoGt=..., cocoDt=... # load dataset and results
- # E = YTVOSeval(cocoGt,cocoDt); # initialize YTVOSeval object
- # E.params.recThrs = ...; # set parameters as desired
- # E.evaluate(); # run per image evaluation
- # E.accumulate(); # accumulate per image results
- # E.summarize(); # display summary metrics of results
- # For example usage see evalDemo.m and http://mscoco.org/.
- #
- # The evaluation parameters are as follows (defaults in brackets):
- # imgIds - [all] N img ids to use for evaluation
- # catIds - [all] K cat ids to use for evaluation
- # iouThrs - [.5:.05:.95] T=10 IoU thresholds for evaluation
- # recThrs - [0:.01:1] R=101 recall thresholds for evaluation
- # areaRng - [...] A=4 object area ranges for evaluation
- # maxDets - [1 10 100] M=3 thresholds on max detections per image
- # iouType - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints'
- # iouType replaced the now DEPRECATED useSegm parameter.
- # useCats - [1] if true use category labels for evaluation
- # Note: if useCats=0 category labels are ignored as in proposal scoring.
- # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified.
- #
- # evaluate(): evaluates detections on every image and every category and
- # concats the results into the "evalImgs" with fields:
- # dtIds - [1xD] id for each of the D detections (dt)
- # gtIds - [1xG] id for each of the G ground truths (gt)
- # dtMatches - [TxD] matching gt id at each IoU or 0
- # gtMatches - [TxG] matching dt id at each IoU or 0
- # dtScores - [1xD] confidence of each dt
- # gtIgnore - [1xG] ignore flag for each gt
- # dtIgnore - [TxD] ignore flag for each dt at each IoU
- #
- # accumulate(): accumulates the per-image, per-category evaluation
- # results in "evalImgs" into the dictionary "eval" with fields:
- # params - parameters used for evaluation
- # date - date evaluation was performed
- # counts - [T,R,K,A,M] parameter dimensions (see above)
- # precision - [TxRxKxAxM] precision for every evaluation setting
- # recall - [TxKxAxM] max recall for every evaluation setting
- # Note: precision and recall==-1 for settings with no gt objects.
- #
- # See also coco, mask, pycocoDemo, pycocoEvalDemo
- #
- # Microsoft COCO Toolbox. version 2.0
- # Data, paper, and tutorials available at: http://mscoco.org/
- # Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
- # Licensed under the Simplified BSD License [see coco/license.txt]
- def __init__(self, cocoGt=None, cocoDt=None, iouType='segm'):
- '''
- Initialize CocoEval using coco APIs for gt and dt
- :param cocoGt: coco object with ground truth annotations
- :param cocoDt: coco object with detection results
- :return: None
- '''
- if not iouType:
- print('iouType not specified. use default iouType segm')
- self.cocoGt = cocoGt # ground truth COCO API
- self.cocoDt = cocoDt # detections COCO API
- self.params = {} # evaluation parameters
- self.evalVids = defaultdict(list) # per-image per-category evaluation results [KxAxI] elements
- self.eval = {} # accumulated evaluation results
- self._gts = defaultdict(list) # gt for evaluation
- self._dts = defaultdict(list) # dt for evaluation
- self.params = Params(iouType=iouType) # parameters
- self._paramsEval = {} # parameters for evaluation
- self.stats = [] # result summarization
- self.ious = {} # ious between all gts and dts
- if not cocoGt is None:
- self.params.vidIds = sorted(cocoGt.getVidIds())
- self.params.catIds = sorted(cocoGt.getCatIds())
-
-
- def _prepare(self):
- '''
- Prepare ._gts and ._dts for evaluation based on params
- :return: None
- '''
- def _toMask(anns, coco):
- # modify ann['segmentation'] by reference
- for ann in anns:
- for i, a in enumerate(ann['segmentations']):
- if a:
- rle = coco.annToRLE(ann, i)
- ann['segmentations'][i] = rle
- l = [a for a in ann['areas'] if a]
- if len(l)==0:
- ann['avg_area'] = 0
- else:
- ann['avg_area'] = np.array(l).mean()
- p = self.params
- if p.useCats:
- gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(vidIds=p.vidIds, catIds=p.catIds))
- dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(vidIds=p.vidIds, catIds=p.catIds))
- else:
- gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(vidIds=p.vidIds))
- dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(vidIds=p.vidIds))
-
- # convert ground truth to mask if iouType == 'segm'
- if p.iouType == 'segm':
- _toMask(gts, self.cocoGt)
- _toMask(dts, self.cocoDt)
- # set ignore flag
- for gt in gts:
- gt['ignore'] = gt['ignore'] if 'ignore' in gt else 0
- gt['ignore'] = 'iscrowd' in gt and gt['iscrowd']
- if p.iouType == 'keypoints':
- gt['ignore'] = (gt['num_keypoints'] == 0) or gt['ignore']
- self._gts = defaultdict(list) # gt for evaluation
- self._dts = defaultdict(list) # dt for evaluation
- for gt in gts:
- self._gts[gt['video_id'], gt['category_id']].append(gt)
- for dt in dts:
- self._dts[dt['video_id'], dt['category_id']].append(dt)
- self.evalVids = defaultdict(list) # per-image per-category evaluation results
- self.eval = {} # accumulated evaluation results
-
- def evaluate(self):
- '''
- Run per image evaluation on given images and store results (a list of dict) in self.evalVids
- :return: None
- '''
- tic = time.time()
- print('Running per image evaluation...')
- p = self.params
- # add backward compatibility if useSegm is specified in params
- if not p.useSegm is None:
- p.iouType = 'segm' if p.useSegm == 1 else 'bbox'
- print('useSegm (deprecated) is not None. Running {} evaluation'.format(p.iouType))
- print('Evaluate annotation type *{}*'.format(p.iouType))
- p.vidIds = list(np.unique(p.vidIds))
- if p.useCats:
- p.catIds = list(np.unique(p.catIds))
- p.maxDets = sorted(p.maxDets)
- self.params=p
-
- self._prepare()
- # loop through images, area range, max detection number
- catIds = p.catIds if p.useCats else [-1]
-
- if p.iouType == 'segm' or p.iouType == 'bbox':
- computeIoU = self.computeIoU
- elif p.iouType == 'keypoints':
- computeIoU = self.computeOks
- self.ious = {(vidId, catId): computeIoU(vidId, catId) \
- for vidId in p.vidIds
- for catId in catIds}
-
- evaluateVid = self.evaluateVid
- maxDet = p.maxDets[-1]
-
-
- self.evalImgs = [evaluateVid(vidId, catId, areaRng, maxDet)
- for catId in catIds
- for areaRng in p.areaRng
- for vidId in p.vidIds
- ]
- self._paramsEval = copy.deepcopy(self.params)
- toc = time.time()
- print('DONE (t={:0.2f}s).'.format(toc-tic))
-
- def computeIoU(self, vidId, catId):
- p = self.params
- if p.useCats:
- gt = self._gts[vidId,catId]
- dt = self._dts[vidId,catId]
- else:
- gt = [_ for cId in p.catIds for _ in self._gts[vidId,cId]]
- dt = [_ for cId in p.catIds for _ in self._dts[vidId,cId]]
- if len(gt) == 0 and len(dt) ==0:
- return []
- inds = np.argsort([-d['score'] for d in dt], kind='mergesort')
- dt = [dt[i] for i in inds]
- if len(dt) > p.maxDets[-1]:
- dt=dt[0:p.maxDets[-1]]
-
- if p.iouType == 'segm':
- g = [g['segmentations'] for g in gt]
- d = [d['segmentations'] for d in dt]
- elif p.iouType == 'bbox':
- g = [g['bboxes'] for g in gt]
- d = [d['bboxes'] for d in dt]
- else:
- raise Exception('unknown iouType for iou computation')
-
- # compute iou between each dt and gt region
- iscrowd = [int(o['iscrowd']) for o in gt]
- #ious = maskUtils.iou(d,g,iscrowd)
- def iou_seq(d_seq, g_seq):
- i = .0
- u = .0
- for d, g in zip(d_seq, g_seq):
- if d and g:
- i += maskUtils.area(maskUtils.merge([d, g], True))
- u += maskUtils.area(maskUtils.merge([d, g], False))
- elif not d and g:
- u += maskUtils.area(g)
- elif d and not g:
- u += maskUtils.area(d)
- if not u > .0:
- print("Mask sizes in video {} and category {} may not match!".format(vidId, catId))
- iou = i / u if u > .0 else .0
- return iou
- ious = np.zeros([len(d), len(g)])
- for i, j in np.ndindex(ious.shape):
- ious[i, j] = iou_seq(d[i], g[j])
- #print(vidId, catId, ious.shape, ious)
- return ious
-
- def computeOks(self, imgId, catId):
- p = self.params
- # dimension here should be Nxm
- gts = self._gts[imgId, catId]
- dts = self._dts[imgId, catId]
- inds = np.argsort([-d['score'] for d in dts], kind='mergesort')
- dts = [dts[i] for i in inds]
- if len(dts) > p.maxDets[-1]:
- dts = dts[0:p.maxDets[-1]]
- # if len(gts) == 0 and len(dts) == 0:
- if len(gts) == 0 or len(dts) == 0:
- return []
- ious = np.zeros((len(dts), len(gts)))
- sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62, 1.07, 1.07, .87, .87, .89, .89])/10.0
- vars = (sigmas * 2)**2
- k = len(sigmas)
- # compute oks between each detection and ground truth object
- for j, gt in enumerate(gts):
- # create bounds for ignore regions (double the gt bbox)
- g = np.array(gt['keypoints'])
- xg = g[0::3]; yg = g[1::3]; vg = g[2::3]
- k1 = np.count_nonzero(vg > 0)
- bb = gt['bbox']
- x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2
- y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2
- for i, dt in enumerate(dts):
- d = np.array(dt['keypoints'])
- xd = d[0::3]; yd = d[1::3]
- if k1>0:
- # measure the per-keypoint distance if keypoints visible
- dx = xd - xg
- dy = yd - yg
- else:
- # measure minimum distance to keypoints in (x0,y0) & (x1,y1)
- z = np.zeros((k))
- dx = np.max((z, x0-xd),axis=0)+np.max((z, xd-x1),axis=0)
- dy = np.max((z, y0-yd),axis=0)+np.max((z, yd-y1),axis=0)
- e = (dx**2 + dy**2) / vars / (gt['avg_area']+np.spacing(1)) / 2
- if k1 > 0:
- e=e[vg > 0]
- ious[i, j] = np.sum(np.exp(-e)) / e.shape[0]
- return ious
-
- def evaluateVid(self, vidId, catId, aRng, maxDet):
- '''
- perform evaluation for single category and image
- :return: dict (single image results)
- '''
- p = self.params
- if p.useCats:
- gt = self._gts[vidId,catId]
- dt = self._dts[vidId,catId]
- else:
- gt = [_ for cId in p.catIds for _ in self._gts[vidId,cId]]
- dt = [_ for cId in p.catIds for _ in self._dts[vidId,cId]]
- if len(gt) == 0 and len(dt) ==0:
- return None
-
- for g in gt:
- if g['ignore'] or (g['avg_area']<aRng[0] or g['avg_area']>aRng[1]):
- g['_ignore'] = 1
- else:
- g['_ignore'] = 0
-
- # sort dt highest score first, sort gt ignore last
- gtind = np.argsort([g['_ignore'] for g in gt], kind='mergesort')
- gt = [gt[i] for i in gtind]
- dtind = np.argsort([-d['score'] for d in dt], kind='mergesort')
- dt = [dt[i] for i in dtind[0:maxDet]]
- iscrowd = [int(o['iscrowd']) for o in gt]
- # load computed ious
- ious = self.ious[vidId, catId][:, gtind] if len(self.ious[vidId, catId]) > 0 else self.ious[vidId, catId]
-
- T = len(p.iouThrs)
- G = len(gt)
- D = len(dt)
- gtm = np.zeros((T,G))
- dtm = np.zeros((T,D))
- gtIg = np.array([g['_ignore'] for g in gt])
- dtIg = np.zeros((T,D))
- if not len(ious)==0:
- for tind, t in enumerate(p.iouThrs):
- for dind, d in enumerate(dt):
- # information about best match so far (m=-1 -> unmatched)
- iou = min([t,1-1e-10])
- m = -1
- for gind, g in enumerate(gt):
- # if this gt already matched, and not a crowd, continue
- if gtm[tind,gind]>0 and not iscrowd[gind]:
- continue
- # if dt matched to reg gt, and on ignore gt, stop
- if m>-1 and gtIg[m]==0 and gtIg[gind]==1:
- break
- # continue to next gt unless better match made
- if ious[dind,gind] < iou:
- continue
- # if match successful and best so far, store appropriately
- iou=ious[dind,gind]
- m=gind
- # if match made store id of match for both dt and gt
- if m ==-1:
- continue
- dtIg[tind,dind] = gtIg[m]
- dtm[tind,dind] = gt[m]['id']
- gtm[tind,m] = d['id']
- # set unmatched detections outside of area range to ignore
- a = np.array([d['avg_area']<aRng[0] or d['avg_area']>aRng[1] for d in dt]).reshape((1, len(dt)))
- dtIg = np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0)))
- # store results for given image and category
- return {
- 'video_id': vidId,
- 'category_id': catId,
- 'aRng': aRng,
- 'maxDet': maxDet,
- 'dtIds': [d['id'] for d in dt],
- 'gtIds': [g['id'] for g in gt],
- 'dtMatches': dtm,
- 'gtMatches': gtm,
- 'dtScores': [d['score'] for d in dt],
- 'gtIgnore': gtIg,
- 'dtIgnore': dtIg,
- }
-
- def accumulate(self, p = None):
- '''
- Accumulate per image evaluation results and store the result in self.eval
- :param p: input params for evaluation
- :return: None
- '''
- print('Accumulating evaluation results...')
- tic = time.time()
- if not self.evalImgs:
- print('Please run evaluate() first')
- # allows input customized parameters
- if p is None:
- p = self.params
- p.catIds = p.catIds if p.useCats == 1 else [-1]
- T = len(p.iouThrs)
- R = len(p.recThrs)
- K = len(p.catIds) if p.useCats else 1
- A = len(p.areaRng)
- M = len(p.maxDets)
- precision = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories
- recall = -np.ones((T,K,A,M))
- scores = -np.ones((T,R,K,A,M))
-
- # create dictionary for future indexing
- _pe = self._paramsEval
- catIds = _pe.catIds if _pe.useCats else [-1]
- setK = set(catIds)
- setA = set(map(tuple, _pe.areaRng))
- setM = set(_pe.maxDets)
- setI = set(_pe.vidIds)
- # get inds to evaluate
- k_list = [n for n, k in enumerate(p.catIds) if k in setK]
- m_list = [m for n, m in enumerate(p.maxDets) if m in setM]
- a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]
- i_list = [n for n, i in enumerate(p.vidIds) if i in setI]
- I0 = len(_pe.vidIds)
- A0 = len(_pe.areaRng)
- # retrieve E at each category, area range, and max number of detections
- for k, k0 in enumerate(k_list):
- Nk = k0*A0*I0
- for a, a0 in enumerate(a_list):
- Na = a0*I0
- for m, maxDet in enumerate(m_list):
- E = [self.evalImgs[Nk + Na + i] for i in i_list]
- E = [e for e in E if not e is None]
- if len(E) == 0:
- continue
- dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])
-
- # different sorting method generates slightly different results.
- # mergesort is used to be consistent as Matlab implementation.
- inds = np.argsort(-dtScores, kind='mergesort')
- dtScoresSorted = dtScores[inds]
-
- dtm = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]
- dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet] for e in E], axis=1)[:,inds]
- gtIg = np.concatenate([e['gtIgnore'] for e in E])
- npig = np.count_nonzero(gtIg==0 )
- if npig == 0:
- continue
- tps = np.logical_and( dtm, np.logical_not(dtIg) )
- fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) )
-
- tp_sum = np.cumsum(tps, axis=1).astype(dtype=float)  # np.float is removed in NumPy >= 1.24
- fp_sum = np.cumsum(fps, axis=1).astype(dtype=float)
- for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
- tp = np.array(tp)
- fp = np.array(fp)
- nd = len(tp)
- rc = tp / npig
- pr = tp / (fp+tp+np.spacing(1))
- q = np.zeros((R,))
- ss = np.zeros((R,))
-
- if nd:
- recall[t,k,a,m] = rc[-1]
- else:
- recall[t,k,a,m] = 0
-
- # numpy is slow without cython optimization for accessing elements,
- # so converting to python lists gives a significant speed improvement
- pr = pr.tolist(); q = q.tolist()
-
- for i in range(nd-1, 0, -1):
- if pr[i] > pr[i-1]:
- pr[i-1] = pr[i]
-
- inds = np.searchsorted(rc, p.recThrs, side='left')
- try:
- for ri, pi in enumerate(inds):
- q[ri] = pr[pi]
- ss[ri] = dtScoresSorted[pi]
- except:
- pass
- precision[t,:,k,a,m] = np.array(q)
- scores[t,:,k,a,m] = np.array(ss)
- self.eval = {
- 'params': p,
- 'counts': [T, R, K, A, M],
- 'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
- 'precision': precision,
- 'recall': recall,
- 'scores': scores,
- }
- toc = time.time()
- print('DONE (t={:0.2f}s).'.format( toc-tic))
-
- def summarize(self):
- '''
- Compute and display summary metrics for evaluation results.
- Note this function can *only* be applied on the default parameter setting
- '''
- def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ):
- p = self.params
- iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'
- titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
- typeStr = '(AP)' if ap==1 else '(AR)'
- iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \
- if iouThr is None else '{:0.2f}'.format(iouThr)
-
- aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
- mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
- if ap == 1:
- # dimension of precision: [TxRxKxAxM]
- s = self.eval['precision']
- # IoU
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:,:,:,aind,mind]
- else:
- # dimension of recall: [TxKxAxM]
- s = self.eval['recall']
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:,:,aind,mind]
- if len(s[s>-1])==0:
- mean_s = -1
- else:
- mean_s = np.mean(s[s>-1])
- print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
- return mean_s
- def _summarizeDets():
- stats = np.zeros((12,))
- stats[0] = _summarize(1)
- stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
- stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])
- stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])
- stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])
- stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])
- stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
- stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
- stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
- stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])
- stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])
- stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])
- return stats
- def _summarizeKps():
- stats = np.zeros((10,))
- stats[0] = _summarize(1, maxDets=20)
- stats[1] = _summarize(1, maxDets=20, iouThr=.5)
- stats[2] = _summarize(1, maxDets=20, iouThr=.75)
- stats[3] = _summarize(1, maxDets=20, areaRng='medium')
- stats[4] = _summarize(1, maxDets=20, areaRng='large')
- stats[5] = _summarize(0, maxDets=20)
- stats[6] = _summarize(0, maxDets=20, iouThr=.5)
- stats[7] = _summarize(0, maxDets=20, iouThr=.75)
- stats[8] = _summarize(0, maxDets=20, areaRng='medium')
- stats[9] = _summarize(0, maxDets=20, areaRng='large')
- return stats
- if not self.eval:
- raise Exception('Please run accumulate() first')
- iouType = self.params.iouType
- if iouType == 'segm' or iouType == 'bbox':
- summarize = _summarizeDets
- elif iouType == 'keypoints':
- summarize = _summarizeKps
- self.stats = summarize()
-
- def __str__(self):
- self.summarize()
-
-class Params:
- '''
- Params for coco evaluation api
- '''
- def setDetParams(self):
- self.vidIds = []
- self.catIds = []
- # np.arange causes trouble. the data point on arange is slightly larger than the true value
- #self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)
- #self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True)
- self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
- self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
- self.maxDets = [1, 10, 100]
- self.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 128 ** 2], [ 128 ** 2, 256 ** 2], [256 ** 2, 1e5 ** 2]]
- self.areaRngLbl = ['all', 'small', 'medium', 'large']
- self.useCats = 1
-
- def setKpParams(self):
- self.vidIds = []
- self.catIds = []
- # np.arange causes trouble. the data point on arange is slightly larger than the true value
- self.iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
- self.recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
- self.maxDets = [20]
- self.areaRng = [[0 ** 2, 1e5 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]
- self.areaRngLbl = ['all', 'medium', 'large']
- self.useCats = 1
-
- def __init__(self, iouType='segm'):
- if iouType == 'segm' or iouType == 'bbox':
- self.setDetParams()
- elif iouType == 'keypoints':
- self.setKpParams()
- else:
- raise Exception('iouType not supported')
- self.iouType = iouType
- # useSegm is deprecated
- self.useSegm = None
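For reference, a minimal usage sketch (not part of the deleted file) of the evaluation flow that the class comment above describes; ytvos_gt and ytvos_dt stand in for YTVOS (COCO-style) API objects holding ground truth and detection results.

# Assumed inputs: ytvos_gt / ytvos_dt loaded through the YTVOS API.
E = YTVOSeval(ytvos_gt, ytvos_dt, iouType='segm')
E.params.vidIds = sorted(ytvos_gt.getVidIds())   # all videos
E.params.catIds = sorted(ytvos_gt.getCatIds())   # all categories
E.evaluate()     # per-video, per-category matching at each IoU threshold
E.accumulate()   # build the precision/recall arrays in E.eval
E.summarize()    # print the AP/AR table and fill E.stats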
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod
deleted file mode 100644
index 62c276272a8483b0bfc2966ba7a990ae96175363..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/NamedNodeMap.pod
+++ /dev/null
@@ -1,130 +0,0 @@
-=head1 NAME
-
-XML::DOM::NamedNodeMap - A hash table interface for XML::DOM
-
-=head1 DESCRIPTION
-
-Objects implementing the NamedNodeMap interface are used to represent
-collections of nodes that can be accessed by name. Note that
-NamedNodeMap does not inherit from NodeList; NamedNodeMaps are not
-maintained in any particular order. Objects contained in an object
-implementing NamedNodeMap may also be accessed by an ordinal index, but
-this is simply to allow convenient enumeration of the contents of a
-NamedNodeMap, and does not imply that the DOM specifies an order to
-these Nodes.
-
-Note that in this implementation, the objects added to a NamedNodeMap
-are kept in order.
-
-=head2 METHODS
-
-=over 4
-
-=item getNamedItem (name)
-
-Retrieves a node specified by name.
-
-Return Value: A Node (of any type) with the specified name, or undef if
-the specified name did not identify any node in the map.
-
-=item setNamedItem (arg)
-
-Adds a node using its nodeName attribute.
-
-As the nodeName attribute is used to derive the name which
-the node must be stored under, multiple nodes of certain
-types (those that have a "special" string value) cannot be
-stored as the names would clash. This is seen as preferable
-to allowing nodes to be aliased.
-
-Parameters:
- I<arg> A node to store in a named node map.
-
-The node will later be accessible using the value of the nodeName
-attribute of the node. If a node with that name is
-already present in the map, it is replaced by the new one.
-
-Return Value: If the new Node replaces an existing node with the same
-name the previously existing Node is returned, otherwise undef is returned.
-
-DOMExceptions:
-
-=over 4
-
-=item * WRONG_DOCUMENT_ERR
-
-Raised if arg was created from a different document than the one that
-created the NamedNodeMap.
-
-=item * NO_MODIFICATION_ALLOWED_ERR
-
-Raised if this NamedNodeMap is readonly.
-
-=item * INUSE_ATTRIBUTE_ERR
-
-Raised if arg is an Attr that is already an attribute of another Element object.
-The DOM user must explicitly clone Attr nodes to re-use them in other elements.
-
-=back
-
-=item removeNamedItem (name)
-
-Removes a node specified by name. If the removed node is an
-Attr with a default value it is immediately replaced.
-
-Return Value: The node removed from the map or undef if no node with
-such a name exists.
-
-DOMException:
-
-=over 4
-
-=item * NOT_FOUND_ERR
-
-Raised if there is no node named name in the map.
-
-=back
-
-=item item (index)
-
-Returns the indexth item in the map. If index is greater than
-or equal to the number of nodes in the map, this returns undef.
-
-Return Value: The node at the indexth position in the NamedNodeMap, or
-undef if that is not a valid index.
-
-=item getLength
-
-Returns the number of nodes in the map. The range of valid child node
-indices is 0 to length-1 inclusive.
-
-=back
-
-=head2 Additional methods not in the DOM Spec
-
-=over 4
-
-=item getValues
-
-Returns a NodeList with the nodes contained in the NamedNodeMap.
-The NodeList is "live", in that it reflects changes made to the NamedNodeMap.
-
-When this method is called in a list context, it returns a regular perl list
-containing the values. Note that this list is not "live". E.g.
-
- @list = $map->getValues; # returns a perl list
- $nodelist = $map->getValues; # returns a NodeList (object ref.)
- for my $val ($map->getValues) # iterate over the values
-
-=item getChildIndex (node)
-
-Returns the index of the node in the NodeList as returned by getValues, or -1
-if the node is not in the NamedNodeMap.
-
-=item dispose
-
-Removes all circular references in this NamedNodeMap and its descendants so the
-objects can be claimed for garbage collection. The objects should not be used
-afterwards.
-
-=back
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py
deleted file mode 100644
index 45cab71be33d658b084e8f81f4d3901bd0c7dae6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/utils/log.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import logging
-
-
-def get_console_logger(name, level=logging.WARNING):
- return logging.getLogger("pyrouge")
-
-
-def get_global_console_logger(level=logging.WARNING):
- return logging.getLogger("pyrouge")
diff --git a/spaces/akhaliq/mlsd/static/css/app.css b/spaces/akhaliq/mlsd/static/css/app.css
deleted file mode 100644
index b8dcee2e81d09edfee44fdae4c28f3622d7fefe6..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/mlsd/static/css/app.css
+++ /dev/null
@@ -1,11 +0,0 @@
-#app {
- padding: 20px;
-}
-
-#result .item {
- padding-bottom: 20px;
-}
-
-.form-content-container {
- padding-left: 20px;
-}
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py
deleted file mode 100644
index 9e0da8d2d29d94d15dfbf49dff90df7eafd68bac..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/egg_link.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# The following comment should be removed at some point in the future.
-# mypy: strict-optional=False
-
-import os
-import re
-import sys
-from typing import Optional
-
-from pip._internal.locations import site_packages, user_site
-from pip._internal.utils.virtualenv import (
- running_under_virtualenv,
- virtualenv_no_global,
-)
-
-__all__ = [
- "egg_link_path_from_sys_path",
- "egg_link_path_from_location",
-]
-
-
-def _egg_link_name(raw_name: str) -> str:
- """
- Convert a Name metadata value to a .egg-link name, by applying
- the same substitution as pkg_resources's safe_name function.
- Note: we cannot use canonicalize_name because it has a different logic.
- """
- return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link"
-
-
-def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]:
- """
- Look for a .egg-link file for project name, by walking sys.path.
- """
- egg_link_name = _egg_link_name(raw_name)
- for path_item in sys.path:
- egg_link = os.path.join(path_item, egg_link_name)
- if os.path.isfile(egg_link):
- return egg_link
- return None
-
-
-def egg_link_path_from_location(raw_name: str) -> Optional[str]:
- """
- Return the path for the .egg-link file if it exists, otherwise, None.
-
- There's 3 scenarios:
- 1) not in a virtualenv
- try to find in site.USER_SITE, then site_packages
- 2) in a no-global virtualenv
- try to find in site_packages
- 3) in a yes-global virtualenv
- try to find in site_packages, then site.USER_SITE
- (don't look in global location)
-
- For #1 and #3, there could be odd cases, where there's an egg-link in 2
- locations.
-
- This method will just return the first one found.
- """
- sites = []
- if running_under_virtualenv():
- sites.append(site_packages)
- if not virtualenv_no_global() and user_site:
- sites.append(user_site)
- else:
- if user_site:
- sites.append(user_site)
- sites.append(site_packages)
-
- egg_link_name = _egg_link_name(raw_name)
- for site in sites:
- egglink = os.path.join(site, egg_link_name)
- if os.path.isfile(egglink):
- return egglink
- return None
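A small illustrative sketch (not part of the deleted file) of combining the two lookup helpers above to detect an editable install; the project name "example-project" and the find_editable_install helper are placeholders for illustration only.

from typing import Optional

from pip._internal.utils.egg_link import (
    egg_link_path_from_location,
    egg_link_path_from_sys_path,
)

def find_editable_install(raw_name: str) -> Optional[str]:
    # Prefer the sys.path scan, then fall back to the site-packages/user-site lookup.
    return egg_link_path_from_sys_path(raw_name) or egg_link_path_from_location(raw_name)

link = find_editable_install("example-project")
if link is not None:
    print(f"editable install found via {link}")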
diff --git a/spaces/ali-ghamdan/deoldify/fastai/vision/data.py b/spaces/ali-ghamdan/deoldify/fastai/vision/data.py
deleted file mode 100644
index 20f584dd28d8f102ca079f031e9faec6c755773d..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/deoldify/fastai/vision/data.py
+++ /dev/null
@@ -1,461 +0,0 @@
-"Manages data input pipeline - folderstransformbatch input. Includes support for classification, segmentation and bounding boxes"
-from numbers import Integral
-from ..torch_core import *
-from .image import *
-from .transform import *
-from ..data_block import *
-from ..basic_data import *
-from ..layers import *
-from .learner import *
-from torchvision import transforms as tvt
-
-__all__ = ['get_image_files', 'denormalize', 'get_annotations', 'ImageDataBunch',
- 'ImageList', 'normalize', 'normalize_funcs', 'resize_to',
- 'channel_view', 'mnist_stats', 'cifar_stats', 'imagenet_stats', 'imagenet_stats_inception', 'download_images',
- 'verify_images', 'bb_pad_collate', 'ImageImageList', 'PointsLabelList',
- 'ObjectCategoryList', 'ObjectItemList', 'SegmentationLabelList', 'SegmentationItemList', 'PointsItemList']
-
-image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
-
-def get_image_files(c:PathOrStr, check_ext:bool=True, recurse=False)->FilePathList:
- "Return list of files in `c` that are images. `check_ext` will filter to `image_extensions`."
- return get_files(c, extensions=(image_extensions if check_ext else None), recurse=recurse)
-
-def get_annotations(fname, prefix=None):
- "Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
- annot_dict = json.load(open(fname))
- id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
- classes = {}
- for o in annot_dict['categories']:
- classes[o['id']] = o['name']
- for o in annot_dict['annotations']:
- bb = o['bbox']
- id2bboxes[o['image_id']].append([bb[1],bb[0], bb[3]+bb[1], bb[2]+bb[0]])
- id2cats[o['image_id']].append(classes[o['category_id']])
- for o in annot_dict['images']:
- if o['id'] in id2bboxes:
- id2images[o['id']] = ifnone(prefix, '') + o['file_name']
- ids = list(id2images.keys())
- return [id2images[k] for k in ids], [[id2bboxes[k], id2cats[k]] for k in ids]
-
-def bb_pad_collate(samples:BatchSamples, pad_idx:int=0) -> Tuple[FloatTensor, Tuple[LongTensor, LongTensor]]:
- "Function that collect `samples` of labelled bboxes and adds padding with `pad_idx`."
- if isinstance(samples[0][1], int): return data_collate(samples)
- max_len = max([len(s[1].data[1]) for s in samples])
- bboxes = torch.zeros(len(samples), max_len, 4)
- labels = torch.zeros(len(samples), max_len).long() + pad_idx
- imgs = []
- for i,s in enumerate(samples):
- imgs.append(s[0].data[None])
- bbs, lbls = s[1].data
- if not (bbs.nelement() == 0):
- bboxes[i,-len(lbls):] = bbs
- labels[i,-len(lbls):] = tensor(lbls)
- return torch.cat(imgs,0), (bboxes,labels)
-
-def normalize(x:TensorImage, mean,std:Tensor)->TensorImage:
- "Normalize `x` with `mean` and `std`."
- return (x-mean[...,None,None]) / std[...,None,None]
-
-def denormalize(x:TensorImage, mean,std:Tensor, do_x:bool=True)->TensorImage:
- "Denormalize `x` with `mean` and `std`."
- return x.cpu().float()*std[...,None,None] + mean[...,None,None] if do_x else x.cpu()
-
-def _normalize_batch(b:Tuple[Tensor,Tensor], mean:Tensor, std:Tensor, do_x:bool=True, do_y:bool=False)->Tuple[Tensor,Tensor]:
- "`b` = `x`,`y` - normalize `x` array of imgs and `do_y` optionally `y`."
- x,y = b
- mean,std = mean.to(x.device),std.to(x.device)
- if do_x: x = normalize(x,mean,std)
- if do_y and len(y.shape) == 4: y = normalize(y,mean,std)
- return x,y
-
-def normalize_funcs(mean:Tensor, std:Tensor, do_x:bool=True, do_y:bool=False)->Tuple[Callable,Callable]:
- "Create normalize/denormalize func using `mean` and `std`, can specify `do_y` and `device`."
- mean,std = tensor(mean),tensor(std)
- return (partial(_normalize_batch, mean=mean, std=std, do_x=do_x, do_y=do_y),
- partial(denormalize, mean=mean, std=std, do_x=do_x))
-
-cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
-imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-imagenet_stats_inception = ([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
-mnist_stats = ([0.15]*3, [0.15]*3)
-
-def channel_view(x:Tensor)->Tensor:
- "Make channel the first axis of `x` and flatten remaining axes"
- return x.transpose(0,1).contiguous().view(x.shape[1],-1)
-
-class ImageDataBunch(DataBunch):
- "DataBunch suitable for computer vision."
- _square_show = True
-
- @classmethod
- def create_from_ll(cls, lls:LabelLists, bs:int=64, val_bs:int=None, ds_tfms:Optional[TfmList]=None,
- num_workers:int=defaults.cpus, dl_tfms:Optional[Collection[Callable]]=None, device:torch.device=None,
- test:Optional[PathOrStr]=None, collate_fn:Callable=data_collate, size:int=None, no_check:bool=False,
- resize_method:ResizeMethod=None, mult:int=None, padding_mode:str='reflection',
- mode:str='bilinear', tfm_y:bool=False)->'ImageDataBunch':
- "Create an `ImageDataBunch` from `LabelLists` `lls` with potential `ds_tfms`."
- lls = lls.transform(tfms=ds_tfms, size=size, resize_method=resize_method, mult=mult, padding_mode=padding_mode,
- mode=mode, tfm_y=tfm_y)
- if test is not None: lls.add_test_folder(test)
- return lls.databunch(bs=bs, val_bs=val_bs, dl_tfms=dl_tfms, num_workers=num_workers, collate_fn=collate_fn,
- device=device, no_check=no_check)
-
- @classmethod
- def from_folder(cls, path:PathOrStr, train:PathOrStr='train', valid:PathOrStr='valid',
- valid_pct=None, seed:int=None, classes:Collection=None, **kwargs:Any)->'ImageDataBunch':
- "Create from imagenet style dataset in `path` with `train`,`valid`,`test` subfolders (or provide `valid_pct`)."
- path=Path(path)
- il = ImageList.from_folder(path)
- if valid_pct is None: src = il.split_by_folder(train=train, valid=valid)
- else: src = il.split_by_rand_pct(valid_pct, seed)
- src = src.label_from_folder(classes=classes)
- return cls.create_from_ll(src, **kwargs)
-
- @classmethod
- def from_df(cls, path:PathOrStr, df:pd.DataFrame, folder:PathOrStr=None, label_delim:str=None, valid_pct:float=0.2,
- seed:int=None, fn_col:IntsOrStrs=0, label_col:IntsOrStrs=1, suffix:str='', **kwargs:Any)->'ImageDataBunch':
- "Create from a `DataFrame` `df`."
- src = (ImageList.from_df(df, path=path, folder=folder, suffix=suffix, cols=fn_col)
- .split_by_rand_pct(valid_pct, seed)
- .label_from_df(label_delim=label_delim, cols=label_col))
- return cls.create_from_ll(src, **kwargs)
-
- @classmethod
- def from_csv(cls, path:PathOrStr, folder:PathOrStr=None, label_delim:str=None, csv_labels:PathOrStr='labels.csv',
- valid_pct:float=0.2, seed:int=None, fn_col:int=0, label_col:int=1, suffix:str='', delimiter:str=None,
- header:Optional[Union[int,str]]='infer', **kwargs:Any)->'ImageDataBunch':
- "Create from a csv file in `path/csv_labels`."
- path = Path(path)
- df = pd.read_csv(path/csv_labels, header=header, delimiter=delimiter)
- return cls.from_df(path, df, folder=folder, label_delim=label_delim, valid_pct=valid_pct, seed=seed,
- fn_col=fn_col, label_col=label_col, suffix=suffix, **kwargs)
-
- @classmethod
- def from_lists(cls, path:PathOrStr, fnames:FilePathList, labels:Collection[str], valid_pct:float=0.2, seed:int=None,
- item_cls:Callable=None, **kwargs):
- "Create from list of `fnames` in `path`."
- item_cls = ifnone(item_cls, ImageList)
- fname2label = {f:l for (f,l) in zip(fnames, labels)}
- src = (item_cls(fnames, path=path).split_by_rand_pct(valid_pct, seed)
- .label_from_func(lambda x:fname2label[x]))
- return cls.create_from_ll(src, **kwargs)
-
- @classmethod
- def from_name_func(cls, path:PathOrStr, fnames:FilePathList, label_func:Callable, valid_pct:float=0.2, seed:int=None,
- **kwargs):
- "Create from list of `fnames` in `path` with `label_func`."
- src = ImageList(fnames, path=path).split_by_rand_pct(valid_pct, seed)
- return cls.create_from_ll(src.label_from_func(label_func), **kwargs)
-
- @classmethod
- def from_name_re(cls, path:PathOrStr, fnames:FilePathList, pat:str, valid_pct:float=0.2, **kwargs):
- "Create from list of `fnames` in `path` with re expression `pat`."
- pat = re.compile(pat)
- def _get_label(fn):
- if isinstance(fn, Path): fn = fn.as_posix()
- res = pat.search(str(fn))
- assert res,f'Failed to find "{pat}" in "{fn}"'
- return res.group(1)
- return cls.from_name_func(path, fnames, _get_label, valid_pct=valid_pct, **kwargs)
-
- @staticmethod
- def single_from_classes(path:Union[Path, str], classes:Collection[str], ds_tfms:TfmList=None, **kwargs):
- "Create an empty `ImageDataBunch` in `path` with `classes`. Typically used for inference."
- warn("""This method is deprecated and will be removed in a future version, use `load_learner` after
- `Learner.export()`""", DeprecationWarning)
- sd = ImageList([], path=path, ignore_empty=True).split_none()
- return sd.label_const(0, label_cls=CategoryList, classes=classes).transform(ds_tfms, **kwargs).databunch()
-
- def batch_stats(self, funcs:Collection[Callable]=None, ds_type:DatasetType=DatasetType.Train)->Tensor:
- "Grab a batch of data and call reduction function `func` per channel"
- funcs = ifnone(funcs, [torch.mean,torch.std])
- x = self.one_batch(ds_type=ds_type, denorm=False)[0].cpu()
- return [func(channel_view(x), 1) for func in funcs]
-
- def normalize(self, stats:Collection[Tensor]=None, do_x:bool=True, do_y:bool=False)->None:
- "Add normalize transform using `stats` (defaults to `DataBunch.batch_stats`)"
- if getattr(self,'norm',False): raise Exception('Can not call normalize twice')
- if stats is None: self.stats = self.batch_stats()
- else: self.stats = stats
- self.norm,self.denorm = normalize_funcs(*self.stats, do_x=do_x, do_y=do_y)
- self.add_tfm(self.norm)
- return self
-
-def download_image(url,dest, timeout=4):
- try: r = download_url(url, dest, overwrite=True, show_progress=False, timeout=timeout)
- except Exception as e: print(f"Error {url} {e}")
-
-def _download_image_inner(dest, url, i, timeout=4):
- suffix = re.findall(r'\.\w+?(?=(?:\?|$))', url)
- suffix = suffix[0] if len(suffix)>0 else '.jpg'
- download_image(url, dest/f"{i:08d}{suffix}", timeout=timeout)
-
-def download_images(urls:Collection[str], dest:PathOrStr, max_pics:int=1000, max_workers:int=8, timeout=4):
- "Download images listed in text file `urls` to path `dest`, at most `max_pics`"
- urls = open(urls).read().strip().split("\n")[:max_pics]
- dest = Path(dest)
- dest.mkdir(exist_ok=True)
- parallel(partial(_download_image_inner, dest, timeout=timeout), urls, max_workers=max_workers)
-
-def resize_to(img, targ_sz:int, use_min:bool=False):
- "Size to resize to, to hit `targ_sz` at same aspect ratio, in PIL coords (i.e w*h)"
- w,h = img.size
- min_sz = (min if use_min else max)(w,h)
- ratio = targ_sz/min_sz
- return int(w*ratio),int(h*ratio)
-
-def verify_image(file:Path, idx:int, delete:bool, max_size:Union[int,Tuple[int,int]]=None, dest:Path=None, n_channels:int=3,
- interp=PIL.Image.BILINEAR, ext:str=None, img_format:str=None, resume:bool=False, **kwargs):
- "Check if the image in `file` exists, maybe resize it and copy it in `dest`."
- try:
- # deal with partially broken images as indicated by PIL warnings
- with warnings.catch_warnings():
- warnings.filterwarnings('error')
- try:
- with open(file, 'rb') as img_file: PIL.Image.open(img_file)
- except Warning as w:
- if "Possibly corrupt EXIF data" in str(w):
- if delete: # green light to modify files
- print(f"{file}: Removing corrupt EXIF data")
- warnings.simplefilter("ignore")
- # save EXIF-cleaned up image, which happens automatically
- PIL.Image.open(file).save(file)
- else: # keep user's files intact
- print(f"{file}: Not removing corrupt EXIF data, pass `delete=True` to do that")
- else: warnings.warn(w)
-
- img = PIL.Image.open(file)
- imgarr = np.array(img)
- img_channels = 1 if len(imgarr.shape) == 2 else imgarr.shape[2]
- if (max_size is not None and (img.height > max_size or img.width > max_size)) or img_channels != n_channels:
- assert isinstance(dest, Path), "You should provide `dest` Path to save resized image"
- dest_fname = dest/file.name
- if ext is not None: dest_fname=dest_fname.with_suffix(ext)
- if resume and os.path.isfile(dest_fname): return
- if max_size is not None:
- new_sz = resize_to(img, max_size)
- img = img.resize(new_sz, resample=interp)
- if n_channels == 3: img = img.convert("RGB")
- img.save(dest_fname, img_format, **kwargs)
- except Exception as e:
- print(f'{e}')
- if delete: file.unlink()
-
-def verify_images(path:PathOrStr, delete:bool=True, max_workers:int=4, max_size:Union[int]=None, recurse:bool=False,
- dest:PathOrStr='.', n_channels:int=3, interp=PIL.Image.BILINEAR, ext:str=None, img_format:str=None,
- resume:bool=None, **kwargs):
- "Check if the images in `path` aren't broken, maybe resize them and copy it in `dest`."
- path = Path(path)
- if resume is None and dest == '.': resume=False
- dest = path/Path(dest)
- os.makedirs(dest, exist_ok=True)
- files = get_image_files(path, recurse=recurse)
- func = partial(verify_image, delete=delete, max_size=max_size, dest=dest, n_channels=n_channels, interp=interp,
- ext=ext, img_format=img_format, resume=resume, **kwargs)
- parallel(func, files, max_workers=max_workers)
-
-class ImageList(ItemList):
- "`ItemList` suitable for computer vision."
- _bunch,_square_show,_square_show_res = ImageDataBunch,True,True
- def __init__(self, *args, convert_mode='RGB', after_open:Callable=None, **kwargs):
- super().__init__(*args, **kwargs)
- self.convert_mode,self.after_open = convert_mode,after_open
- self.copy_new += ['convert_mode', 'after_open']
- self.c,self.sizes = 3,{}
-
- def open(self, fn):
- "Open image in `fn`, subclass and overwrite for custom behavior."
- return open_image(fn, convert_mode=self.convert_mode, after_open=self.after_open)
-
- def get(self, i):
- fn = super().get(i)
- res = self.open(fn)
- self.sizes[i] = res.size
- return res
-
- @classmethod
- def from_folder(cls, path:PathOrStr='.', extensions:Collection[str]=None, **kwargs)->ItemList:
- "Get the list of files in `path` that have an image suffix. `recurse` determines if we search subfolders."
- extensions = ifnone(extensions, image_extensions)
- return super().from_folder(path=path, extensions=extensions, **kwargs)
-
- @classmethod
- def from_df(cls, df:DataFrame, path:PathOrStr, cols:IntsOrStrs=0, folder:PathOrStr=None, suffix:str='', **kwargs)->'ItemList':
- "Get the filenames in `cols` of `df` with `folder` in front of them, `suffix` at the end."
- suffix = suffix or ''
- res = super().from_df(df, path=path, cols=cols, **kwargs)
- pref = f'{res.path}{os.path.sep}'
- if folder is not None: pref += f'{folder}{os.path.sep}'
- res.items = np.char.add(np.char.add(pref, res.items.astype(str)), suffix)
- return res
-
- @classmethod
- def from_csv(cls, path:PathOrStr, csv_name:str, header:str='infer', delimiter:str=None, **kwargs)->'ItemList':
- "Get the filenames in `path/csv_name` opened with `header`."
- path = Path(path)
- df = pd.read_csv(path/csv_name, header=header, delimiter=delimiter)
- return cls.from_df(df, path=path, **kwargs)
-
- def reconstruct(self, t:Tensor): return Image(t.float().clamp(min=0,max=1))
-
- def show_xys(self, xs, ys, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs):
- "Show the `xs` (inputs) and `ys` (targets) on a figure of `figsize`."
- rows = int(np.ceil(math.sqrt(len(xs))))
- axs = subplots(rows, rows, imgsize=imgsize, figsize=figsize)
- for x,y,ax in zip(xs, ys, axs.flatten()): x.show(ax=ax, y=y, **kwargs)
- for ax in axs.flatten()[len(xs):]: ax.axis('off')
- plt.tight_layout()
-
- def show_xyzs(self, xs, ys, zs, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs):
- "Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`."
- if self._square_show_res:
- title = 'Ground truth\nPredictions'
- rows = int(np.ceil(math.sqrt(len(xs))))
- axs = subplots(rows, rows, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=12)
- for x,y,z,ax in zip(xs,ys,zs,axs.flatten()): x.show(ax=ax, title=f'{str(y)}\n{str(z)}', **kwargs)
- for ax in axs.flatten()[len(xs):]: ax.axis('off')
- else:
- title = 'Ground truth/Predictions'
- axs = subplots(len(xs), 2, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=14)
- for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
- x.show(ax=axs[i,0], y=y, **kwargs)
- x.show(ax=axs[i,1], y=z, **kwargs)
-
-class ObjectCategoryProcessor(MultiCategoryProcessor):
- "`PreProcessor` for labelled bounding boxes."
- def __init__(self, ds:ItemList, pad_idx:int=0):
- super().__init__(ds)
- self.pad_idx = pad_idx
- self.state_attrs.append('pad_idx')
-
- def process(self, ds:ItemList):
- ds.pad_idx = self.pad_idx
- super().process(ds)
-
- def process_one(self,item): return [item[0], [self.c2i.get(o,None) for o in item[1]]]
-
- def generate_classes(self, items):
- "Generate classes from unique `items` and add `background`."
- classes = super().generate_classes([o[1] for o in items])
- classes = ['background'] + list(classes)
- return classes
-
-def _get_size(xs,i):
- size = xs.sizes.get(i,None)
- if size is None:
- # Image hasn't been accessed yet, so we don't know its size
- _ = xs[i]
- size = xs.sizes[i]
- return size
-
-class ObjectCategoryList(MultiCategoryList):
- "`ItemList` for labelled bounding boxes."
- _processor = ObjectCategoryProcessor
-
- def get(self, i):
- return ImageBBox.create(*_get_size(self.x,i), *self.items[i], classes=self.classes, pad_idx=self.pad_idx)
-
- def analyze_pred(self, pred): return pred
-
- def reconstruct(self, t, x):
- (bboxes, labels) = t
- if len((labels - self.pad_idx).nonzero()) == 0: return
- i = (labels - self.pad_idx).nonzero().min()
- bboxes,labels = bboxes[i:],labels[i:]
- return ImageBBox.create(*x.size, bboxes, labels=labels, classes=self.classes, scale=False)
-
-class ObjectItemList(ImageList):
- "`ItemList` suitable for object detection."
- _label_cls,_square_show_res = ObjectCategoryList,False
-
-class SegmentationProcessor(PreProcessor):
- "`PreProcessor` that stores the classes for segmentation."
- def __init__(self, ds:ItemList): self.classes = ds.classes
- def process(self, ds:ItemList): ds.classes,ds.c = self.classes,len(self.classes)
-
-class SegmentationLabelList(ImageList):
- "`ItemList` for segmentation masks."
- _processor=SegmentationProcessor
- def __init__(self, items:Iterator, classes:Collection=None, **kwargs):
- super().__init__(items, **kwargs)
- self.copy_new.append('classes')
- self.classes,self.loss_func = classes,CrossEntropyFlat(axis=1)
-
- def open(self, fn): return open_mask(fn)
- def analyze_pred(self, pred, thresh:float=0.5): return pred.argmax(dim=0)[None]
- def reconstruct(self, t:Tensor): return ImageSegment(t)
-
-class SegmentationItemList(ImageList):
- "`ItemList` suitable for segmentation tasks."
- _label_cls,_square_show_res = SegmentationLabelList,False
-
-class PointsProcessor(PreProcessor):
- "`PreProcessor` that stores the number of targets for point regression."
- def __init__(self, ds:ItemList): self.c = len(ds.items[0].reshape(-1))
- def process(self, ds:ItemList): ds.c = self.c
-
-class PointsLabelList(ItemList):
- "`ItemList` for points."
- _processor = PointsProcessor
- def __init__(self, items:Iterator, **kwargs):
- super().__init__(items, **kwargs)
- self.loss_func = MSELossFlat()
-
- def get(self, i):
- o = super().get(i)
- return ImagePoints(FlowField(_get_size(self.x,i), o), scale=True)
-
- def analyze_pred(self, pred, thresh:float=0.5): return pred.view(-1,2)
- def reconstruct(self, t, x): return ImagePoints(FlowField(x.size, t), scale=False)
-
-class PointsItemList(ImageList):
- "`ItemList` for `Image` to `ImagePoints` tasks."
- _label_cls,_square_show_res = PointsLabelList,False
-
-class ImageImageList(ImageList):
- "`ItemList` suitable for `Image` to `Image` tasks."
- _label_cls,_square_show,_square_show_res = ImageList,False,False
-
- def show_xys(self, xs, ys, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs):
- "Show the `xs` (inputs) and `ys`(targets) on a figure of `figsize`."
- axs = subplots(len(xs), 2, imgsize=imgsize, figsize=figsize)
- for i, (x,y) in enumerate(zip(xs,ys)):
- x.show(ax=axs[i,0], **kwargs)
- y.show(ax=axs[i,1], **kwargs)
- plt.tight_layout()
-
- def show_xyzs(self, xs, ys, zs, imgsize:int=4, figsize:Optional[Tuple[int,int]]=None, **kwargs):
- "Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`."
- title = 'Input / Prediction / Target'
- axs = subplots(len(xs), 3, imgsize=imgsize, figsize=figsize, title=title, weight='bold', size=14)
- for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
- x.show(ax=axs[i,0], **kwargs)
- y.show(ax=axs[i,2], **kwargs)
- z.show(ax=axs[i,1], **kwargs)
-
-
-def _ll_pre_transform(self, train_tfm:List[Callable], valid_tfm:List[Callable]):
- "Call `train_tfm` and `valid_tfm` after opening image, before converting from `PIL.Image`"
- self.train.x.after_open = compose(train_tfm)
- self.valid.x.after_open = compose(valid_tfm)
- return self
-
-def _db_pre_transform(self, train_tfm:List[Callable], valid_tfm:List[Callable]):
- "Call `train_tfm` and `valid_tfm` after opening image, before converting from `PIL.Image`"
- self.train_ds.x.after_open = compose(train_tfm)
- self.valid_ds.x.after_open = compose(valid_tfm)
- return self
-
-def _presize(self, size:int, val_xtra_size:int=32, scale:Tuple[float]=(0.08, 1.0), ratio:Tuple[float]=(0.75, 4./3.),
- interpolation:int=2):
- "Resize images to `size` using `RandomResizedCrop`, passing along `kwargs` to train transform"
- return self.pre_transform(
- tvt.RandomResizedCrop(size, scale=scale, ratio=ratio, interpolation=interpolation),
- [tvt.Resize(size+val_xtra_size), tvt.CenterCrop(size)])
-
-LabelLists.pre_transform = _ll_pre_transform
-DataBunch.pre_transform = _db_pre_transform
-LabelLists.presize = _presize
-DataBunch.presize = _presize
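-
-# Minimal usage sketch (assumes `data` is an existing `DataBunch` built from an `ImageList`):
-#   data.presize(224, val_xtra_size=32)   # RandomResizedCrop on train, Resize+CenterCrop on valid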
-
diff --git a/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py b/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py
deleted file mode 100644
index aaee90b5be63568dbcde91da84e9560a580c7f89..0000000000000000000000000000000000000000
--- a/spaces/all-things-vits/CLIPGroundingExplainability/clip_grounding/utils/visualize.py
+++ /dev/null
@@ -1,183 +0,0 @@
-"""Helpers for visualization"""
-import numpy as np
-import matplotlib
-import matplotlib.pyplot as plt
-import cv2
-from PIL import Image
-
-
-# define predominant colors
-COLORS = {
- "pink": (242, 116, 223),
- "cyan": (46, 242, 203),
- "red": (255, 0, 0),
- "green": (0, 255, 0),
- "blue": (0, 0, 255),
- "yellow": (255, 255, 0),
-}
-
-
-def show_single_image(image: np.ndarray, figsize: tuple = (8, 8), title: str = None, titlesize=18, cmap: str = None, ticks=False, save=False, save_path=None):
- """Show a single image."""
- fig, ax = plt.subplots(1, 1, figsize=figsize)
-
- if isinstance(image, Image.Image):
- image = np.asarray(image)
-
- ax.set_title(title, fontsize=titlesize)
- ax.imshow(image, cmap=cmap)
-
- if not ticks:
- ax.set_xticks([])
- ax.set_yticks([])
-
- if save:
- plt.savefig(save_path, bbox_inches='tight')
-
- plt.show()
-
-
-def show_grid_of_images(
- images: np.ndarray, n_cols: int = 4, figsize: tuple = (8, 8),
- cmap=None, subtitles=None, title=None, subtitlesize=18,
- save=False, save_path=None, titlesize=20,
- ):
- """Show a grid of images."""
- n_cols = min(n_cols, len(images))
-
- copy_of_images = images.copy()
- for i, image in enumerate(copy_of_images):
- if isinstance(image, Image.Image):
- image = np.asarray(image)
- images[i] = image
-
- if subtitles is None:
- subtitles = [None] * len(images)
-
- n_rows = int(np.ceil(len(images) / n_cols))
- fig, axes = plt.subplots(n_rows, n_cols, figsize=figsize)
- for i, ax in enumerate(axes.flat):
- if i < len(images):
- if len(images[i].shape) == 2 and cmap is None:
- cmap="gray"
- ax.imshow(images[i], cmap=cmap)
- ax.set_title(subtitles[i], fontsize=subtitlesize)
- ax.axis('off')
- fig.set_tight_layout(True)
- plt.suptitle(title, y=0.8, fontsize=titlesize)
-
- if save:
- plt.savefig(save_path, bbox_inches='tight')
- plt.close()
- else:
- plt.show()
-
-
-def show_keypoint_matches(
- img1, kp1, img2, kp2, matches,
- K=10, figsize=(10, 5), drawMatches_args=dict(matchesThickness=3, singlePointColor=(0, 0, 0)),
- choose_matches="random",
- ):
- """Displays matches found in the pair of images"""
- if choose_matches == "random":
- selected_matches = np.random.choice(matches, K)
- elif choose_matches == "all":
- K = len(matches)
- selected_matches = matches
- elif choose_matches == "topk":
- selected_matches = matches[:K]
- else:
- raise ValueError(f"Unknown value for choose_matches: {choose_matches}")
-
- # color each match with a different color
- cmap = matplotlib.cm.get_cmap('gist_rainbow', K)
- colors = [[int(x*255) for x in cmap(i)[:3]] for i in np.arange(0,K)]
- drawMatches_args.update({"matchColor": -1, "singlePointColor": (100, 100, 100)})
-
- img3 = cv2.drawMatches(img1, kp1, img2, kp2, selected_matches, outImg=None, **drawMatches_args)
- show_single_image(
- img3,
- figsize=figsize,
- title=f"[{choose_matches.upper()}] Selected K = {K} matches between the pair of images.",
- )
- return img3
-
-
-def draw_kps_on_image(image: np.ndarray, kps: np.ndarray, color=COLORS["red"], radius=3, thickness=-1, return_as="numpy"):
- """
- Draw keypoints on image.
-
- Args:
- image: Image to draw keypoints on.
- kps: Keypoints to draw. Note these should be in (x, y) format.
- """
- if isinstance(image, Image.Image):
- image = np.asarray(image)
-
- for kp in kps:
- image = cv2.circle(
- image, (int(kp[0]), int(kp[1])), radius=radius, color=color, thickness=thickness)
-
- if return_as == "PIL":
- return Image.fromarray(image)
-
- return image
-
-
-def get_concat_h(im1, im2):
- """Concatenate two images horizontally"""
- dst = Image.new('RGB', (im1.width + im2.width, im1.height))
- dst.paste(im1, (0, 0))
- dst.paste(im2, (im1.width, 0))
- return dst
-
-
-def get_concat_v(im1, im2):
- """Concatenate two images vertically"""
- dst = Image.new('RGB', (im1.width, im1.height + im2.height))
- dst.paste(im1, (0, 0))
- dst.paste(im2, (0, im1.height))
- return dst
-
-
-def show_images_with_keypoints(images: list, kps: list, radius=15, color=(0, 220, 220), figsize=(10, 8), return_images=False, save=False, save_path="sample.png"):
- assert len(images) == len(kps)
-
- # generate
- images_with_kps = []
- for i in range(len(images)):
- img_with_kps = draw_kps_on_image(images[i], kps[i], radius=radius, color=color, return_as="PIL")
- images_with_kps.append(img_with_kps)
-
- # show
- show_grid_of_images(images_with_kps, n_cols=len(images), figsize=figsize, save=save, save_path=save_path)
-
- if return_images:
- return images_with_kps
-
-
-def set_latex_fonts(usetex=True, fontsize=14, show_sample=False, **kwargs):
- try:
- plt.rcParams.update({
- "text.usetex": usetex,
- "font.family": "serif",
- "font.serif": ["Computer Modern Roman"],
- "font.size": fontsize,
- **kwargs,
- })
- if show_sample:
- plt.figure()
- plt.title("Sample $y = x^2$")
- plt.plot(np.arange(0, 10), np.arange(0, 10)**2, "--o")
- plt.grid()
- plt.show()
-    except Exception:
-        print("Failed to set up LaTeX fonts. Proceeding without.")
-
-
-def get_colors(num_colors, palette="jet"):
- cmap = plt.get_cmap(palette)
- colors = [cmap(i) for i in np.linspace(0, 1, num_colors)]
- return colors
-
diff --git a/spaces/amankishore/sjc/ncsn/ema.py b/spaces/amankishore/sjc/ncsn/ema.py
deleted file mode 100644
index 5c67b81c00cdd1e1bf8fd1d80d25c7b1bab5c554..0000000000000000000000000000000000000000
--- a/spaces/amankishore/sjc/ncsn/ema.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import copy
-import torch.nn as nn
-
-class EMAHelper(object):
- def __init__(self, mu=0.999):
- self.mu = mu
- self.shadow = {}
-
- def register(self, module):
- if isinstance(module, nn.DataParallel):
- module = module.module
- for name, param in module.named_parameters():
- if param.requires_grad:
- self.shadow[name] = param.data.clone()
-
- def update(self, module):
- if isinstance(module, nn.DataParallel):
- module = module.module
- for name, param in module.named_parameters():
- if param.requires_grad:
- self.shadow[name].data = (1. - self.mu) * param.data + self.mu * self.shadow[name].data
-
- def ema(self, module):
- if isinstance(module, nn.DataParallel):
- module = module.module
- for name, param in module.named_parameters():
- if param.requires_grad:
- param.data.copy_(self.shadow[name].data)
-
- def ema_copy(self, module):
- if isinstance(module, nn.DataParallel):
- inner_module = module.module
- module_copy = type(inner_module)(inner_module.config).to(inner_module.config.device)
- module_copy.load_state_dict(inner_module.state_dict())
- module_copy = nn.DataParallel(module_copy)
- else:
- module_copy = type(module)(module.config).to(module.config.device)
- module_copy.load_state_dict(module.state_dict())
- # module_copy = copy.deepcopy(module)
- self.ema(module_copy)
- return module_copy
-
- def state_dict(self):
- return self.shadow
-
- def load_state_dict(self, state_dict):
- self.shadow = state_dict
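-
-# Rough usage sketch (assumes an existing `model`, `loader` and optimizer step; names are illustrative):
-#   ema = EMAHelper(mu=0.999)
-#   ema.register(model)
-#   for batch in loader:
-#       ...                               # forward / backward / optimizer step
-#       ema.update(model)                 # shadow <- (1 - mu) * param + mu * shadow
-#   eval_model = ema.ema_copy(model)      # copy of the model with EMA weights applied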
diff --git a/spaces/amgad59/Keras_cv_wedding_dress/app.py b/spaces/amgad59/Keras_cv_wedding_dress/app.py
deleted file mode 100644
index 6b05576c3dc6ca7caeee91e364aa4a42a0039c95..0000000000000000000000000000000000000000
--- a/spaces/amgad59/Keras_cv_wedding_dress/app.py
+++ /dev/null
@@ -1,65 +0,0 @@
-from tensorflow import keras
-
-keras.mixed_precision.set_global_policy("mixed_float16")
-
-import time
-
-import gradio as gr
-import keras_cv
-
-from constants import css, examples, img_height, img_width, num_images_to_gen
-from share_btn import community_icon_html, loading_icon_html, share_js
-
-# Load model.
-weights_path = keras.utils.get_file(
- origin="https://huggingface.co/mayve/GP/resolve/main/ckpt_epoch_96.h5",
- file_hash="4b4348297aa9853ff9dc4da7f52dcb240210564400f164e5155e5f4dc1866626"
-)
-pokemon_model = keras_cv.models.StableDiffusion(
- img_width=img_width, img_height=img_height
-)
-pokemon_model.diffusion_model.load_weights(weights_path)
-
-pokemon_model.diffusion_model.compile(jit_compile=True)
-pokemon_model.decoder.compile(jit_compile=True)
-pokemon_model.text_encoder.compile(jit_compile=True)
-
-# Warm-up the model.
-#_ = pokemon_model.text_to_image("Teddy bear", batch_size=num_images_to_gen)
-
-
-def generate_image_fn(prompt: str, unconditional_guidance_scale: int) -> list:
- start_time = time.time()
-    # `images` is an `np.ndarray`. So we convert it to a list of ndarrays.
- # Each ndarray represents a generated image.
- # Reference: https://gradio.app/docs/#gallery
- images = pokemon_model.text_to_image(
- prompt,
- batch_size=num_images_to_gen,
- unconditional_guidance_scale=unconditional_guidance_scale,
- num_steps = 100,
- )
- end_time = time.time()
- print(f"Time taken: {end_time - start_time} seconds.")
- return [image for image in images]
-
-
-description = "This Space demonstrates a fine-tuned Stable Diffusion model. You can use it for generating custom pokemons. To get started, either enter a prompt and pick one from the examples below. For details on the fine-tuning procedure, refer to [this repository](https://github.com/sayakpaul/stable-diffusion-keras-ft/)."
-article = "This Space leverages a T4 GPU to run the predictions. We use mixed-precision to speed up the inference latency. We further use XLA to carve out maximum performance from TensorFlow."
-gr.Interface(
- generate_image_fn,
- inputs=[
- gr.Textbox(
- label="Enter your prompt",
- max_lines=1,
- placeholder="cute Sundar Pichai creature",
- ),
- gr.Slider(value=10, minimum=8, maximum=50, step=1),
- ],
- outputs=[gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil"),gr.outputs.Image(type="pil")],
- title="Generate custom pokemons",
- description=description,
- article=article,
- examples=[["cute Sundar Pichai creature", 40], ["Hello kitty", 40]],
- allow_flagging=False,
-).launch()
\ No newline at end of file
diff --git a/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py b/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py
deleted file mode 100644
index e28ac80e372699ec76504d4ea0acacf0f119ff8e..0000000000000000000000000000000000000000
--- a/spaces/anakin87/who-killed-laura-palmer/app_utils/config.py
+++ /dev/null
@@ -1,10 +0,0 @@
-
-INDEX_DIR = 'data/index'
-QUESTIONS_PATH = 'data/questions/selected_questions.txt'
-RETRIEVER_MODEL = "sentence-transformers/multi-qa-mpnet-base-dot-v1"
-RETRIEVER_MODEL_FORMAT = "sentence_transformers"
-READER_MODEL = "deepset/roberta-base-squad2"
-READER_CONFIG_THRESHOLD = 0.15
-RETRIEVER_TOP_K = 10
-READER_TOP_K = 5
-LOW_RELEVANCE_THRESHOLD = 0.5
\ No newline at end of file
diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css
deleted file mode 100644
index f998ebe7f2172e4ac23cdeff6ba6fd811b67a145..0000000000000000000000000000000000000000
--- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/client/css/typing.css
+++ /dev/null
@@ -1,15 +0,0 @@
-.typing {
- position: absolute;
- top: -25px;
- left: 0;
- font-size: 14px;
- animation: show_popup 0.4s;
-}
-
-.typing-hiding {
- animation: hide_popup 0.4s;
-}
-
-.typing-hidden {
- display: none;
-}
diff --git a/spaces/andyssj/entregable2/app.py b/spaces/andyssj/entregable2/app.py
deleted file mode 100644
index 7ea38fe09805b75a09050d1f6cffc1f18c71f821..0000000000000000000000000000000000000000
--- a/spaces/andyssj/entregable2/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from huggingface_hub import from_pretrained_fastai
-import gradio as gr
-from fastai.vision.all import *
-
-
-
-# repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME"
-repo_id = "andyssj/entregable2"
-
-learner = from_pretrained_fastai(repo_id)
-labels = learner.dls.vocab
-
-# Define a function that carries out the predictions
-def predict(img):
- #img = PILImage.create(img)
- pred,pred_idx,probs = learner.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-# Create the interface and launch it.
-gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(128, 128)), outputs=gr.outputs.Label(num_top_classes=3),examples=['american_football_158.jpg','football_369.jpg','baseball_108.jpg']).launch(share=False)
\ No newline at end of file
diff --git a/spaces/antonovmaxim/text-generation-webui-space/api-example.py b/spaces/antonovmaxim/text-generation-webui-space/api-example.py
deleted file mode 100644
index f35ea1db76f291bf1cae90a1a7801d2d19be3acc..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/api-example.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import requests
-
-# For local streaming, the websockets are hosted without ssl - http://
-HOST = 'localhost:5000'
-URI = f'http://{HOST}/api/v1/generate'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - https://
-# URI = 'https://your-uri-here.trycloudflare.com/api/v1/generate'
-
-
-def run(prompt):
- request = {
- 'prompt': prompt,
- 'max_new_tokens': 250,
- 'do_sample': True,
- 'temperature': 1.3,
- 'top_p': 0.1,
- 'typical_p': 1,
- 'repetition_penalty': 1.18,
- 'top_k': 40,
- 'min_length': 0,
- 'no_repeat_ngram_size': 0,
- 'num_beams': 1,
- 'penalty_alpha': 0,
- 'length_penalty': 1,
- 'early_stopping': False,
- 'seed': -1,
- 'add_bos_token': True,
- 'truncation_length': 2048,
- 'ban_eos_token': False,
- 'skip_special_tokens': True,
- 'stopping_strings': []
- }
-
- response = requests.post(URI, json=request)
-
- if response.status_code == 200:
- result = response.json()['results'][0]['text']
- print(prompt + result)
-
-
-if __name__ == '__main__':
- prompt = "In order to make homemade bread, follow these steps:\n1)"
- run(prompt)
diff --git a/spaces/anzorq/openai_whisper_stt/README.md b/spaces/anzorq/openai_whisper_stt/README.md
deleted file mode 100644
index 1dbf53ca9080426d623d1dccddb4704de960d1a0..0000000000000000000000000000000000000000
--- a/spaces/anzorq/openai_whisper_stt/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: OpenAI's Whisper Real-time Demo
-emoji: 🎙️
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-OpenAI's Whisper Real-time Demo
-
-A simple demo of OpenAI's [**Whisper**](https://github.com/openai/whisper) speech recognition model.
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/extras.py b/spaces/aodianyun/stable-diffusion-webui/modules/extras.py
deleted file mode 100644
index 6a9af2d8e641fdf1ebd29045078d29b5aeae3d6f..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/extras.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import os
-import re
-import shutil
-
-
-import torch
-import tqdm
-
-from modules import shared, images, sd_models, sd_vae, sd_models_config
-from modules.ui_common import plaintext_to_html
-import gradio as gr
-import safetensors.torch
-
-
-def run_pnginfo(image):
- if image is None:
- return '', '', ''
-
- geninfo, items = images.read_info_from_image(image)
- items = {**{'parameters': geninfo}, **items}
-
- info = ''
- for key, text in items.items():
-        info += f"""
-<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n"
-
- if len(info) == 0:
- message = "Nothing found in the image."
- info = f"
{message}
"
-
- return '', geninfo, info
-
-
-def create_config(ckpt_result, config_source, a, b, c):
- def config(x):
- res = sd_models_config.find_checkpoint_config_near_filename(x) if x else None
- return res if res != shared.sd_default_config else None
-
- if config_source == 0:
- cfg = config(a) or config(b) or config(c)
- elif config_source == 1:
- cfg = config(b)
- elif config_source == 2:
- cfg = config(c)
- else:
- cfg = None
-
- if cfg is None:
- return
-
- filename, _ = os.path.splitext(ckpt_result)
- checkpoint_filename = filename + ".yaml"
-
- print("Copying config:")
- print(" from:", cfg)
- print(" to:", checkpoint_filename)
- shutil.copyfile(cfg, checkpoint_filename)
-
-
-checkpoint_dict_skip_on_merge = ["cond_stage_model.transformer.text_model.embeddings.position_ids"]
-
-
-def to_half(tensor, enable):
- if enable and tensor.dtype == torch.float:
- return tensor.half()
-
- return tensor
-
-
-def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights):
- shared.state.begin()
- shared.state.job = 'model-merge'
-
- def fail(message):
- shared.state.textinfo = message
- shared.state.end()
- return [*[gr.update() for _ in range(4)], message]
-
- def weighted_sum(theta0, theta1, alpha):
- return ((1 - alpha) * theta0) + (alpha * theta1)
-
- def get_difference(theta1, theta2):
- return theta1 - theta2
-
- def add_difference(theta0, theta1_2_diff, alpha):
- return theta0 + (alpha * theta1_2_diff)
-
- def filename_weighted_sum():
- a = primary_model_info.model_name
- b = secondary_model_info.model_name
- Ma = round(1 - multiplier, 2)
- Mb = round(multiplier, 2)
-
- return f"{Ma}({a}) + {Mb}({b})"
-
- def filename_add_difference():
- a = primary_model_info.model_name
- b = secondary_model_info.model_name
- c = tertiary_model_info.model_name
- M = round(multiplier, 2)
-
- return f"{a} + {M}({b} - {c})"
-
- def filename_nothing():
- return primary_model_info.model_name
-
- theta_funcs = {
- "Weighted sum": (filename_weighted_sum, None, weighted_sum),
- "Add difference": (filename_add_difference, get_difference, add_difference),
- "No interpolation": (filename_nothing, None, None),
- }
- filename_generator, theta_func1, theta_func2 = theta_funcs[interp_method]
- shared.state.job_count = (1 if theta_func1 else 0) + (1 if theta_func2 else 0)
-
- if not primary_model_name:
- return fail("Failed: Merging requires a primary model.")
-
- primary_model_info = sd_models.checkpoints_list[primary_model_name]
-
- if theta_func2 and not secondary_model_name:
- return fail("Failed: Merging requires a secondary model.")
-
- secondary_model_info = sd_models.checkpoints_list[secondary_model_name] if theta_func2 else None
-
- if theta_func1 and not tertiary_model_name:
- return fail(f"Failed: Interpolation method ({interp_method}) requires a tertiary model.")
-
- tertiary_model_info = sd_models.checkpoints_list[tertiary_model_name] if theta_func1 else None
-
- result_is_inpainting_model = False
- result_is_instruct_pix2pix_model = False
-
- if theta_func2:
- shared.state.textinfo = f"Loading B"
- print(f"Loading {secondary_model_info.filename}...")
- theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu')
- else:
- theta_1 = None
-
- if theta_func1:
- shared.state.textinfo = f"Loading C"
- print(f"Loading {tertiary_model_info.filename}...")
- theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu')
-
- shared.state.textinfo = 'Merging B and C'
- shared.state.sampling_steps = len(theta_1.keys())
- for key in tqdm.tqdm(theta_1.keys()):
- if key in checkpoint_dict_skip_on_merge:
- continue
-
- if 'model' in key:
- if key in theta_2:
- t2 = theta_2.get(key, torch.zeros_like(theta_1[key]))
- theta_1[key] = theta_func1(theta_1[key], t2)
- else:
- theta_1[key] = torch.zeros_like(theta_1[key])
-
- shared.state.sampling_step += 1
- del theta_2
-
- shared.state.nextjob()
-
- shared.state.textinfo = f"Loading {primary_model_info.filename}..."
- print(f"Loading {primary_model_info.filename}...")
- theta_0 = sd_models.read_state_dict(primary_model_info.filename, map_location='cpu')
-
- print("Merging...")
- shared.state.textinfo = 'Merging A and B'
- shared.state.sampling_steps = len(theta_0.keys())
- for key in tqdm.tqdm(theta_0.keys()):
- if theta_1 and 'model' in key and key in theta_1:
-
- if key in checkpoint_dict_skip_on_merge:
- continue
-
- a = theta_0[key]
- b = theta_1[key]
-
-            # this enables merging an inpainting model (A) with another one (B);
-            # where a normal model has 4 channels for the latent space, an inpainting model has
-            # another 4 channels for the unmasked picture's latent space, plus one channel for the mask, for a total of 9
- if a.shape != b.shape and a.shape[0:1] + a.shape[2:] == b.shape[0:1] + b.shape[2:]:
- if a.shape[1] == 4 and b.shape[1] == 9:
- raise RuntimeError("When merging inpainting model with a normal one, A must be the inpainting model.")
- if a.shape[1] == 4 and b.shape[1] == 8:
- raise RuntimeError("When merging instruct-pix2pix model with a normal one, A must be the instruct-pix2pix model.")
-
- if a.shape[1] == 8 and b.shape[1] == 4:#If we have an Instruct-Pix2Pix model...
- theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)#Merge only the vectors the models have in common. Otherwise we get an error due to dimension mismatch.
- result_is_instruct_pix2pix_model = True
- else:
- assert a.shape[1] == 9 and b.shape[1] == 4, f"Bad dimensions for merged layer {key}: A={a.shape}, B={b.shape}"
- theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)
- result_is_inpainting_model = True
- else:
- theta_0[key] = theta_func2(a, b, multiplier)
-
- theta_0[key] = to_half(theta_0[key], save_as_half)
-
- shared.state.sampling_step += 1
-
- del theta_1
-
- bake_in_vae_filename = sd_vae.vae_dict.get(bake_in_vae, None)
- if bake_in_vae_filename is not None:
- print(f"Baking in VAE from {bake_in_vae_filename}")
- shared.state.textinfo = 'Baking in VAE'
- vae_dict = sd_vae.load_vae_dict(bake_in_vae_filename, map_location='cpu')
-
- for key in vae_dict.keys():
- theta_0_key = 'first_stage_model.' + key
- if theta_0_key in theta_0:
- theta_0[theta_0_key] = to_half(vae_dict[key], save_as_half)
-
- del vae_dict
-
- if save_as_half and not theta_func2:
- for key in theta_0.keys():
- theta_0[key] = to_half(theta_0[key], save_as_half)
-
- if discard_weights:
- regex = re.compile(discard_weights)
- for key in list(theta_0):
- if re.search(regex, key):
- theta_0.pop(key, None)
-
- ckpt_dir = shared.cmd_opts.ckpt_dir or sd_models.model_path
-
- filename = filename_generator() if custom_name == '' else custom_name
- filename += ".inpainting" if result_is_inpainting_model else ""
- filename += ".instruct-pix2pix" if result_is_instruct_pix2pix_model else ""
- filename += "." + checkpoint_format
-
- output_modelname = os.path.join(ckpt_dir, filename)
-
- shared.state.nextjob()
- shared.state.textinfo = "Saving"
- print(f"Saving to {output_modelname}...")
-
- _, extension = os.path.splitext(output_modelname)
- if extension.lower() == ".safetensors":
- safetensors.torch.save_file(theta_0, output_modelname, metadata={"format": "pt"})
- else:
- torch.save(theta_0, output_modelname)
-
- sd_models.list_models()
-
- create_config(output_modelname, config_source, primary_model_info, secondary_model_info, tertiary_model_info)
-
- print(f"Checkpoint saved to {output_modelname}.")
- shared.state.textinfo = "Checkpoint saved"
- shared.state.end()
-
- return [*[gr.Dropdown.update(choices=sd_models.checkpoint_tiles()) for _ in range(4)], "Checkpoint saved to " + output_modelname]
diff --git a/spaces/apokalis/Apokalis/README.md b/spaces/apokalis/Apokalis/README.md
deleted file mode 100644
index 00634f356197a46ce9abfb83011f20e4a8d74e74..0000000000000000000000000000000000000000
--- a/spaces/apokalis/Apokalis/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Shiny for Python template
-emoji: 🌍
-colorFrom: yellow
-colorTo: indigo
-sdk: docker
-pinned: false
-license: openrail
----
-
-This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/).
-
-
-To get started with a new app, do the following:
-
-1) Install Shiny with `pip install shiny`
-2) Create a new app with `shiny create .`
-3) Then run the app with `shiny run --reload`
-
-To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html).
diff --git a/spaces/appl044/Chat-GPT-LangChain/README.md b/spaces/appl044/Chat-GPT-LangChain/README.md
deleted file mode 100644
index f3a4fd48d889dd9732f397f53552637a0818f390..0000000000000000000000000000000000000000
--- a/spaces/appl044/Chat-GPT-LangChain/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: GPT+WolframAlpha+Whisper
-emoji: 👀
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: JavaFXpert/Chat-GPT-LangChain
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/apsys/hetfit/module_name.md b/spaces/apsys/hetfit/module_name.md
deleted file mode 100644
index b90957d96651ccbe50c114fac80f1722c84f0072..0000000000000000000000000000000000000000
--- a/spaces/apsys/hetfit/module_name.md
+++ /dev/null
@@ -1,456 +0,0 @@
-# Table of Contents
-
-- [Table of Contents](#table-of-contents)
-- [main](#main)
-- [:orange\[PINN\]](#orangepinn)
- - [PINN.pinns](#pinnpinns)
- - [PINNd\_p Objects](#pinnd_p-objects)
- - [PINNhd\_ma Objects](#pinnhd_ma-objects)
- - [PINNT\_ma Objects](#pinnt_ma-objects)
-- [:orange\[utils\]](#orangeutils)
- - [utils.test](#utilstest)
- - [utils.dataset\_loader](#utilsdataset_loader)
- - [get\_dataset](#get_dataset)
- - [utils.ndgan](#utilsndgan)
- - [DCGAN Objects](#dcgan-objects)
- - [define\_discriminator](#define_discriminator)
- - [generate\_latent\_points](#generate_latent_points)
- - [define\_gan](#define_gan)
- - [summarize\_performance](#summarize_performance)
- - [train\_gan](#train_gan)
- - [utils.data\_augmentation](#utilsdata_augmentation)
- - [dataset Objects](#dataset-objects)
- - [\_\_init\_\_](#__init__)
-- [:orange\[nets\]](#orangenets)
- - [nets.envs](#netsenvs)
- - [SCI Objects](#sci-objects)
- - [data\_flow](#data_flow)
- - [init\_seed](#init_seed)
- - [compile](#compile)
- - [train](#train)
- - [inference](#inference)
- - [RCI Objects](#rci-objects)
- - [data\_flow](#data_flow-1)
- - [compile](#compile-1)
- - [nets.dense](#netsdense)
- - [Net Objects](#net-objects)
- - [\_\_init\_\_](#__init__-1)
- - [nets.design](#netsdesign)
- - [B\_field\_norm](#b_field_norm)
- - [nets.deep\_dense](#netsdeep_dense)
- - [dmodel Objects](#dmodel-objects)
- - [\_\_init\_\_](#__init__-2)
-
-
-
-# main
-
-
-
-# :orange[PINN]
-
-
-
-## PINN.pinns
-
-
-
-## PINNd\_p Objects
-
-```python
-class PINNd_p(nn.Module)
-```
-
-$d \mapsto P$
-
-
-
-## PINNhd\_ma Objects
-
-```python
-class PINNhd_ma(nn.Module)
-```
-
-$h,d \mapsto m_a $
-
-
-
-## PINNT\_ma Objects
-
-```python
-class PINNT_ma(nn.Module)
-```
-
-$ m_a, U \mapsto T$
-
-
-
----
-# :orange[utils]
-
-
-
-## utils.test
-
-
-
-## utils.dataset\_loader
-
-
-
-#### get\_dataset
-
-```python
-def get_dataset(raw: bool = False,
- sample_size: int = 1000,
- name: str = 'dataset.pkl',
- source: str = 'dataset.csv',
- boundary_conditions: list = None) -> _pickle
-```
-
-Gets augmented dataset
-
-**Arguments**:
-
-- `raw` _bool, optional_ - whether to use the raw source data instead of the augmented set. Defaults to False.
-- `sample_size` _int, optional_ - sample size. Defaults to 1000.
-- `name` _str, optional_ - name of wanted dataset. Defaults to 'dataset.pkl'.
-- `boundary_conditions` _list,optional_ - y1,y2,x1,x2.
-
-**Returns**:
-
-- `_pickle` - pickle buffer
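-
-A rough usage sketch (assuming the package layout matches the module names above; file names follow the defaults and are placeholders):
-
-```python
-from utils.dataset_loader import get_dataset
-
-# returns a pickle buffer of the (optionally augmented) dataset
-buffer = get_dataset(raw=False, sample_size=1000, name='dataset.pkl', source='dataset.csv')
-```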
-
-
-
-## utils.ndgan
-
-
-
-### DCGAN Objects
-
-```python
-class DCGAN()
-```
-
-
-
-#### define\_discriminator
-
-```python
-def define_discriminator(inputs=8)
-```
-
-function to return the compiled discriminator model
-
-
-
-#### generate\_latent\_points
-
-```python
-def generate_latent_points(latent_dim, n)
-```
-
-generate points in latent space as input for the generator
-
-
-
-#### define\_gan
-
-```python
-def define_gan(generator, discriminator)
-```
-
-define the combined generator and discriminator model
-
-
-
-#### summarize\_performance
-
-```python
-def summarize_performance(epoch, generator, discriminator, latent_dim, n=200)
-```
-
-evaluate the discriminator and plot real and fake samples
-
-
-
-#### train\_gan
-
-```python
-def train_gan(g_model,
- d_model,
- gan_model,
- latent_dim,
- num_epochs=2500,
- num_eval=2500,
- batch_size=2)
-```
-
-function to train gan model
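-
-A rough wiring sketch of the functions above (assumes a `DCGAN` instance constructible with defaults and a generator-defining method, which is not documented here; `latent_dim` and argument values are illustrative):
-
-```python
-gan = DCGAN()
-latent_dim = 8
-d_model = gan.define_discriminator(inputs=8)
-g_model = gan.define_generator(latent_dim)      # assumed to exist; not listed above
-gan_model = gan.define_gan(g_model, d_model)
-gan.train_gan(g_model, d_model, gan_model, latent_dim, num_epochs=2500)
-```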
-
-
-
-## utils.data\_augmentation
-
-
-
-## dataset Objects
-
-```python
-class dataset()
-```
-
-Creates dataset from input source
-
-
-
-#### \_\_init\_\_
-
-```python
-def __init__(number_samples: int,
- name: str,
- source: str,
- boundary_conditions: list = None)
-```
-
-Creates an augmented dataset from the given source file.
-
-**Arguments**:
-
-- `number_samples` _int_ - number of samples to generate
-- `name` _str_ - name of the resulting dataset file
-- `source` _str_ - path to the source data file
-- `boundary_conditions` _list_ - y1,y2,x1,x2
-
-
-
-# :orange[nets]
-
-
-
-## nets.envs
-
-
-
-### SCI Objects
-
-```python
-class SCI()
-```
-
-
-
-#### data\_flow
-
-```python
-def data_flow(columns_idx: tuple = (1, 3, 3, 5),
- idx: tuple = None,
- split_idx: int = 800) -> torch.utils.data.DataLoader
-```
-
-Data prep pipeline
-
-**Arguments**:
-
-- `columns_idx` _tuple, optional_ - Columns to be selected (sliced 1:2 3:4) for feature fitting. Defaults to (1,3,3,5).
-- `idx` _tuple, optional_ - 2|3 indexes to be selected for feature fitting. Defaults to None. Use either idx or columns_idx (for F:R->R idx, for F:R->R2 columns_idx)
- split_idx (int) : Index to split for training
-
-
-**Returns**:
-
-- `torch.utils.data.DataLoader` - Torch native dataloader
-
-
-
-#### init\_seed
-
-```python
-def init_seed(seed)
-```
-
-Initializes the random seed for torch (optional)
-
-
-
-#### compile
-
-```python
-def compile(columns: tuple = None,
- idx: tuple = None,
- optim: torch.optim = torch.optim.AdamW,
- loss: nn = nn.L1Loss,
- model: nn.Module = dmodel,
- custom: bool = False) -> None
-```
-
-Builds model, loss, optimizer. Has defaults
-
-**Arguments**:
-
-- `columns` _tuple, optional_ - Columns to be selected for feature fitting. Defaults to (1,3,3,5).
- optim - torch Optimizer
- loss - torch Loss function (nn)
-
-
-
-#### train
-
-```python
-def train(epochs: int = 10) -> None
-```
-
-Train the model.
-If the model is an sklearn instance, its .fit() method is used instead.
-
-
-
-#### inference
-
-```python
-def inference(X: tensor, model_name: str = None) -> np.ndarray
-```
-
-Inference of (pre-)trained model
-
-**Arguments**:
-
-- `X` _tensor_ - your data in domain of train
-
-
-**Returns**:
-
-- `np.ndarray` - predictions
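-
-A rough end-to-end sketch based on the methods above (assumes `SCI` can be constructed with defaults; `X` is a tensor in the training domain, and argument values are illustrative):
-
-```python
-env = SCI()
-env.data_flow(idx=(1, 3), split_idx=800)   # prepare the torch DataLoader
-env.compile(idx=(1, 3))                    # defaults: AdamW optimizer, L1 loss, dmodel
-env.train(epochs=10)
-preds = env.inference(X)                   # np.ndarray of predictions
-```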
-
-
-
-### RCI Objects
-
-```python
-class RCI(SCI)
-```
-
-
-
-#### data\_flow
-
-```python
-def data_flow(columns_idx: tuple = (1, 3, 3, 5),
- idx: tuple = None,
- split_idx: int = 800) -> torch.utils.data.DataLoader
-```
-
-Data prep pipeline
-
-**Arguments**:
-
-- `columns_idx` _tuple, optional_ - Columns to be selected (sliced 1:2 3:4) for feature fitting. Defaults to (1,3,3,5).
-- `idx` _tuple, optional_ - 2|3 indexes to be selected for feature fitting. Defaults to None. Use either idx or columns_idx (for F:R->R idx, for F:R->R2 columns_idx)
- split_idx (int) : Index to split for training
-
-
-**Returns**:
-
-- `torch.utils.data.DataLoader` - Torch native dataloader
-
-
-
-#### compile
-
-```python
-def compile(columns: tuple = None,
- idx: tuple = (3, 1),
- optim: torch.optim = torch.optim.AdamW,
- loss: nn = nn.L1Loss,
- model: nn.Module = PINNd_p,
- lr: float = 0.001) -> None
-```
-
-Builds model, loss, optimizer. Has defaults
-
-**Arguments**:
-
-- `columns` _tuple, optional_ - Columns to be selected for feature fitting. Defaults to None.
-- `idx` _tuple, optional_ - indexes to be selected Default (3,1)
- optim - torch Optimizer
- loss - torch Loss function (nn)
-
-
-
-## nets.dense
-
-
-
-### Net Objects
-
-```python
-class Net(nn.Module)
-```
-
-4-layer model with different activations and neuron counts per layer
-
-
-
-#### \_\_init\_\_
-
-```python
-def __init__(input_dim: int = 2, hidden_dim: int = 200)
-```
-
-Init
-
-**Arguments**:
-
-- `input_dim` _int, optional_ - Defaults to 2.
-- `hidden_dim` _int, optional_ - Defaults to 200.
-
-
-
-## nets.design
-
-
-
-#### B\_field\_norm
-
-```python
-def B_field_norm(Bmax, L, k=16, plot=True)
-```
-
-Returns vec B_z
-
-**Arguments**:
-
-- `Bmax` _any_ - maximum B in thruster
- k - magnetic field profile number
-
-
-
-## nets.deep\_dense
-
-
-
-### dmodel Objects
-
-```python
-class dmodel(nn.Module)
-```
-
-4-layer Torch model. ReLU activations; hidden layers share the same size.
-
-
-
-#### \_\_init\_\_
-
-```python
-def __init__(in_features=1, hidden_features=200, out_features=1)
-```
-
-Init
-
-**Arguments**:
-
-- `in_features` _int, optional_ - Input features. Defaults to 1.
-- `hidden_features` _int, optional_ - Hidden dims. Defaults to 200.
-- `out_features` _int, optional_ - Output dims. Defaults to 1.
-
diff --git a/spaces/arch-123/bingo/src/state/index.ts b/spaces/arch-123/bingo/src/state/index.ts
deleted file mode 100644
index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000
--- a/spaces/arch-123/bingo/src/state/index.ts
+++ /dev/null
@@ -1,118 +0,0 @@
-import { BingWebBot } from '@/lib/bots/bing'
-import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types'
-import { nanoid } from '@/lib/utils'
-import { atom } from 'jotai'
-import { atomWithImmer } from 'jotai-immer'
-import { atomWithStorage } from 'jotai/utils'
-import { atomFamily } from 'jotai/utils'
-import { atomWithHash, atomWithLocation } from 'jotai-location'
-
-const initialMessages: ChatMessageModel[] = [
- { author: 'system', text: 'conversation between user and robot', id: '1' },
- { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' },
- {
- author: 'bot', text: `
-您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点:
-
-- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。
- - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原;
- - 缺点:价格较高,噪音较大,需要定期清洁滤网。
-- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。
- - 优点:清洁性能强劲,操作方便,适用多种场景;
- - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。
-- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。
- - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换;
- - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。
-
-希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊
- `, id: '3' },
- { author: 'user', text: '今天的新闻', id: '4' },
- {
- author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息:
-
- # 中国新闻
- - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^]
- - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^]
- - 中央气象台7月16日18时发布台风橙色预警[^1^]
- - 贵州石阡:暑期旅游带动乡村振兴[^1^]
- - 激活大科学装置的“人才红利”[^1^]
- - 聚才用才留才 让希望的田野成为智慧的田野[^1^]
- - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^]
- - 成都以赛为媒提升城市美誉度[^1^]
- - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^]
- - 浙江建德:新安江上享清凉[^1^]
- - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^]
- - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^]
- - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^]
- - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^]
- - 大运来了丨成都迎大运 全民健身动起来[^1^]
- - 藏在高校里的宝藏博物馆[^1^]
- - 中国汽车工业用70年赢得三个“全球第一”[^1^]
- - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^]
-
- # 国际新闻
- - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^]
- - 国际航运业加快绿色转型[^2^]
- - 美企反对收紧对华芯片出口限制[^2^]
- - 欧洲加大气候科技领域投资[^2^]
- - 中企助力丹麦发展清洁能源[^2^]
- - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^]
- - 中国和阿尔及利亚共同构建新型国际关系典范[^2^]
-
- 以上信息仅供参考,具体详情请点击以下链接查看:
-
- [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/)
- [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' },
- { author: 'user', text: '写一个快排', id: '6' },
- { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' },
- {
- author: 'bot', text: "好的,我会尝试画一只猫。\n > ",
- id: '8'
- }
-]
-
-export const GreetMessages = [
- '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?',
- '重新开始总是很棒。问我任何问题!',
- '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?',
- '当然,我已准备好进行新的挑战。我现在可以为你做什么?',
- '很好,让我们来更改主题。你在想什么?',
- '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?',
- '好的,我准备好了!感谢重置。我们应该了解哪些内容?',
- '感谢刷新!你有新的话题吗?',
- '明白了,让我们重新开始。接下来应该讨论什么?',
- '下一步!我可以为你做什么?',
- '好的,我已准备好新话题。我们应该一起了解哪些内容?'
-]
-
-export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true })
-export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true })
-
-type Param = { botId: BotId; page: string }
-
-const createBotInstance = () => {
- return new BingWebBot({
- cookie: ' ',
- ua: ' ',
- })
-}
-
-export const chatFamily = atomFamily(
- (param: Param) => {
- return atomWithImmer({
- botId: param.botId,
- bot: createBotInstance(),
- messages: [] as ChatMessageModel[],
- generatingMessageId: '',
- abortController: undefined as AbortController | undefined,
- conversationId: nanoid(),
- })
- },
- (a, b) => a.botId === b.botId && a.page === b.page,
-)
-
-export const hashAtom = atomWithHash('dialog', '')
-
-export const locationAtom = atomWithLocation()
-
-export const voiceListenAtom = atom(false)
diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md
deleted file mode 100644
index 876e09e5b6e75298657f17a289860038cc87f122..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/main_classes/trainer_api.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Trainer API
-
-We made the trainer a separate project on https://github.com/coqui-ai/Trainer
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py
deleted file mode 100644
index a678e248e9ab2465738ea79f7f5c4bbc260c1919..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/JpegPresets.py
+++ /dev/null
@@ -1,240 +0,0 @@
-"""
-JPEG quality settings equivalent to the Photoshop settings.
-Can be used when saving JPEG files.
-
-The following presets are available by default:
-``web_low``, ``web_medium``, ``web_high``, ``web_very_high``, ``web_maximum``,
-``low``, ``medium``, ``high``, ``maximum``.
-More presets can be added to the :py:data:`presets` dict if needed.
-
-To apply the preset, specify::
-
- quality="preset_name"
-
-To apply only the quantization table::
-
- qtables="preset_name"
-
-To apply only the subsampling setting::
-
- subsampling="preset_name"
-
-Example::
-
- im.save("image_name.jpg", quality="web_high")
-
-Subsampling
------------
-
-Subsampling is the practice of encoding images by implementing less resolution
-for chroma information than for luma information.
-(ref.: https://en.wikipedia.org/wiki/Chroma_subsampling)
-
-Possible subsampling values are 0, 1 and 2 that correspond to 4:4:4, 4:2:2 and
-4:2:0.
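-
-For example, to force 4:2:0 chroma subsampling when saving (the file name and
-quality value are placeholders)::
-
-    im.save("out.jpg", quality=85, subsampling=2)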
-
-You can get the subsampling of a JPEG with the
-:func:`.JpegImagePlugin.get_sampling` function.
-
-In JPEG compressed data a JPEG marker is used instead of an EXIF tag.
-(ref.: https://exiv2.org/tags.html)
-
-
-Quantization tables
--------------------
-
-These are values used by the DCT (discrete cosine transform) to remove
-*unnecessary* information from the image (the lossy part of the compression).
-(ref.: https://en.wikipedia.org/wiki/Quantization_matrix#Quantization_matrices,
-https://en.wikipedia.org/wiki/JPEG#Quantization)
-
-You can get the quantization tables of a JPEG with::
-
- im.quantization
-
-This will return a dict with a number of lists. You can pass this dict
-directly as the qtables argument when saving a JPEG.
-
-The quantization table format in presets is a list with sublists. These formats
-are interchangeable.
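-
-For example, to reuse one image's tables when saving another (file names are
-placeholders)::
-
-    with Image.open("reference.jpg") as ref:
-        tables = ref.quantization
-    im.save("out.jpg", qtables=tables, subsampling=0)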
-
-Libjpeg ref.:
-https://web.archive.org/web/20120328125543/http://www.jpegcameras.com/libjpeg/libjpeg-3.html
-
-"""
-
-# fmt: off
-presets = {
- 'web_low': {'subsampling': 2, # "4:2:0"
- 'quantization': [
- [20, 16, 25, 39, 50, 46, 62, 68,
- 16, 18, 23, 38, 38, 53, 65, 68,
- 25, 23, 31, 38, 53, 65, 68, 68,
- 39, 38, 38, 53, 65, 68, 68, 68,
- 50, 38, 53, 65, 68, 68, 68, 68,
- 46, 53, 65, 68, 68, 68, 68, 68,
- 62, 65, 68, 68, 68, 68, 68, 68,
- 68, 68, 68, 68, 68, 68, 68, 68],
- [21, 25, 32, 38, 54, 68, 68, 68,
- 25, 28, 24, 38, 54, 68, 68, 68,
- 32, 24, 32, 43, 66, 68, 68, 68,
- 38, 38, 43, 53, 68, 68, 68, 68,
- 54, 54, 66, 68, 68, 68, 68, 68,
- 68, 68, 68, 68, 68, 68, 68, 68,
- 68, 68, 68, 68, 68, 68, 68, 68,
- 68, 68, 68, 68, 68, 68, 68, 68]
- ]},
- 'web_medium': {'subsampling': 2, # "4:2:0"
- 'quantization': [
- [16, 11, 11, 16, 23, 27, 31, 30,
- 11, 12, 12, 15, 20, 23, 23, 30,
- 11, 12, 13, 16, 23, 26, 35, 47,
- 16, 15, 16, 23, 26, 37, 47, 64,
- 23, 20, 23, 26, 39, 51, 64, 64,
- 27, 23, 26, 37, 51, 64, 64, 64,
- 31, 23, 35, 47, 64, 64, 64, 64,
- 30, 30, 47, 64, 64, 64, 64, 64],
- [17, 15, 17, 21, 20, 26, 38, 48,
- 15, 19, 18, 17, 20, 26, 35, 43,
- 17, 18, 20, 22, 26, 30, 46, 53,
- 21, 17, 22, 28, 30, 39, 53, 64,
- 20, 20, 26, 30, 39, 48, 64, 64,
- 26, 26, 30, 39, 48, 63, 64, 64,
- 38, 35, 46, 53, 64, 64, 64, 64,
- 48, 43, 53, 64, 64, 64, 64, 64]
- ]},
- 'web_high': {'subsampling': 0, # "4:4:4"
- 'quantization': [
- [6, 4, 4, 6, 9, 11, 12, 16,
- 4, 5, 5, 6, 8, 10, 12, 12,
- 4, 5, 5, 6, 10, 12, 14, 19,
- 6, 6, 6, 11, 12, 15, 19, 28,
- 9, 8, 10, 12, 16, 20, 27, 31,
- 11, 10, 12, 15, 20, 27, 31, 31,
- 12, 12, 14, 19, 27, 31, 31, 31,
- 16, 12, 19, 28, 31, 31, 31, 31],
- [7, 7, 13, 24, 26, 31, 31, 31,
- 7, 12, 16, 21, 31, 31, 31, 31,
- 13, 16, 17, 31, 31, 31, 31, 31,
- 24, 21, 31, 31, 31, 31, 31, 31,
- 26, 31, 31, 31, 31, 31, 31, 31,
- 31, 31, 31, 31, 31, 31, 31, 31,
- 31, 31, 31, 31, 31, 31, 31, 31,
- 31, 31, 31, 31, 31, 31, 31, 31]
- ]},
- 'web_very_high': {'subsampling': 0, # "4:4:4"
- 'quantization': [
- [2, 2, 2, 2, 3, 4, 5, 6,
- 2, 2, 2, 2, 3, 4, 5, 6,
- 2, 2, 2, 2, 4, 5, 7, 9,
- 2, 2, 2, 4, 5, 7, 9, 12,
- 3, 3, 4, 5, 8, 10, 12, 12,
- 4, 4, 5, 7, 10, 12, 12, 12,
- 5, 5, 7, 9, 12, 12, 12, 12,
- 6, 6, 9, 12, 12, 12, 12, 12],
- [3, 3, 5, 9, 13, 15, 15, 15,
- 3, 4, 6, 11, 14, 12, 12, 12,
- 5, 6, 9, 14, 12, 12, 12, 12,
- 9, 11, 14, 12, 12, 12, 12, 12,
- 13, 14, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12]
- ]},
- 'web_maximum': {'subsampling': 0, # "4:4:4"
- 'quantization': [
- [1, 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 1, 1, 1, 2,
- 1, 1, 1, 1, 1, 1, 2, 2,
- 1, 1, 1, 1, 1, 2, 2, 3,
- 1, 1, 1, 1, 2, 2, 3, 3,
- 1, 1, 1, 2, 2, 3, 3, 3,
- 1, 1, 2, 2, 3, 3, 3, 3],
- [1, 1, 1, 2, 2, 3, 3, 3,
- 1, 1, 1, 2, 3, 3, 3, 3,
- 1, 1, 1, 3, 3, 3, 3, 3,
- 2, 2, 3, 3, 3, 3, 3, 3,
- 2, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3]
- ]},
- 'low': {'subsampling': 2, # "4:2:0"
- 'quantization': [
- [18, 14, 14, 21, 30, 35, 34, 17,
- 14, 16, 16, 19, 26, 23, 12, 12,
- 14, 16, 17, 21, 23, 12, 12, 12,
- 21, 19, 21, 23, 12, 12, 12, 12,
- 30, 26, 23, 12, 12, 12, 12, 12,
- 35, 23, 12, 12, 12, 12, 12, 12,
- 34, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12],
- [20, 19, 22, 27, 20, 20, 17, 17,
- 19, 25, 23, 14, 14, 12, 12, 12,
- 22, 23, 14, 14, 12, 12, 12, 12,
- 27, 14, 14, 12, 12, 12, 12, 12,
- 20, 14, 12, 12, 12, 12, 12, 12,
- 20, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12]
- ]},
- 'medium': {'subsampling': 2, # "4:2:0"
- 'quantization': [
- [12, 8, 8, 12, 17, 21, 24, 17,
- 8, 9, 9, 11, 15, 19, 12, 12,
- 8, 9, 10, 12, 19, 12, 12, 12,
- 12, 11, 12, 21, 12, 12, 12, 12,
- 17, 15, 19, 12, 12, 12, 12, 12,
- 21, 19, 12, 12, 12, 12, 12, 12,
- 24, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12],
- [13, 11, 13, 16, 20, 20, 17, 17,
- 11, 14, 14, 14, 14, 12, 12, 12,
- 13, 14, 14, 14, 12, 12, 12, 12,
- 16, 14, 14, 12, 12, 12, 12, 12,
- 20, 14, 12, 12, 12, 12, 12, 12,
- 20, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12]
- ]},
- 'high': {'subsampling': 0, # "4:4:4"
- 'quantization': [
- [6, 4, 4, 6, 9, 11, 12, 16,
- 4, 5, 5, 6, 8, 10, 12, 12,
- 4, 5, 5, 6, 10, 12, 12, 12,
- 6, 6, 6, 11, 12, 12, 12, 12,
- 9, 8, 10, 12, 12, 12, 12, 12,
- 11, 10, 12, 12, 12, 12, 12, 12,
- 12, 12, 12, 12, 12, 12, 12, 12,
- 16, 12, 12, 12, 12, 12, 12, 12],
- [7, 7, 13, 24, 20, 20, 17, 17,
- 7, 12, 16, 14, 14, 12, 12, 12,
- 13, 16, 14, 14, 12, 12, 12, 12,
- 24, 14, 14, 12, 12, 12, 12, 12,
- 20, 14, 12, 12, 12, 12, 12, 12,
- 20, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12,
- 17, 12, 12, 12, 12, 12, 12, 12]
- ]},
- 'maximum': {'subsampling': 0, # "4:4:4"
- 'quantization': [
- [2, 2, 2, 2, 3, 4, 5, 6,
- 2, 2, 2, 2, 3, 4, 5, 6,
- 2, 2, 2, 2, 4, 5, 7, 9,
- 2, 2, 2, 4, 5, 7, 9, 12,
- 3, 3, 4, 5, 8, 10, 12, 12,
- 4, 4, 5, 7, 10, 12, 12, 12,
- 5, 5, 7, 9, 12, 12, 12, 12,
- 6, 6, 9, 12, 12, 12, 12, 12],
- [3, 3, 5, 9, 13, 15, 15, 15,
- 3, 4, 6, 10, 14, 12, 12, 12,
- 5, 6, 9, 14, 12, 12, 12, 12,
- 9, 10, 14, 12, 12, 12, 12, 12,
- 13, 14, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12,
- 15, 12, 12, 12, 12, 12, 12, 12]
- ]},
-}
-# fmt: on
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py
deleted file mode 100644
index fba36dfcf07c35ac21fd77f0b58837fd002a3e3a..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/data_cfg.py
+++ /dev/null
@@ -1,299 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-from argparse import Namespace
-from pathlib import Path
-from typing import Dict, Optional
-
-from fairseq.data import Dictionary
-
-
-def get_config_from_yaml(yaml_path: Path):
- try:
- import yaml
- except ImportError:
- print("Please install PyYAML: pip install PyYAML")
- config = {}
- if yaml_path.is_file():
- try:
- with open(yaml_path) as f:
- config = yaml.load(f, Loader=yaml.FullLoader)
- except Exception as e:
- raise Exception(f"Failed to load config from {yaml_path.as_posix()}: {e}")
- else:
- raise FileNotFoundError(f"{yaml_path.as_posix()} not found")
-
- return config
-
-
-class S2TDataConfig(object):
- """Wrapper class for data config YAML"""
-
- def __init__(self, yaml_path: Path):
- self.config = get_config_from_yaml(yaml_path)
- self.root = yaml_path.parent
-
- def _auto_convert_to_abs_path(self, x):
- if isinstance(x, str):
- if not Path(x).exists() and (self.root / x).exists():
- return (self.root / x).as_posix()
- elif isinstance(x, dict):
- return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()}
- return x
-
- @property
- def vocab_filename(self):
- """fairseq vocabulary file under data root"""
- return self.config.get("vocab_filename", "dict.txt")
-
- @property
- def speaker_set_filename(self):
- """speaker set file under data root"""
- return self.config.get("speaker_set_filename", None)
-
- @property
- def shuffle(self) -> bool:
- """Shuffle dataset samples before batching"""
- return self.config.get("shuffle", False)
-
- @property
- def pre_tokenizer(self) -> Dict:
- """Pre-tokenizer to apply before subword tokenization. Returning
- a dictionary with `tokenizer` providing the tokenizer name and
- the other items providing the tokenizer-specific arguments.
- Tokenizers are defined in `fairseq.data.encoders.*`"""
- tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None})
- return self._auto_convert_to_abs_path(tokenizer)
-
- @property
- def bpe_tokenizer(self) -> Dict:
- """Subword tokenizer to apply after pre-tokenization. Returning
- a dictionary with `bpe` providing the tokenizer name and
- the other items providing the tokenizer-specific arguments.
- Tokenizers are defined in `fairseq.data.encoders.*`"""
- tokenizer = self.config.get("bpe_tokenizer", {"bpe": None})
- return self._auto_convert_to_abs_path(tokenizer)
-
- @property
- def prepend_tgt_lang_tag(self) -> bool:
- """Prepend target lang ID token as the target BOS (e.g. for to-many
- multilingual setting). During inference, this requires `--prefix-size 1`
- to force BOS to be lang ID token."""
- return self.config.get("prepend_tgt_lang_tag", False)
-
- @property
- def prepend_bos_and_append_tgt_lang_tag(self) -> bool:
- """Prepend BOS and append target lang ID token to the target (e.g. mBART with language token pretraining)."""
- return self.config.get("prepend_bos_and_append_tgt_lang_tag", False)
-
- @property
- def input_feat_per_channel(self):
- """The dimension of input features (per audio channel)"""
- return self.config.get("input_feat_per_channel", 80)
-
- @property
- def input_channels(self):
- """The number of channels in the input audio"""
- return self.config.get("input_channels", 1)
-
- @property
- def sample_rate(self):
- return self.config.get("sample_rate", 16_000)
-
- @property
- def sampling_alpha(self):
- """Hyper-parameter alpha = 1/T for temperature-based resampling.
- (alpha = 1 for no resampling)"""
- return self.config.get("sampling_alpha", 1.0)
-
- @property
- def use_audio_input(self):
- """Needed by the dataset loader to see if the model requires
- raw audio as inputs."""
- return self.config.get("use_audio_input", False)
-
- def standardize_audio(self) -> bool:
- return self.use_audio_input and self.config.get("standardize_audio", False)
-
- @property
- def use_sample_rate(self):
- """Needed by the dataset loader to see if the model requires
- raw audio with specific sample rate as inputs."""
- return self.config.get("use_sample_rate", 16000)
-
- @property
- def audio_root(self):
- """Audio paths in the manifest TSV can be relative and this provides
- the root path. Set this to empty string when using absolute paths."""
- return self.config.get("audio_root", "")
-
- def get_feature_transforms(self, split, is_train):
- """Split-specific feature transforms. Allowing train set
- wildcard `_train`, evaluation set wildcard `_eval` and general
- wildcard `*` for matching."""
- from copy import deepcopy
-
- cfg = deepcopy(self.config)
- _cur = cfg.get("transforms", {})
- cur = _cur.get(split)
- cur = _cur.get("_train") if cur is None and is_train else cur
- cur = _cur.get("_eval") if cur is None and not is_train else cur
- cur = _cur.get("*") if cur is None else cur
- cfg["transforms"] = cur
- return cfg
-
- @property
- def global_cmvn_stats_npz(self) -> Optional[str]:
- path = self.config.get("global_cmvn", {}).get("stats_npz_path", None)
- return self._auto_convert_to_abs_path(path)
-
- @property
- def vocoder(self) -> Dict[str, str]:
- vocoder = self.config.get("vocoder", {"type": "griffin_lim"})
- return self._auto_convert_to_abs_path(vocoder)
-
- @property
- def hub(self) -> Dict[str, str]:
- return self.config.get("hub", {})
-
-
-class S2SDataConfig(S2TDataConfig):
- """Wrapper class for data config YAML"""
-
- @property
- def vocab_filename(self):
- """fairseq vocabulary file under data root"""
- return self.config.get("vocab_filename", None)
-
- @property
- def pre_tokenizer(self) -> Dict:
- return None
-
- @property
- def bpe_tokenizer(self) -> Dict:
- return None
-
- @property
- def input_transformed_channels(self):
- """The number of channels in the audio after feature transforms"""
- # TODO: move this into individual transforms
- _cur = self.config.get("transforms", {})
- cur = _cur.get("_train", [])
-
- _channels = self.input_channels
- if "delta_deltas" in cur:
- _channels *= 3
-
- return _channels
-
- @property
- def output_sample_rate(self):
- """The audio sample rate of output target speech"""
- return self.config.get("output_sample_rate", 22050)
-
- @property
- def target_speaker_embed(self):
- """Target speaker embedding file (one line per target audio sample)"""
- return self.config.get("target_speaker_embed", None)
-
- @property
- def prepend_tgt_lang_tag_as_bos(self) -> bool:
- """Prepend target lang ID token as the target BOS."""
- return self.config.get("prepend_tgt_lang_tag_as_bos", False)
-
-
-class MultitaskConfig(object):
- """Wrapper class for data config YAML"""
-
- def __init__(self, yaml_path: Path):
- config = get_config_from_yaml(yaml_path)
- self.config = {}
- for k, v in config.items():
- self.config[k] = SingleTaskConfig(k, v)
-
- def get_all_tasks(self):
- return self.config
-
- def get_single_task(self, name):
- assert name in self.config, f"multitask '{name}' does not exist!"
- return self.config[name]
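-
-# Illustrative multitask config YAML (one top-level entry per task; keys mirror
-# what SingleTaskConfig below reads; names and values are placeholders):
-#
-#   target_letter:
-#     decoder_type: transformer
-#     dict: dict.txt
-#     data: data/multitask
-#     encoder_layer: 6
-#     loss_weight: 1.0
-#     label_smoothing: 0.2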
-
-
-class SingleTaskConfig(object):
- def __init__(self, name, config):
- self.task_name = name
- self.config = config
- dict_path = config.get("dict", "")
- self.tgt_dict = Dictionary.load(dict_path) if Path(dict_path).exists() else None
-
- @property
- def data(self):
- return self.config.get("data", "")
-
- @property
- def decoder_type(self):
- return self.config.get("decoder_type", "transformer")
-
- @property
- def decoder_args(self):
- """Decoder arch related args"""
- args = self.config.get("decoder_args", {})
- return Namespace(**args)
-
- @property
- def criterion_cfg(self):
- """cfg for the multitask criterion"""
- if self.decoder_type == "ctc":
- from fairseq.criterions.ctc import CtcCriterionConfig
-
- cfg = CtcCriterionConfig
- cfg.zero_infinity = self.config.get("zero_infinity", True)
- else:
- from fairseq.criterions.label_smoothed_cross_entropy import (
- LabelSmoothedCrossEntropyCriterionConfig,
- )
-
- cfg = LabelSmoothedCrossEntropyCriterionConfig
- cfg.label_smoothing = self.config.get("label_smoothing", 0.2)
- return cfg
-
- @property
- def input_from(self):
- """Condition on encoder/decoder of the main model"""
- return "decoder" if "decoder_layer" in self.config else "encoder"
-
- @property
- def input_layer(self):
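-        # Illustrative mapping (config values assumed): "decoder_layer: 4" conditions the
-        # task on decoder layer index 3 (0-based); with neither "decoder_layer" nor
-        # "encoder_layer" set, the default is -1, i.e. the last encoder layer's output.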
- if self.input_from == "decoder":
- return self.config["decoder_layer"] - 1
- else:
- # default using the output from the last encoder layer (-1)
- return self.config.get("encoder_layer", 0) - 1
-
- @property
- def loss_weight_schedule(self):
- return (
- "decay"
- if "loss_weight_max" in self.config
- and "loss_weight_decay_steps" in self.config
- else "fixed"
- )
-
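-    # Illustrative decay schedule (values assumed, not defaults): with loss_weight_max=1.0,
-    # loss_weight_min=0.0001 and loss_weight_decay_steps=10000, the per-update step is
-    # (1.0 - 0.0001) / 10000 ≈ 1e-4, so get_loss_weight(n) ≈ 1.0 - 1e-4 * n until it
-    # clamps at loss_weight_min.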
- def get_loss_weight(self, num_updates):
- if self.loss_weight_schedule == "fixed":
- weight = self.config.get("loss_weight", 1.0)
- else: # "decay"
- assert (
- self.config.get("loss_weight_decay_steps", 0) > 0
- ), "loss_weight_decay_steps must be greater than 0 for a decay schedule"
- loss_weight_min = self.config.get("loss_weight_min", 0.0001)
- loss_weight_decay_stepsize = (
- self.config["loss_weight_max"] - loss_weight_min
- ) / self.config["loss_weight_decay_steps"]
- weight = max(
- self.config["loss_weight_max"]
- - loss_weight_decay_stepsize * num_updates,
- loss_weight_min,
- )
- return weight
diff --git a/spaces/autumn8/selectModel/app.py b/spaces/autumn8/selectModel/app.py
deleted file mode 100644
index 44992b942c7dc83bf066621d72a1e064625a37a3..0000000000000000000000000000000000000000
--- a/spaces/autumn8/selectModel/app.py
+++ /dev/null
@@ -1,760 +0,0 @@
-from transformers import TextClassificationPipeline
-from transformers import AutoTokenizer
-from transformers import pipeline
-import evaluate
-import gradio as gr
-import torch
-import random
-from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
-from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
-from datasets import load_metric
-from sklearn.model_selection import train_test_split
-import pandas as pd
-import numpy as np
-import streamlit as st
-from textblob import TextBlob
-from streamlit_extras.switch_page_button import switch_page
-from transformers import YolosImageProcessor, YolosForObjectDetection
-from PIL import Image
-import requests
-import torchvision
-from torchvision.io import read_image
-from torchvision.utils import draw_bounding_boxes
-from transformers import DetrImageProcessor, DetrForObjectDetection
-from transformers import PegasusForConditionalGeneration, PegasusTokenizer
-
-
-st.set_page_config(layout="wide")
-def get_models(prompt):
- #prompt = input("Enter your AI task idea:")
- response = pipe(prompt)
- print("AI Model Idea: ", prompt,"\n")
-
- x = pd.json_normalize(response[0])
- # x.nlargest(3,['score'])["label"].values
- knowledge_base_tasks = ['depth-estimation', 'image-classification', 'image-segmentation',
- 'image-to-image', 'object-detection', 'video-classification',
- 'unconditional-image-generation', 'zero-shot-image-classification',
- 'conversational', 'fill-mask', 'question-answering',
- 'sentence-similarity', 'summarization', 'table-question-answering',
- 'text-classification', 'text-generation', 'token-classification',
- 'translation', 'zero-shot-classification']
-
- temp = []
- for label_code in x.nlargest(3,['score'])["label"].values:
- temp.append(label_code[6:])
- # temp
-
- cat_to_model = {}
- top_cats = []
-
- for i in range(len(temp)):
- print("Possible Category ",i+1," : ",knowledge_base_tasks[int(temp[i])])
- print("Top three models for this category are:",models_list[models_list["pipeline_tag"] == knowledge_base_tasks[int(temp[i])]].nlargest(3,"downloads")["modelId"].values)
- cat_to_model[knowledge_base_tasks[int(temp[i])]] = models_list[models_list["pipeline_tag"] == knowledge_base_tasks[int(temp[i])]].nlargest(3,"downloads")["modelId"].values
- top_cats.append(knowledge_base_tasks[int(temp[i])])
- # models_list[models_list["pipeline_tag"] == "image-classification"].nlargest(3,"downloads")["modelId"].values
- print()
- print("Returning category-models dictionary..")
- return top_cats,cat_to_model
-
-
-
-def get_top_3(top_cat):
-
- top_3_df = pd.read_csv("./Top_3_models.csv")
- top_3 = []
- for i in range(top_3_df.shape[0]):
- if top_3_df["Category"].iloc[i].lower() == top_cat:
- top_3.append(top_3_df["Model_1"].iloc[i])
- top_3.append(top_3_df["Model_2"].iloc[i])
- top_3.append(top_3_df["Model_3"].iloc[i])
- break
- return top_3
-
-
-
-
-
-def get_top_3_a(prompt,pipe):
- response = pipe(prompt)
- x = pd.json_normalize(response[0])
- temp = []
- for label_code in x.nlargest(3,['score'])["label"].values:
- temp.append(label_code[6:])
- knowledge_base_tasks = ['depth-estimation', 'image-classification', 'image-segmentation',
- 'image-to-image', 'object-detection', 'video-classification',
- 'unconditional-image-generation', 'zero-shot-image-classification',
- 'conversational', 'fill-mask', 'question-answering',
- 'sentence-similarity', 'summarization', 'table-question-answering',
- 'text-classification', 'text-generation', 'token-classification',
- 'translation', 'zero-shot-classification']
-
- top_cat = knowledge_base_tasks[int(temp[0])]
-
-
- top_3_df = pd.read_csv("./Top_3_models.csv")
- top_3 = []
- for i in range(top_3_df.shape[0]):
- if top_3_df["Category"].iloc[i] == top_cat:
- top_3.append(top_3_df["Model_1"].iloc[i])
- top_3.append(top_3_df["Model_2"].iloc[i])
- top_3.append(top_3_df["Model_3"].iloc[i])
- break
- return top_cat,top_3
-
-
-
-
-def get_response(input_text,model_name):
- torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
- tokenizer = PegasusTokenizer.from_pretrained(model_name)
- model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
- batch = tokenizer([input_text],truncation=True,padding='longest',max_length=1024, return_tensors="pt").to(torch_device)
- gen_out = model.generate(**batch,max_length=128,num_beams=5, num_return_sequences=1, temperature=1.5)
- output_text = tokenizer.batch_decode(gen_out, skip_special_tokens=True)
- return output_text
-
-
-def summarizer (models, data):
- model_Eval = {}
- for i in range (len(models)):
- # print(models[i])
- if models[i] == 'tuner007/pegasus_summarizer':
- model_name = 'tuner007/pegasus_summarizer'
-
- result = get_response(data,model_name)
- rouge = evaluate.load('rouge')
- # print("345",rouge.compute(predictions=[result],references=[data]))
- print(type(result), type([data]))
- quality = rouge.compute(predictions=[result[0]],references=[data])
-            model_Eval[models[i]] = {"Score":quality,"Result": result[0]}
- else:
- summarizer_model = pipeline("summarization", model = models[i])
- print(models[i], summarizer_model(data))
- try:
- result = summarizer_model(data)[0]["summary_text"]
- rouge = evaluate.load('rouge')
- # print("345",rouge.compute(predictions=[result],references=[data]))
- quality = rouge.compute(predictions=[result],references=[data])
- model_Eval[models[i]] = {"Score":quality,"Result": result}
- except:
- print("Model {} has issues.".format(models[i]))
-
- return model_Eval
-
-
-
-
-def best_model (analysis, data):
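-    # Picks the entry with the highest "rougeLsum" score. Illustrative shape of the
-    # `analysis` dict built by summarizer() above (numeric values made up):
-    #   {"model-id": {"Score": {"rouge1": 0.41, ..., "rougeLsum": 0.38}, "Result": "summary text"}}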
- best_model_score = 0
- best_model_name = ""
- best_model_result = ""
- temp2 = 0
- for model in analysis.keys():
- temp1 = analysis[model]["Score"]["rougeLsum"]
- if temp1 > temp2:
- temp2 = analysis[model]["Score"]["rougeLsum"]
- best_model_score = analysis[model]["Score"]
- best_model_name = model
- best_model_result = analysis[model]["Result"]
-
- return best_model_name, best_model_score,data[:50],best_model_result.replace("\n","")
-
-
-
-def text_summarization():
- top_models = get_top_3("summarization")
-# st.write("Upload your file: ")
-# uploaded_files = ""
-# uploaded_files = st.file_uploader("Choose your file", accept_multiple_files=True)
-
-
-
-
- option = st.selectbox(
-        'What text would you like AI to summarize for you now?',
- ("Choose text files below:",'How to Win friends - Text', 'The Age of Intelligent Machines', 'The Singularity is Near - Ray Kurzweil.txt')) #add 2 other options of files here
-
- if option == 'How to Win friends - Text' or option == 'The Age of Intelligent Machines' or option == 'The Singularity is Near - Ray Kurzweil.txt':### update book text files here
- st.write('You selected:', option)
-
- if option == 'How to Win friends - Text': # add text
- name = "How_to_win_friends.txt"
- st.write("Selected file for analyis is: How_to_win_friends.txt")
- st.markdown(f'
{"Thank you for your patience. AI is generating 3 outputs to compare"}
', unsafe_allow_html=True)
-
- if option == 'The Age of Intelligent Machines':
- name = "The Age of Intelligent Machines.txt"
- st.write("Selected file for analyis is: The Age of Intelligent Machines.txt")
- st.markdown(f'
{"Thank you for your patience. AI is generating 3 outputs to compare"}
', unsafe_allow_html=True)
-
- if option == "The Singularity is Near - Ray Kurzweil.txt":
- name = "The Singularity is Near - Ray Kurzweil.txt"
- st.write("The Singularity is Near - Ray Kurzweil.txt")
-            st.markdown(f'{"Thank you for your patience. AI is generating 3 outputs to compare"}', unsafe_allow_html=True)
-
- if st.button("Accept"):
- global file_data
-# st.write("filename:", uploaded_files)
-# for uploaded_file in uploaded_files:
-# # print("here")
-# file_data = open(uploaded_file.name,encoding="utf8").read()
-# st.write("filename:", uploaded_file.name)
-# # st.write(file_data[:500])
-# # print("before summarizer")
-# print(file_data[:50])
- file_data = open(name,encoding="utf8").read()
-
- analysis = summarizer(models = top_models, data = file_data[:500])
-
- x,c,v,b = best_model(analysis,file_data[:500])
-# st.write("Best model for Task: ",z)
-
-            st.markdown(f'{"Best Model with Summarization Results"}', unsafe_allow_html=True)
- st.write("\nBest model name: ",x)
-# st.write("\nBest model Score: ",c)
-
- st.write("Best Model Rouge Scores: ")
- st.write("Rouge 1 Score: ",c["rouge1"])
- st.write("Rouge 2 Score: ",c["rouge2"])
- st.write("Rouge L Score: ",c["rougeL"])
- st.write("Rouge LSum Score: ",c["rougeLsum"])
-
- st.write("\nOriginal Data first 50 characters: ", v)
- st.write("\nBest Model Result: ",b)
-
-
-# print("between summarizer analysis")
-            st.markdown(f'', unsafe_allow_html=True)
-# st.write("Summarization Results for Model 3")
- st.write("Model name: sshleifer/distilbart-cnn-12-6")
- st.write("Rouge Scores: ")
- st.write("Rouge 1 Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rouge1"])
- st.write("Rouge 2 Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rouge2"])
- st.write("Rouge L Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rougeL"])
- st.write("Rouge LSum Score: ",analysis["sshleifer/distilbart-cnn-12-6"]["Score"]["rougeLsum"])
-
- st.write("Result: ", analysis["sshleifer/distilbart-cnn-12-6"]["Result"])
-
-
-
-
-#OBJECT DETECTION
-
-def yolo_tiny(name):
- image = read_image(name)
-
- model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
- image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
-
- inputs = image_processor(images=image, return_tensors="pt")
- outputs = model(**inputs)
-
- # model predicts bounding boxes and corresponding COCO classes
- logits = outputs.logits
- bboxes = outputs.pred_boxes
-
-
- # print results
- target_sizes = torch.tensor([image.shape[::-1][:2]])
-
- results = image_processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
-
- label_ = []
- bboxes = []
-
- for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
- box = [round(i, 2) for i in box.tolist()]
- print(
- f"Detected {model.config.id2label[label.item()]} with confidence "
- f"{round(score.item(), 3)} at location {box}"
- )
-
- label_.append(model.config.id2label[label.item()])
- bboxes.append(np.asarray(box,dtype="int"))
-    bboxes = torch.tensor(bboxes, dtype=torch.int)
-
- img=draw_bounding_boxes(image, bboxes,labels = label_, width=3)
- img = torchvision.transforms.ToPILImage()(img)
- return img
-# img.show()
-
-
-
-def resnet_101(name):
- image = read_image(name)
- processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-101")
- model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-101")
-
- inputs = processor(images=image, return_tensors="pt")
- outputs = model(**inputs)
-
- # convert outputs (bounding boxes and class logits) to COCO API
- # let's only keep detections with score > 0.9
- target_sizes = torch.tensor([image.shape[::-1][:2]])
- results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
- label_ = []
- bboxes = []
- for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
- box = [round(i, 2) for i in box.tolist()]
- print(
- f"Detected {model.config.id2label[label.item()]} with confidence "
- f"{round(score.item(), 3)} at location {box}")
- label_.append(model.config.id2label[label.item()])
- bboxes.append(np.asarray(box,dtype="int"))
-
-
-    bboxes = torch.tensor(bboxes, dtype=torch.int)
-
- img=draw_bounding_boxes(image, bboxes,labels = label_, width=3)
- img = torchvision.transforms.ToPILImage()(img)
- return img
-
-
-
-
-
-def resnet_50(name):
- image = read_image(name)
- processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
- model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
-
- inputs = processor(images=image, return_tensors="pt")
- outputs = model(**inputs)
-
- # convert outputs (bounding boxes and class logits) to COCO API
- # let's only keep detections with score > 0.9
- target_sizes = torch.tensor([image.shape[::-1][:2]])
- results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
- label_ = []
- bboxes = []
- for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
- box = [round(i, 2) for i in box.tolist()]
- print(
- f"Detected {model.config.id2label[label.item()]} with confidence "
- f"{round(score.item(), 3)} at location {box}"
- )
- label_.append(model.config.id2label[label.item()])
- bboxes.append(np.asarray(box,dtype="int"))
-
-    bboxes = torch.tensor(bboxes, dtype=torch.int)
-
- img=draw_bounding_boxes(image, bboxes,labels = label_, width=3)
- img = torchvision.transforms.ToPILImage()(img)
- return img
-
-
-
-def object_detection():
-# st.write("Upload your image: ")
-# uploaded_files = ""
-# uploaded_files = st.file_uploader("Choose a image file", accept_multiple_files=True)
-
- option = st.selectbox(
-        'What image do you want for analysis?',
- ("Choose an image for object detection analysis from the options below:",'Cat and Dog', '2 lazy cats chilling on a couch', 'An astronaut riding wild horse'))
-
- if option == 'Cat and Dog' or option == '2 lazy cats chilling on a couch' or option == 'An astronaut riding wild horse':
- st.write('You selected:', option)
-        st.markdown(f'{"Thank you for your patience. AI is generating 3 outputs to compare"}', unsafe_allow_html=True)
-
- if option == 'Cat and Dog':
- name = "cat_dog.jpg"
- st.image("cat_dog.jpg")
-            st.markdown(f'{"Thank you for your patience. AI is generating 3 outputs to compare"}', unsafe_allow_html=True)
-
- if option == '2 lazy cats chilling on a couch':
- name = "cat_remote.jpg"
- st.image("cat_remote.jpg")
-            st.markdown(f'{"Thank you for your patience. AI is generating 3 outputs to compare"}', unsafe_allow_html=True)
-
- if option == 'An astronaut riding wild horse':
- name = "astronaut_rides_horse.png"
- st.image("astronaut_rides_horse.png")
-            st.markdown(f'{"Thank you for your patience. AI is generating 3 outputs to compare"}', unsafe_allow_html=True)
-
- if st.button("Accept"):
- # global file_data
-# st.write("filename:", uploaded_files)
-# for uploaded_file in uploaded_files:
- # print("here")
- # file_data = open(uploaded_file.name).read()
- st.write("filename:", name)
-# name = uploaded_file.name
- st.image([yolo_tiny(name),resnet_101(name),resnet_50(name)],caption=["hustvl/yolos-tiny","facebook/detr-resnet-101","facebook/detr-resnet-50"])
-
-
-def task_categorization_model_predictions():
- st.image("./examples.png")
-
- # st.title("Text Analysis App")
-
- data = ""
-
- classifier = pipeline("zero-shot-classification",model="facebook/bart-large-mnli")
-
- global check
-
-    st.markdown(f'{"Write down below the description of your AI application in a few sentences:"}', unsafe_allow_html=True)
-
- prompt = st.text_input(" ")
-
- st.write("")
- st.write("")
-
- if prompt != "":
- # sbert_saved_model = torch.load("Sbert_saved_model", map_location=torch.device('cpu')).to("cpu")
- # model = sbert_saved_model.to("cpu")
- # tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
- # pipe = TextClassificationPipeline(model= model, tokenizer=tokenizer, return_all_scores=True)
- # # outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]]
-
- # # prompt = ["What is the the best ai for putting text report into data table?","How can I generate car sales agreement with ai model?","AI model to detect burglar on 48 hours of cctv video footage","I need Ai model help me with rewriting 50 financial statements emails into one summary report ?","I need a model for extracting person from an image"]
- # # responses = pipe(prompt)
-
-
- # models_list = pd.read_csv("models.csv")
- # # st.write(get_top_3(prompt))
-
- # top_cat, top_models = get_top_3(prompt)
- # # prompt = input("Enter your AI task idea:")
- # # top_cats,cat_to_models = get_models(prompt)
-
- # # top_models = cat_to_models[top_cats[0]]
-
- # top_cat = " " + top_cat[0].upper() + top_cat[1:]
-
-
-
-        st.markdown(f'', unsafe_allow_html=True)
- st.image("./buttons1.png")
-
-
-
-
-
-
-
-def model_selector_sbert():
- # st.title("Text Analysis App")
-
- data = ""
-
- st.title("Foundation Model Recommender")
-
- st.write("""Enter a brief description of your task, and this app will recommend an AI model for you!""")
-
- st.image("./examples.png")
-    # st.markdown(f'{"Please, describe your AI application below:"}', unsafe_allow_html=True)
-
- prompt = st.text_area("Describe your task:")
-
- st.write("")
- st.write("")
-
- if st.button("Recommend Model"):
- if prompt != "":
- sbert_saved_model = torch.load("Sbert_saved_model", map_location=torch.device('cpu')).to("cpu")
- model = sbert_saved_model.to("cpu")
- tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-mpnet-base-v2")
- pipe = TextClassificationPipeline(model= model, tokenizer=tokenizer, return_all_scores=True)
- # outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]]
-
- # prompt = ["What is the the best ai for putting text report into data table?","How can I generate car sales agreement with ai model?","AI model to detect burglar on 48 hours of cctv video footage","I need Ai model help me with rewriting 50 financial statements emails into one summary report ?","I need a model for extracting person from an image"]
- # responses = pipe(prompt)
-
-
- models_list = pd.read_csv("models.csv")
- # st.write(get_top_3(prompt))
-
- top_cat, top_models = get_top_3_a(prompt,pipe)
- # prompt = input("Enter your AI task idea:")
- # top_cats,cat_to_models = get_models(prompt)
-
- # top_models = cat_to_models[top_cats[0]]
-
- top_cat = " " + top_cat[0].upper() + top_cat[1:]
- st.markdown(f'
-
-
\ No newline at end of file
diff --git a/spaces/blaziant/ysda_nlp_ops/templates/base.html b/spaces/blaziant/ysda_nlp_ops/templates/base.html
deleted file mode 100644
index cd1848a94cd12ffa1f9e9aff1b32cf8c50b29da5..0000000000000000000000000000000000000000
--- a/spaces/blaziant/ysda_nlp_ops/templates/base.html
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
-
- dev_ops laba 5
-
-
-
-
-
-{% block body %}
-{% endblock %}
-
-
-
-
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py
deleted file mode 100644
index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/test_time_augmentation.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import numpy as np
-from contextlib import contextmanager
-from itertools import count
-from typing import List
-import torch
-from fvcore.transforms import HFlipTransform, NoOpTransform
-from torch import nn
-from torch.nn.parallel import DistributedDataParallel
-
-from detectron2.config import configurable
-from detectron2.data.detection_utils import read_image
-from detectron2.data.transforms import (
- RandomFlip,
- ResizeShortestEdge,
- ResizeTransform,
- apply_augmentations,
-)
-from detectron2.structures import Boxes, Instances
-
-from .meta_arch import GeneralizedRCNN
-from .postprocessing import detector_postprocess
-from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image
-
-__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"]
-
-
-class DatasetMapperTTA:
- """
- Implement test-time augmentation for detection data.
- It is a callable which takes a dataset dict from a detection dataset,
- and returns a list of dataset dicts where the images
- are augmented from the input image by the transformations defined in the config.
- This is used for test-time augmentation.
- """
-
- @configurable
- def __init__(self, min_sizes: List[int], max_size: int, flip: bool):
- """
- Args:
- min_sizes: list of short-edge size to resize the image to
- max_size: maximum height or width of resized images
- flip: whether to apply flipping augmentation
- """
- self.min_sizes = min_sizes
- self.max_size = max_size
- self.flip = flip
-
- @classmethod
- def from_config(cls, cfg):
- return {
- "min_sizes": cfg.TEST.AUG.MIN_SIZES,
- "max_size": cfg.TEST.AUG.MAX_SIZE,
- "flip": cfg.TEST.AUG.FLIP,
- }
-
- def __call__(self, dataset_dict):
- """
- Args:
-            dataset_dict: a dict in standard model input format. See tutorials for details.
-
- Returns:
- list[dict]:
- a list of dicts, which contain augmented version of the input image.
- The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``.
- Each dict has field "transforms" which is a TransformList,
- containing the transforms that are used to generate this image.
- """
- numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy()
- shape = numpy_image.shape
- orig_shape = (dataset_dict["height"], dataset_dict["width"])
- if shape[:2] != orig_shape:
- # It transforms the "original" image in the dataset to the input image
- pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1])
- else:
- pre_tfm = NoOpTransform()
-
- # Create all combinations of augmentations to use
- aug_candidates = [] # each element is a list[Augmentation]
- for min_size in self.min_sizes:
- resize = ResizeShortestEdge(min_size, self.max_size)
- aug_candidates.append([resize]) # resize only
- if self.flip:
- flip = RandomFlip(prob=1.0)
- aug_candidates.append([resize, flip]) # resize + flip
-
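-        # Illustrative count (settings assumed): min_sizes=[400, 500, 600] with flip=True
-        # yields len(min_sizes) * 2 = 6 augmented copies of the input dict.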
- # Apply all the augmentations
- ret = []
- for aug in aug_candidates:
- new_image, tfms = apply_augmentations(aug, np.copy(numpy_image))
- torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1)))
-
- dic = copy.deepcopy(dataset_dict)
- dic["transforms"] = pre_tfm + tfms
- dic["image"] = torch_image
- ret.append(dic)
- return ret
-
-
-class GeneralizedRCNNWithTTA(nn.Module):
- """
- A GeneralizedRCNN with test-time augmentation enabled.
- Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`.
- """
-
- def __init__(self, cfg, model, tta_mapper=None, batch_size=3):
- """
- Args:
- cfg (CfgNode):
- model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on.
- tta_mapper (callable): takes a dataset dict and returns a list of
- augmented versions of the dataset dict. Defaults to
- `DatasetMapperTTA(cfg)`.
- batch_size (int): batch the augmented images into this batch size for inference.
- """
- super().__init__()
- if isinstance(model, DistributedDataParallel):
- model = model.module
- assert isinstance(
- model, GeneralizedRCNN
- ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model))
- self.cfg = cfg.clone()
- assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet"
- assert (
- not self.cfg.MODEL.LOAD_PROPOSALS
- ), "TTA for pre-computed proposals is not supported yet"
-
- self.model = model
-
- if tta_mapper is None:
- tta_mapper = DatasetMapperTTA(cfg)
- self.tta_mapper = tta_mapper
- self.batch_size = batch_size
-
- @contextmanager
- def _turn_off_roi_heads(self, attrs):
- """
- Open a context where some heads in `model.roi_heads` are temporarily turned off.
- Args:
-            attrs (list[str]): the attributes in `model.roi_heads` which can be used
- to turn off a specific head, e.g., "mask_on", "keypoint_on".
- """
- roi_heads = self.model.roi_heads
- old = {}
- for attr in attrs:
- try:
- old[attr] = getattr(roi_heads, attr)
- except AttributeError:
- # The head may not be implemented in certain ROIHeads
- pass
-
- if len(old.keys()) == 0:
- yield
- else:
- for attr in old.keys():
- setattr(roi_heads, attr, False)
- yield
- for attr in old.keys():
- setattr(roi_heads, attr, old[attr])
-
- def _batch_inference(self, batched_inputs, detected_instances=None):
- """
- Execute inference on a list of inputs,
- using batch size = self.batch_size, instead of the length of the list.
-
- Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference`
- """
- if detected_instances is None:
- detected_instances = [None] * len(batched_inputs)
-
- outputs = []
- inputs, instances = [], []
- for idx, input, instance in zip(count(), batched_inputs, detected_instances):
- inputs.append(input)
- instances.append(instance)
- if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1:
- outputs.extend(
- self.model.inference(
- inputs,
- instances if instances[0] is not None else None,
- do_postprocess=False,
- )
- )
- inputs, instances = [], []
- return outputs
-
- def __call__(self, batched_inputs):
- """
- Same input/output format as :meth:`GeneralizedRCNN.forward`
- """
-
- def _maybe_read_image(dataset_dict):
- ret = copy.copy(dataset_dict)
- if "image" not in ret:
- image = read_image(ret.pop("file_name"), self.model.input_format)
- image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW
- ret["image"] = image
- if "height" not in ret and "width" not in ret:
- ret["height"] = image.shape[1]
- ret["width"] = image.shape[2]
- return ret
-
- return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs]
-
- def _inference_one_image(self, input):
- """
- Args:
- input (dict): one dataset dict with "image" field being a CHW tensor
-
- Returns:
- dict: one output dict
- """
- orig_shape = (input["height"], input["width"])
- augmented_inputs, tfms = self._get_augmented_inputs(input)
- # Detect boxes from all augmented versions
- with self._turn_off_roi_heads(["mask_on", "keypoint_on"]):
- # temporarily disable roi heads
- all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms)
- # merge all detected boxes to obtain final predictions for boxes
- merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape)
-
- if self.cfg.MODEL.MASK_ON:
- # Use the detected boxes to obtain masks
- augmented_instances = self._rescale_detected_boxes(
- augmented_inputs, merged_instances, tfms
- )
- # run forward on the detected boxes
- outputs = self._batch_inference(augmented_inputs, augmented_instances)
- # Delete now useless variables to avoid being out of memory
- del augmented_inputs, augmented_instances
- # average the predictions
- merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms)
- merged_instances = detector_postprocess(merged_instances, *orig_shape)
- return {"instances": merged_instances}
- else:
- return {"instances": merged_instances}
-
- def _get_augmented_inputs(self, input):
- augmented_inputs = self.tta_mapper(input)
- tfms = [x.pop("transforms") for x in augmented_inputs]
- return augmented_inputs, tfms
-
- def _get_augmented_boxes(self, augmented_inputs, tfms):
- # 1: forward with all augmented images
- outputs = self._batch_inference(augmented_inputs)
- # 2: union the results
- all_boxes = []
- all_scores = []
- all_classes = []
- for output, tfm in zip(outputs, tfms):
- # Need to inverse the transforms on boxes, to obtain results on original image
- pred_boxes = output.pred_boxes.tensor
- original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy())
- all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device))
-
- all_scores.extend(output.scores)
- all_classes.extend(output.pred_classes)
- all_boxes = torch.cat(all_boxes, dim=0)
- return all_boxes, all_scores, all_classes
-
- def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw):
- # select from the union of all results
- num_boxes = len(all_boxes)
- num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES
- # +1 because fast_rcnn_inference expects background scores as well
- all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device)
- for idx, cls, score in zip(count(), all_classes, all_scores):
- all_scores_2d[idx, cls] = score
-
- merged_instances, _ = fast_rcnn_inference_single_image(
- all_boxes,
- all_scores_2d,
- shape_hw,
- 1e-8,
- self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
- self.cfg.TEST.DETECTIONS_PER_IMAGE,
- )
-
- return merged_instances
-
- def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms):
- augmented_instances = []
- for input, tfm in zip(augmented_inputs, tfms):
- # Transform the target box to the augmented image's coordinate space
- pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy()
- pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes))
-
- aug_instances = Instances(
- image_size=input["image"].shape[1:3],
- pred_boxes=Boxes(pred_boxes),
- pred_classes=merged_instances.pred_classes,
- scores=merged_instances.scores,
- )
- augmented_instances.append(aug_instances)
- return augmented_instances
-
- def _reduce_pred_masks(self, outputs, tfms):
- # Should apply inverse transforms on masks.
- # We assume only resize & flip are used. pred_masks is a scale-invariant
- # representation, so we handle flip specially
- for output, tfm in zip(outputs, tfms):
- if any(isinstance(t, HFlipTransform) for t in tfm.transforms):
- output.pred_masks = output.pred_masks.flip(dims=[3])
- all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0)
- avg_pred_masks = torch.mean(all_pred_masks, dim=0)
- return avg_pred_masks
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py
deleted file mode 100644
index 990d04abbb120e40fe07a21d024dfead471bc998..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/mask_ops.py
+++ /dev/null
@@ -1,275 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-from typing import Tuple
-import torch
-from PIL import Image
-from torch.nn import functional as F
-
-__all__ = ["paste_masks_in_image"]
-
-
-BYTES_PER_FLOAT = 4
-# TODO: This memory limit may be too much or too little. It would be better to
-# determine it based on available resources.
-GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit
-
-
-def _do_paste_mask(masks, boxes, img_h: int, img_w: int, skip_empty: bool = True):
- """
- Args:
- masks: N, 1, H, W
- boxes: N, 4
- img_h, img_w (int):
- skip_empty (bool): only paste masks within the region that
- tightly bound all boxes, and returns the results this region only.
- An important optimization for CPU.
-
- Returns:
- if skip_empty == False, a mask of shape (N, img_h, img_w)
- if skip_empty == True, a mask of shape (N, h', w'), and the slice
- object for the corresponding region.
- """
- # On GPU, paste all masks together (up to chunk size)
- # by using the entire image to sample the masks
- # Compared to pasting them one by one,
- # this has more operations but is faster on COCO-scale dataset.
- device = masks.device
-
- if skip_empty and not torch.jit.is_scripting():
- x0_int, y0_int = torch.clamp(boxes.min(dim=0).values.floor()[:2] - 1, min=0).to(
- dtype=torch.int32
- )
- x1_int = torch.clamp(boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32)
- y1_int = torch.clamp(boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32)
- else:
- x0_int, y0_int = 0, 0
- x1_int, y1_int = img_w, img_h
- x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1
-
- N = masks.shape[0]
-
- img_y = torch.arange(y0_int, y1_int, device=device, dtype=torch.float32) + 0.5
- img_x = torch.arange(x0_int, x1_int, device=device, dtype=torch.float32) + 0.5
- img_y = (img_y - y0) / (y1 - y0) * 2 - 1
- img_x = (img_x - x0) / (x1 - x0) * 2 - 1
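-    # The division above expresses pixel centers in each box's own frame and rescales
-    # them to the [-1, 1] range expected by F.grid_sample (box edges map to -1 and +1).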
- # img_x, img_y have shapes (N, w), (N, h)
-
- gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1))
- gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1))
- grid = torch.stack([gx, gy], dim=3)
-
- if not torch.jit.is_scripting():
- if not masks.dtype.is_floating_point:
- masks = masks.float()
- img_masks = F.grid_sample(masks, grid.to(masks.dtype), align_corners=False)
-
- if skip_empty and not torch.jit.is_scripting():
- return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int))
- else:
- return img_masks[:, 0], ()
-
-
-# Annotate boxes as Tensor (but not Boxes) in order to use scripting
-@torch.jit.script_if_tracing
-def paste_masks_in_image(
- masks: torch.Tensor, boxes: torch.Tensor, image_shape: Tuple[int, int], threshold: float = 0.5
-):
- """
- Paste a set of masks that are of a fixed resolution (e.g., 28 x 28) into an image.
- The location, height, and width for pasting each mask is determined by their
- corresponding bounding boxes in boxes.
-
- Note:
- This is a complicated but more accurate implementation. In actual deployment, it is
- often enough to use a faster but less accurate implementation.
- See :func:`paste_mask_in_image_old` in this file for an alternative implementation.
-
- Args:
- masks (tensor): Tensor of shape (Bimg, Hmask, Wmask), where Bimg is the number of
- detected object instances in the image and Hmask, Wmask are the mask width and mask
- height of the predicted mask (e.g., Hmask = Wmask = 28). Values are in [0, 1].
- boxes (Boxes or Tensor): A Boxes of length Bimg or Tensor of shape (Bimg, 4).
- boxes[i] and masks[i] correspond to the same object instance.
- image_shape (tuple): height, width
- threshold (float): A threshold in [0, 1] for converting the (soft) masks to
- binary masks.
-
- Returns:
- img_masks (Tensor): A tensor of shape (Bimg, Himage, Wimage), where Bimg is the
- number of detected object instances and Himage, Wimage are the image width
- and height. img_masks[i] is a binary mask for object instance i.
- """
-
- assert masks.shape[-1] == masks.shape[-2], "Only square mask predictions are supported"
- N = len(masks)
- if N == 0:
- return masks.new_empty((0,) + image_shape, dtype=torch.uint8)
- if not isinstance(boxes, torch.Tensor):
- boxes = boxes.tensor
- device = boxes.device
- assert len(boxes) == N, boxes.shape
-
- img_h, img_w = image_shape
-
- # The actual implementation split the input into chunks,
- # and paste them chunk by chunk.
- if device.type == "cpu" or torch.jit.is_scripting():
- # CPU is most efficient when they are pasted one by one with skip_empty=True
- # so that it performs minimal number of operations.
- num_chunks = N
- else:
- # GPU benefits from parallelism for larger chunks, but may have memory issue
- # int(img_h) because shape may be tensors in tracing
- num_chunks = int(np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
- assert (
- num_chunks <= N
- ), "Default GPU_MEM_LIMIT in mask_ops.py is too small; try increasing it"
- chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
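-    # Illustrative sizing (numbers assumed): N=100 masks on a 1333x800 image need about
-    # 100 * 1333 * 800 * 4 bytes ≈ 0.4 GB, so num_chunks stays at 1 under the 1 GB
-    # GPU_MEM_LIMIT; larger batches or images get split into more chunks.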
-
- img_masks = torch.zeros(
- N, img_h, img_w, device=device, dtype=torch.bool if threshold >= 0 else torch.uint8
- )
- for inds in chunks:
- masks_chunk, spatial_inds = _do_paste_mask(
- masks[inds, None, :, :], boxes[inds], img_h, img_w, skip_empty=device.type == "cpu"
- )
-
- if threshold >= 0:
- masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
- else:
- # for visualization and debugging
- masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
- if torch.jit.is_scripting(): # Scripting does not use the optimized codepath
- img_masks[inds] = masks_chunk
- else:
- img_masks[(inds,) + spatial_inds] = masks_chunk
- return img_masks
-
-
-# The below are the original paste function (from Detectron1) which has
-# larger quantization error.
-# It is faster on CPU, while the aligned one is faster on GPU thanks to grid_sample.
-
-
-def paste_mask_in_image_old(mask, box, img_h, img_w, threshold):
- """
- Paste a single mask in an image.
- This is a per-box implementation of :func:`paste_masks_in_image`.
- This function has larger quantization error due to incorrect pixel
- modeling and is not used any more.
-
- Args:
- mask (Tensor): A tensor of shape (Hmask, Wmask) storing the mask of a single
- object instance. Values are in [0, 1].
- box (Tensor): A tensor of shape (4, ) storing the x0, y0, x1, y1 box corners
- of the object instance.
- img_h, img_w (int): Image height and width.
- threshold (float): Mask binarization threshold in [0, 1].
-
- Returns:
- im_mask (Tensor):
- The resized and binarized object mask pasted into the original
- image plane (a tensor of shape (img_h, img_w)).
- """
- # Conversion from continuous box coordinates to discrete pixel coordinates
- # via truncation (cast to int32). This determines which pixels to paste the
- # mask onto.
- box = box.to(dtype=torch.int32) # Continuous to discrete coordinate conversion
- # An example (1D) box with continuous coordinates (x0=0.7, x1=4.3) will map to
- # a discrete coordinates (x0=0, x1=4). Note that box is mapped to 5 = x1 - x0 + 1
- # pixels (not x1 - x0 pixels).
- samples_w = box[2] - box[0] + 1 # Number of pixel samples, *not* geometric width
- samples_h = box[3] - box[1] + 1 # Number of pixel samples, *not* geometric height
-
-    # Resample the mask from its original grid to the new samples_w x samples_h grid
- mask = Image.fromarray(mask.cpu().numpy())
- mask = mask.resize((samples_w, samples_h), resample=Image.BILINEAR)
- mask = np.array(mask, copy=False)
-
- if threshold >= 0:
- mask = np.array(mask > threshold, dtype=np.uint8)
- mask = torch.from_numpy(mask)
- else:
- # for visualization and debugging, we also
- # allow it to return an unmodified mask
- mask = torch.from_numpy(mask * 255).to(torch.uint8)
-
- im_mask = torch.zeros((img_h, img_w), dtype=torch.uint8)
- x_0 = max(box[0], 0)
- x_1 = min(box[2] + 1, img_w)
- y_0 = max(box[1], 0)
- y_1 = min(box[3] + 1, img_h)
-
- im_mask[y_0:y_1, x_0:x_1] = mask[
- (y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0])
- ]
- return im_mask
-
-
-# Our pixel modeling requires extrapolation for any continuous
-# coordinate < 0.5 or > length - 0.5. When sampling pixels on the masks,
-# we would like this extrapolation to be an interpolation between boundary values and zero,
-# instead of using absolute zero or boundary values.
-# Therefore `paste_mask_in_image_old` is often used with zero padding around the masks like this:
-# masks, scale = pad_masks(masks[:, 0, :, :], 1)
-# boxes = scale_boxes(boxes.tensor, scale)
-
-
-def pad_masks(masks, padding):
- """
- Args:
- masks (tensor): A tensor of shape (B, M, M) representing B masks.
- padding (int): Number of cells to pad on all sides.
-
- Returns:
- The padded masks and the scale factor of the padding size / original size.
- """
- B = masks.shape[0]
- M = masks.shape[-1]
- pad2 = 2 * padding
- scale = float(M + pad2) / M
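-    # Illustrative example (values assumed): padding=1 on 28x28 masks gives 30x30
-    # padded masks and scale = 30 / 28 ≈ 1.071.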
- padded_masks = masks.new_zeros((B, M + pad2, M + pad2))
- padded_masks[:, padding:-padding, padding:-padding] = masks
- return padded_masks, scale
-
-
-def scale_boxes(boxes, scale):
- """
- Args:
- boxes (tensor): A tensor of shape (B, 4) representing B boxes with 4
- coords representing the corners x0, y0, x1, y1,
- scale (float): The box scaling factor.
-
- Returns:
- Scaled boxes.
- """
- w_half = (boxes[:, 2] - boxes[:, 0]) * 0.5
- h_half = (boxes[:, 3] - boxes[:, 1]) * 0.5
- x_c = (boxes[:, 2] + boxes[:, 0]) * 0.5
- y_c = (boxes[:, 3] + boxes[:, 1]) * 0.5
-
- w_half *= scale
- h_half *= scale
-
- scaled_boxes = torch.zeros_like(boxes)
- scaled_boxes[:, 0] = x_c - w_half
- scaled_boxes[:, 2] = x_c + w_half
- scaled_boxes[:, 1] = y_c - h_half
- scaled_boxes[:, 3] = y_c + h_half
- return scaled_boxes
-
-
-@torch.jit.script_if_tracing
-def _paste_masks_tensor_shape(
- masks: torch.Tensor,
- boxes: torch.Tensor,
- image_shape: Tuple[torch.Tensor, torch.Tensor],
- threshold: float = 0.5,
-):
- """
- A wrapper of paste_masks_in_image where image_shape is Tensor.
- During tracing, shapes might be tensors instead of ints. The Tensor->int
- conversion should be scripted rather than traced.
- """
- return paste_masks_in_image(masks, boxes, (int(image_shape[0]), int(image_shape[1])), threshold)
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py
deleted file mode 100644
index f3ee6057e3ec2731984ce8203c6eaf5348d08260..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/structures/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .boxes import Boxes, BoxMode, pairwise_iou, pairwise_ioa, pairwise_point_box_distance
-from .image_list import ImageList
-
-from .instances import Instances
-from .keypoints import Keypoints, heatmaps_to_keypoints
-from .masks import BitMasks, PolygonMasks, polygons_to_bitmask, ROIMasks
-from .rotated_boxes import RotatedBoxes
-from .rotated_boxes import pairwise_iou as pairwise_iou_rotated
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
-
-
-from detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/cccc-c/bingo/src/components/chat.tsx b/spaces/cccc-c/bingo/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/bingo/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-
- `;
- element.appendChild(external_link);
-
- const iframe = document.createElement('iframe');
- iframe.src = new URL(path, url).toString();
- iframe.height = height;
- iframe.allow = "autoplay; camera; microphone; clipboard-read; clipboard-write;"
- iframe.width = width;
- iframe.style.border = 0;
- element.appendChild(iframe);
- })""" + "({port}, {path}, {width}, {height}, {cache}, window.element)".format(
- port=json.dumps(self.server_port),
- path=json.dumps("/"),
- width=json.dumps(self.width),
- height=json.dumps(self.height),
- cache=json.dumps(False),
- )
-
- display(Javascript(code))
- else:
- display(
- HTML(
- f''
- )
- )
- except ImportError:
- pass
-
- if getattr(self, "analytics_enabled", False):
- data = {
- "launch_method": "browser" if inbrowser else "inline",
- "is_google_colab": self.is_colab,
- "is_sharing_on": self.share,
- "share_url": self.share_url,
- "enable_queue": self.enable_queue,
- "show_tips": self.show_tips,
- "server_name": server_name,
- "server_port": server_port,
- "is_space": self.space_id is not None,
- "mode": self.mode,
- }
- analytics.launched_analytics(self, data)
-
- utils.show_tip(self)
-
- # Block main thread if debug==True
- if debug or int(os.getenv("GRADIO_DEBUG", 0)) == 1:
- self.block_thread()
- # Block main thread if running in a script to stop script from exiting
- is_in_interactive_mode = bool(getattr(sys, "ps1", sys.flags.interactive))
-
- if not prevent_thread_lock and not is_in_interactive_mode:
- self.block_thread()
-
- return TupleNoPrint((self.server_app, self.local_url, self.share_url))
-
- def integrate(
- self,
- comet_ml=None,
- wandb: ModuleType | None = None,
- mlflow: ModuleType | None = None,
- ) -> None:
- """
- A catch-all method for integrating with other libraries. This method should be run after launch()
- Parameters:
- comet_ml: If a comet_ml Experiment object is provided, will integrate with the experiment and appear on Comet dashboard
- wandb: If the wandb module is provided, will integrate with it and appear on WandB dashboard
- mlflow: If the mlflow module is provided, will integrate with the experiment and appear on ML Flow dashboard
- """
- analytics_integration = ""
- if comet_ml is not None:
- analytics_integration = "CometML"
- comet_ml.log_other("Created from", "Gradio")
- if self.share_url is not None:
- comet_ml.log_text(f"gradio: {self.share_url}")
- comet_ml.end()
- elif self.local_url:
- comet_ml.log_text(f"gradio: {self.local_url}")
- comet_ml.end()
- else:
- raise ValueError("Please run `launch()` first.")
- if wandb is not None:
- analytics_integration = "WandB"
- if self.share_url is not None:
- wandb.log(
- {
- "Gradio panel": wandb.Html(
- ''
- )
- }
- )
- else:
- print(
- "The WandB integration requires you to "
- "`launch(share=True)` first."
- )
- if mlflow is not None:
- analytics_integration = "MLFlow"
- if self.share_url is not None:
- mlflow.log_param("Gradio Interface Share Link", self.share_url)
- else:
- mlflow.log_param("Gradio Interface Local Link", self.local_url)
- if self.analytics_enabled and analytics_integration:
- data = {"integration": analytics_integration}
- analytics.integration_analytics(data)
-
- def close(self, verbose: bool = True) -> None:
- """
- Closes the Interface that was launched and frees the port.
- """
- try:
- if self.enable_queue:
- self._queue.close()
- if self.server:
- self.server.close()
- self.is_running = False
- # So that the startup events (starting the queue)
- # happen the next time the app is launched
- self.app.startup_events_triggered = False
- if verbose:
- print(f"Closing server running on port: {self.server_port}")
- except (AttributeError, OSError): # can't close if not running
- pass
-
- def block_thread(
- self,
- ) -> None:
- """Block main thread until interrupted by user."""
- try:
- while True:
- time.sleep(0.1)
- except (KeyboardInterrupt, OSError):
- print("Keyboard interruption in main thread... closing server.")
- if self.server:
- self.server.close()
- for tunnel in CURRENT_TUNNELS:
- tunnel.kill()
-
- def attach_load_events(self):
- """Add a load event for every component whose initial value should be randomized."""
- if Context.root_block:
- for component in Context.root_block.blocks.values():
- if (
- isinstance(component, components.IOComponent)
- and component.load_event_to_attach
- ):
- load_fn, every = component.load_event_to_attach
- # Use set_event_trigger to avoid ambiguity between load class/instance method
- dep = self.set_event_trigger(
- "load",
- load_fn,
- None,
- component,
- no_target=True,
- # If every is None, for sure skip the queue
- # else, let the enable_queue parameter take precedence
- # this will raise a nice error message is every is used
- # without queue
- queue=False if every is None else None,
- every=every,
- )[0]
- component.load_event = dep
-
- def startup_events(self):
- """Events that should be run when the app containing this block starts up."""
-
- if self.enable_queue:
- utils.run_coro_in_background(self._queue.start, self.ssl_verify)
- # So that processing can resume in case the queue was stopped
- self._queue.stopped = False
- utils.run_coro_in_background(self.create_limiter)
-
- def queue_enabled_for_fn(self, fn_index: int):
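-        # Per-event override (illustrative): an event registered with queue=True/False
-        # wins; queue=None (the default) falls back to the Blocks-level enable_queue flag.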
- if self.dependencies[fn_index]["queue"] is None:
- return self.enable_queue
- return self.dependencies[fn_index]["queue"]
diff --git a/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md b/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md
deleted file mode 100644
index 2bf6159fe6615a4463d909ff9a9320702069caed..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/The Secrets of Di Pwedeng Hindi Pwede How Robin Padilla and Vina Morales Created Chemistry on Screen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
European War 6: 1914 Mod Apk Unlock All - A Guide for Strategy Game Fans
-
If you are a fan of strategy games that simulate historical wars, you might have heard of European War 6: 1914, a popular game developed by Easytech, a company that specializes in historical strategy games. In this game, you can choose from over 150 countries and regions, and lead them to victory or defeat in various wars and conflicts that took place between 1798 and 1950. You can also customize your own generals, troops, weapons, and technologies, and challenge other players online or offline.
-
However, some players may find the game too difficult, too expensive, or too boring after a while. That's why some of them resort to using a mod apk, which is a modified version of the original game application that can unlock all the features, resources, and content that are otherwise restricted or limited in the game. A mod apk can give you unlimited money, medals, generals, troops, weapons, technologies, and more. It can also remove ads, bugs, and errors that may affect your gameplay.
But is using a mod apk for European War 6: 1914 a good idea? What are the benefits and risks of doing so? How can you download and install a mod apk for European War 6: 1914? In this article, we will answer these questions and more. We will also provide you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively. Read on to find out more!
-
What is European War 6: 1914 and what are its features?
-
European War 6: 1914 is a strategy game that simulates the historical wars of the 19th and 20th centuries. It is the sixth installment of the European War series, which started in 2010 with European War: Napoleon Wars. The game was released in 2020 for Android and iOS devices.
-
The game has four main modes: Campaign, Conquest, Challenge, and Multiplayer. In Campaign mode, you can follow the historical events and scenarios of different wars and regions, such as the Napoleonic Wars, the American Civil War, the World War I, the World War II, etc. You can choose from different countries and factions, and complete various missions and objectives to progress through the story. In Conquest mode, you can create your own scenarios and maps, and conquer the world with your own strategy and tactics. You can also adjust the difficulty level, the number of countries and regions, the resources and technologies available, etc. In Challenge mode, you can test your skills and knowledge in different quizzes and puzzles related to history and geography. You can also earn medals and rewards for completing them. In Multiplayer mode, you can play with or against other players online or offline via Wi-Fi or Bluetooth. You can also chat with them, send them gifts, or join alliances.
-
The game has over 150 countries and regions to choose from, each with their own unique generals, troops, weapons, and technologies. You can also customize your own generals by changing their names, portraits, skills, ranks, etc. You can also upgrade your troops by training them, equipping them with different weapons and armors, etc. You can also research new technologies by spending money and medals on them. The game has over 200 historical battles to fight in, each with their own terrain, weather, objectives, etc. You can also use different strategies and tactics to win them, such as diplomacy, espionage, sabotage, etc.
-
The game has high-quality graphics that depict the historical scenes and characters in detail. The game also has realistic sound effects that enhance the atmosphere of war. The game has a user-friendly interface that allows you to control your units easily and efficiently. The game also has a tutorial mode that teaches you the basics of the game.
-
The game is similar to other historical strategy games such as Age of Civilizations II , Age of Empires, or Civilization. However, it has its own unique features and challenges that make it stand out from the crowd. If you are looking for a strategy game that combines historical accuracy, complexity, and fun, you might want to give European War 6: 1914 a try.
-
What is a mod apk and why do some players use it?
-
A mod apk is a modified version of an original game application that can alter or enhance some aspects of the game. A mod apk can be created by the game developers themselves, or by third-party programmers or hackers who have access to the game's source code. A mod apk can be downloaded from various websites or platforms, such as Google Play, App Store, or APKPure.
-
-european war 6 1914 mod apk unlimited money and medals
-european war 6 1914 hack mod apk free download
-european war 6 1914 mod apk latest version
-european war 6 1914 mod apk all generals unlocked
-european war 6 1914 mod apk android 1
-european war 6 1914 mod apk revdl
-european war 6 1914 mod apk no root
-european war 6 1914 mod apk offline
-european war 6 1914 mod apk obb
-european war 6 1914 mod apk rexdl
-european war 6 1914 mod apk premium
-european war 6 1914 mod apk full version
-european war 6 1914 mod apk mega
-european war 6 1914 mod apk data
-european war 6 1914 mod apk vip
-european war 6 1914 mod apk pro
-european war 6 1914 mod apk cracked
-european war 6 1914 mod apk cheat
-european war 6 1914 mod apk hack download
-european war 6 1914 mod apk update
-european war 6 1914 mod apk new version
-european war 6 1914 mod apk original
-european war 6 1914 mod apk for pc
-european war 6 1914 mod apk for ios
-european war 6 1914 mod apk for windows
-european war 6 1914 mod apk for mac
-european war 6 1914 mod apk for laptop
-european war 6 1914 mod apk for tablet
-european war 6 1914 mod apk for chromebook
-european war 6 1914 mod apk for android tv
-european war 6: world at war - ww1 strategy game mod apk unlock all
-easytech's world conquest games: ww1 ww2 civil war - all unlocked with mods and cheats
-how to install and play european war: world at war - ww1 strategy game with mods and hacks on android devices
-best tips and tricks for playing and winning in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to get free money and medals in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to unlock all generals and scenarios in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to upgrade and customize your troops and weapons in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to use diplomacy and alliances in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to conquer the world and win the great wars in european war: world at war - ww1 strategy game with mods and hacks on android devices
-how to play multiplayer mode in european war: world at war - ww1 strategy game with mods and hacks on android devices
-
Some players use a mod apk for various reasons, such as:
-
-
To unlock all the features, resources, and content that are otherwise restricted or limited in the game
-
To bypass the in-app purchases or ads that may require real money or interrupt the gameplay
-
To cheat or hack the game to gain an unfair advantage over other players or the game itself
-
To customize or personalize the game according to their preferences and tastes
-
To explore new possibilities or scenarios that are not available in the original game
-
To fix some bugs or errors that may affect the gameplay
-
To have more fun and enjoyment with the game
-
-
However, using a mod apk also comes with some legal and ethical issues, such as:
-
-
Violating the terms and conditions of the game developers or publishers
-
Infringing the intellectual property rights of the game developers or publishers
-
Exposing the device or data to viruses, malware, or scams that may harm them
-
Disrupting the balance and fairness of the game for other players
-
Ruining the original design and intention of the game creators
-
Losing the official support and updates from the game developers or publishers
-
Risking being banned or suspended from the game or its online services
-
-
Therefore, using a mod apk for European War 6: 1914 is a personal choice that depends on your own judgment and responsibility. You should weigh the pros and cons carefully before deciding to use one.
-
What are the benefits of using a mod apk for European War 6: 1914?
If you decide to use a mod apk for European War 6: 1914, you can enjoy some benefits that the original game may not offer. Here are some of them:
-
-
You can unlock all the features, resources, and content that are otherwise restricted or limited in the game. For example, you can have unlimited money, medals, generals, troops, weapons, technologies, and more. You can also access all the modes, campaigns, conquests, challenges, and multiplayer options. You can also remove the ads that may interrupt your gameplay.
-
You can customize or personalize the game according to your preferences and tastes. For example, you can change the names, portraits, skills, ranks, etc. of your generals. You can also modify the graphics, sound, and user interface of the game. You can also create your own scenarios and maps in Conquest mode.
-
You can explore new possibilities or scenarios that are not available in the original game. For example, you can play as different countries or factions that are not normally playable in the game. You can also change the historical events and outcomes of the wars and conflicts. You can also use different strategies and tactics that may not work in the original game.
-
You can enhance your gameplay experience and enjoyment with the game. For example, you can have more fun and challenge with the game by adjusting the difficulty level, the number of countries and regions, the resources and technologies available, etc. You can also have more satisfaction and achievement with the game by completing all the missions and objectives, earning all the medals and rewards, conquering the world with your strategy and tactics, etc.
-
-
To illustrate these benefits, here is a table that compares the features of the original game and the mod apk:
-
| Feature | Original Game | Mod Apk |
| --- | --- | --- |
| Money | Limited | Unlimited |
| Medals | Limited | Unlimited |
| Generals | Limited | Unlimited |
| Troops | Limited | Unlimited |
| Weapons | Limited | Unlimited |
| Technologies | Limited | Unlimited |
| Modes | Limited | All unlocked |
| Campaigns | Limited | All unlocked |
| Conquests | Limited | All unlocked |
| Challenges | Limited | All unlocked |
| Multiplayer | Limited | All unlocked |
| Ads | Present | Removed |
| Bugs and errors | Present | Fixed |
| Customization | Limited | Enhanced |
| New possibilities and scenarios | Limited | Added |
| Gameplay experience and enjoyment | Limited | Improved |
-
-
-
As you can see, using a mod apk for European War 6: 1914 can provide you with many benefits that can make your game more enjoyable and rewarding. However, you should also be aware of the risks and drawbacks of using a mod apk for European War 6: 1914, which we will discuss in the next section.
-
What are the risks and drawbacks of using a mod apk for European War 6: 1914?
-
Using a mod apk for European War 6: 1914 is not without its risks and drawbacks. Here are some of them:
-
-
You can violate the terms and conditions of the game developers or publishers, which can result in legal actions or penalties against you. You can also infringe the intellectual property rights of the game developers or publishers, which can result in lawsuits or damages against you.
-
You can expose your device or data to viruses, malware, or scams that can harm them. Some mod apks may contain malicious code or software that can infect your device or data, or steal your personal information or money. You can also download mod apks from unreliable sources or platforms that may contain viruses, malware, or scams.
-
You can disrupt the balance and fairness of the game for other players. Using a mod apk can give you an unfair advantage over other players who play the game legitimately, which can ruin their gameplay experience and satisfaction. You can also encounter other players who use mod apks to cheat or hack the game, which can ruin your gameplay experience and satisfaction.
-
You can ruin the original design and intention of the game creators. Using a mod apk can alter or enhance some aspects of the game that may not be intended by the game creators, which can affect their artistic vision and expression. You can also miss out on some features, resources, or content that the game creators have designed for the original game.
-
You can lose the official support and updates from the game developers or publishers. Using a mod apk can make your game incompatible with the official updates or patches that the game developers or publishers may release to improve or fix the game. You can also lose access to the official online services or features that the game developers or publishers may provide for the original game.
-
You can risk being banned or suspended from the game or its online services. Using a mod apk can make your game detectable by the anti-cheat or anti-hack systems that the game developers or publishers may use to protect their game. You can also be reported by other players who notice your suspicious behavior or activities in the game.
-
-
To illustrate these risks and drawbacks, here is a table that compares them with the original game and the mod apk:
-
| Risk/Drawback | Original Game | Mod Apk |
| --- | --- | --- |
| Legal and ethical issues | None | Present |
| Viruses, malware, or scams | None | Possible |
| Balance and fairness | Present | Disrupted |
| Original design and intention | Present | Ruined |
| Official support and updates | Present | Lost |
| Ban or suspension | None | Possible |
-
-
-
As you can see, using a mod apk for European War 6: 1914 can also expose you to some risks and drawbacks that can make your game less enjoyable and rewarding. Therefore, you should be careful and cautious if you decide to use one.
-
How to download and install a mod apk for European War 6: 1914?
-
If you still want to use a mod apk for European War 6: 1914, you need to know how to download and install it on your device. Here are the steps that you need to follow:
-
-
Find a reliable source where you can download a mod apk for European War 6: 1914. You can search online for some websites or platforms that offer mod apks for various games, or you can ask other players who have used a mod apk for European War 6: 1914 before. However, you should be careful and wary of some sources that may contain viruses, malware, or scams that can harm your device or data.
-
Download the mod apk file from the source that you have chosen. You may need to allow your device to download files from unknown sources in your settings. You may also need to disable your antivirus or firewall software temporarily to avoid any interference.
-
Install the mod apk file on your device. You may need to uninstall the original game application first if you have it on your device. You may also need to enable the installation of apps from unknown sources in your settings. You may also need to grant some permissions or access to the mod apk file during the installation process.
-
Launch the mod apk file on your device. You may need to verify or activate the mod apk file by following some instructions or entering some codes. You may also need to create an account or log in with an existing one to access the mod apk file.
-
Enjoy the game with the mod apk file. You can now play European War 6: 1914 with all the features, resources, and content that are unlocked by the mod apk file. However, you should also be aware of the risks and drawbacks of using a mod apk file, as we discussed in the previous section.
-
-
To help you find a reliable source for downloading a mod apk for European War 6: 1914, here is a link that you can use as a reference:
This is a website that offers mod apks for various games, including European War 6: 1914. It claims that its mod apks are safe, tested, and verified by its users and editors. However, you should still be careful and cautious when downloading and installing any mod apk from any source, as there is no guarantee that they are free from viruses, malware, or scams.
-
Conclusion
-
In this article, we have discussed what European War 6: 1914 is and what are its features, what a mod apk is and why some players use it, what are the benefits and risks of using a mod apk for European War 6: 1914, and how to download and install a mod apk for European War 6: 1914. We have also provided you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively.
-
We hope that this article has been helpful and informative for you. If you are a fan of strategy games that simulate historical wars, you might want to give European War 6: 1914 a try. However, if you decide to use a mod apk for European War 6: 1914, you should weigh the pros and cons carefully before doing so. You should also be responsible and respectful when playing the game with or without a mod apk.
-
We would love to hear your opinions, experiences, and feedback on European War 6: 1914 and its mod apk. Please feel free to share them with us in the comments section below. Thank you for reading and happy gaming!
-
FAQs
-
Here are some frequently asked questions about European War 6: 1914 and its mod apk, along with their answers:
-
Q: Is European War 6: 1914 free to play?
-
A: Yes, European War 6: 1914 is free to download and play on Android and iOS devices. However, the game may contain some in-app purchases or ads that may require real money or interrupt the gameplay.
-
Q: Is using a mod apk for European War 6: 1914 legal?
-
A: No, using a mod apk for European War 6: 1914 is not legal, as it violates the terms and conditions of the game developers or publishers, and infringes their intellectual property rights. Using a mod apk for European War 6: 1914 may result in legal actions or penalties against you.
-
Q: Is using a mod apk for European War 6: 1914 safe?
-
A: No, using a mod apk for European War 6: 1914 is not safe, as it exposes your device or data to viruses, malware, or scams that can harm them. Using a mod apk for European War 6: 1914 may also make your game incompatible with the official updates or patches, or lose access to the official online services or features.
-
Q: Is using a mod apk for European War 6: 1914 fair?
-
A: No, using a mod apk for European War 6: 1914 is not fair, as it disrupts the balance and fairness of the game for other players who play the game legitimately. Using a mod apk for European War 6: 1914 may also encounter other players who use mod apks to cheat or hack the game.
-
Q: Is using a mod apk for European War 6: 1914 fun?
-
A: It depends on your personal preference and judgment. Some players may find using a mod apk for European War 6: 1914 fun, as it unlocks all the features, resources, and content that are otherwise restricted or limited in the game. However, some players may find using a mod apk for European War 6: 1914 boring, as it removes the challenge and achievement that come with playing the game legitimately.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md b/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md
deleted file mode 100644
index 70a6612b2650e10c654fa107e6b46abae7e9ae91..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Free Triple Play Video Poker No Registration No Limits No Hassle.md
+++ /dev/null
@@ -1,303 +0,0 @@
-
-
Triple Play Video Poker: How to Play and Win Online
-
Video poker is one of the most popular and exciting casino games that you can play online. It combines the skill and strategy of poker with the simplicity and thrill of slot machines. And if you want to take your video poker experience to the next level, you should try triple play video poker.
-
Triple play video poker is a variation of video poker that allows you to play three hands at once, giving you three times the action and three times the chances to win. Unlike slot machines, video poker lets you use your knowledge and skills to beat the house edge and get the best payouts possible.
In this article, we will show you how to play triple play video poker online, how to find the best games and sites, and how to improve your strategy and win more. Whether you are a beginner or a seasoned player, you will find something useful and interesting in this guide. So, let's get started!
-
What is Triple Play Video Poker?
-
Triple play video poker is a type of video poker that lets you play three hands at the same time, with the same initial cards. You can choose which cards to hold or discard for each hand separately, and then draw new cards for each hand. The final outcome of each hand is determined by the poker value of your five cards, according to the paytable of the game.
-
There are many different variants of triple play video poker, such as Jacks or Better, Deuces Wild, Bonus Poker, Double Bonus Poker, Double Double Bonus Poker, Joker Poker, and more. Each variant has its own rules, payouts, and strategies. You can find them all at the best online casinos that offer video poker games.
-
The Benefits of Playing Triple Play Video Poker
-
Playing triple play video poker online has many advantages over playing single hand video poker or other casino games. Here are some of them:
-
-
You get more action and excitement. Playing three hands at once means you have more opportunities to make winning combinations and hit big jackpots.
-
You get more variety and challenge. Playing different variants of triple play video poker means you have to adapt your strategy and skills to each game's rules and paytable.
-
You get more control and flexibility. Playing online means you can choose your bet size, game speed, sound effects, and other settings according to your preferences.
-
You get more convenience and comfort. Playing online means you can access your favorite games anytime and anywhere, on your computer or mobile device.
-
-
The Rules of Triple Play Video Poker
-
The rules of triple play video poker are similar to those of single hand video poker, with some minor differences. Here are the basic steps to follow when playing triple play video poker online:
-
-
Select your bet size. You can choose how much each credit is worth, and how many credits you want to bet on each hand. Usually, the maximum bet is five credits per hand, or 15 credits in total.
-
Press the Deal button. You will receive five cards face up on the main hand, and two other sets of five cards face down on the other two hands.
-
Select which cards to hold or discard. You can use the Hold buttons under each card to select which cards you want to keep for each hand separately. The other cards will be exchanged for new ones in the next step.
-
Press the Draw button. All the cards that you did not hold will be replaced with new ones from the same deck. You will now see the final outcome of each hand.
-
Collect your winnings. If any of your hands have a winning combination, according to the paytable of the game, you will receive the corresponding payout. The payout is multiplied by the number of credits you bet on each hand.
-
-
Here is an example of how a round of triple play video poker looks like:
-
* triple play video poker app
-* triple play video poker online
-* triple play video poker for pc
-* triple play video poker casino
-* triple play video poker games
-* triple play video poker strategy
-* triple play video poker machine
-* triple play video poker trainer
-* triple play video poker android
-* triple play video poker iphone
-* triple play video poker apk
-* triple play video poker bonus
-* triple play video poker jackpot
-* triple play video poker tips
-* triple play video poker cheats
-* triple play video poker simulator
-* triple play video poker offline
-* triple play video poker review
-* triple play video poker tutorial
-* triple play video poker rules
-* triple play video poker odds
-* triple play video poker variations
-* triple play video poker pay tables
-* triple play video poker no ads
-* triple play video poker no registration
-* triple play video poker no internet
-* triple play video poker no deposit
-* triple play video poker no download
-* triple hand video poker free download
-* three hand video poker free download
-* 3 hand video poker free download
-* multi hand video poker free download
-* five hand video poker free download
-* ten hand video poker free download
-* 50 hand video poker free download
-* 100 hand video poker free download
-* ultimate x video poker free download
-* spin poker video slots free download
-* game king video poker free download
-* double bonus video poker free download
-* double double bonus video poker free download
-* deuces wild video poker free download
-* joker poker video poker free download
-* jacks or better video poker free download
-
| Main Hand | Second Hand | Third Hand | Payout |
| --- | --- | --- | --- |
| 10♠ J♠ Q♠ K♠ A♠ | 10♦ J♦ Q♦ K♦ A♦ | 10♥ J♥ Q♥ K♥ A♥ | 800 credits x 3 = 2400 credits |
| 2♣ 3♣ 4♣ 5♣ 6♣ | 2♥ 3♥ 4♥ 5♥ 6♥ | 2♦ 3♦ 4♦ 5♦ 6♦ | 50 credits x 3 = 150 credits |
| A♠ A♥ A♦ A♣ K♠ | A♠ A♥ A♦ A♣ Q♠ | A♠ A♥ A♦ A♣ J♠ | 25 credits x 3 = 75 credits |

In the first row you have a Royal Flush on all three hands, which pays 800 credits for each credit bet; in the second, a Straight Flush on all three hands, which pays 50 credits; and in the third, Four of a Kind on all three hands, which pays 25 credits.
- ... and so on.
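To make the deal/hold/draw sequence above concrete, here is a minimal Python sketch of the mechanics of one round, under the common triple-play rule that the held cards are copied to all three hands and each hand then draws its replacements from its own remainder of the deck. The card strings, function names, and hold positions are illustrative assumptions; hand evaluation and payouts are left out.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "♠♥♦♣"

def fresh_deck():
    return [r + s for r in RANKS for s in SUITS]

def play_round(held_positions):
    """One round of triple play: deal five cards, copy the held cards to all
    three hands, and let each hand draw its replacements from its own
    47-card remainder (a full deck minus the five starting cards)."""
    deck = fresh_deck()
    random.shuffle(deck)
    start = deck[:5]                                   # the face-up starting hand
    held = [start[i] for i in range(5) if i in held_positions]
    hands = []
    for _ in range(3):
        rest = [c for c in fresh_deck() if c not in start]
        random.shuffle(rest)
        hands.append(held + rest[:5 - len(held)])
    return start, hands

start, hands = play_round(held_positions={0, 1})       # hold the first two cards
print("Dealt:", start)
for i, hand in enumerate(hands, 1):
    print(f"Hand {i}:", hand)
```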
How to Choose Your Cards
-
One of the most important skills in video poker is knowing which cards to hold and which ones to discard. The best way to do this is to follow a strategy chart that tells you the optimal move for every possible hand. A strategy chart is based on mathematical calculations that take into account the probabilities and payouts of each game variant.
-
However, since you are playing three hands at once, you have to consider how your decision affects each hand separately. For example, if you have a pair of jacks on the main hand, you might want to hold them and hope for a three of a kind or better. But if the other two hands have low cards, you might want to discard them and try for a flush or a straight.
-
Therefore, you have to balance the risk and reward of each hand, and choose the cards that give you the best overall expected value. This can be tricky, especially if you are new to video poker. That's why we recommend practicing with free games before playing with real money.
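If you want to see what "best overall expected value" means in practice, here is a rough Python sketch that enumerates every possible draw for a given hold and averages the payouts from the 9/6 Jacks or Better paytable quoted later in this article. The card format and function names are illustrative assumptions, and the sketch scores a single hand only; in triple play you would weigh the same numbers across three hands.

```python
from collections import Counter
from itertools import combinations

RANKS = "23456789TJQKA"
SUITS = "♠♥♦♣"

# 9/6 Jacks or Better payouts per credit bet, as listed in the odds table below
# (800 for the royal assumes the five-credit bonus rate).
PAYTABLE = {"royal_flush": 800, "straight_flush": 50, "four_kind": 25,
            "full_house": 9, "flush": 6, "straight": 4,
            "three_kind": 3, "two_pair": 2, "jacks_or_better": 1}

def evaluate(hand):
    """Return the paytable key for a 5-card hand, or None if it loses."""
    ranks = sorted(RANKS.index(c[0]) for c in hand)
    counts = sorted(Counter(ranks).values(), reverse=True)
    is_flush = len({c[1] for c in hand}) == 1
    is_straight = (len(set(ranks)) == 5 and
                   (ranks[4] - ranks[0] == 4 or ranks == [0, 1, 2, 3, 12]))  # A-2-3-4-5
    if is_flush and is_straight:
        return "royal_flush" if ranks[0] == 8 else "straight_flush"
    if counts[0] == 4: return "four_kind"
    if counts[:2] == [3, 2]: return "full_house"
    if is_flush: return "flush"
    if is_straight: return "straight"
    if counts[0] == 3: return "three_kind"
    if counts[:2] == [2, 2]: return "two_pair"
    pair = [r for r, n in Counter(ranks).items() if n == 2]
    return "jacks_or_better" if pair and pair[0] >= RANKS.index("J") else None

def hold_ev(hand, hold_idx):
    """Average payout (per credit bet) over every possible draw for this hold."""
    held = [hand[i] for i in hold_idx]
    rest = [r + s for r in RANKS for s in SUITS if r + s not in hand]
    total = n = 0
    for draw in combinations(rest, 5 - len(held)):
        total += PAYTABLE.get(evaluate(held + list(draw)), 0)
        n += 1
    return total / n

hand = ["J♠", "J♦", "9♥", "8♥", "7♥"]
print("Hold the two jacks:   ", round(hold_ev(hand, [0, 1]), 3))
print("Hold 9-8-7 of hearts: ", round(hold_ev(hand, [2, 3, 4]), 3))
```

A published strategy chart is essentially a pre-computed version of this comparison for every common pattern, which is why following one gets you so close to the game's maximum return.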
-
How to Read the Paytable
-
The paytable is the table that shows you how much each winning combination pays, depending on how many credits you bet. You can find it on the top or the side of the screen, depending on the game. The paytable also tells you the game variant, the minimum qualifying hand, and the jackpot amount.
-
It is very important to read and understand the paytable before playing any video poker game, because it affects your strategy and your chances of winning. Different variants have different paytables, and some paytables are more generous than others. For example, some games pay 9 credits for a full house and 6 credits for a flush (9/6), while others pay 8 credits for a full house and 5 credits for a flush (8/5). The difference might seem small, but it can make a big difference in your long-term results.
-
As a general rule, you should look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind. These are the hands that you will get more often, and they will help you sustain your bankroll. You should also look for games that have a bonus payout for certain hands, such as four aces or four deuces. These are the hands that can give you a big boost in your winnings.
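As a quick illustration of why the 9/6 versus 8/5 difference matters, the snippet below compares what the two paytables return on the two hands where they differ, at a five-credit bet; the figures come from the example in the paragraph above.

```python
# Full house and flush are where a 9/6 and an 8/5 paytable differ.
full_pay  = {"full house": 9, "flush": 6}   # "9/6" game
short_pay = {"full house": 8, "flush": 5}   # "8/5" game

credits_bet = 5
for hand in ("full house", "flush"):
    win_96 = full_pay[hand] * credits_bet
    win_85 = short_pay[hand] * credits_bet
    print(f"{hand}: 9/6 pays {win_96} credits, 8/5 pays {win_85} "
          f"({win_96 - win_85} credits less per win)")
```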
-
How to Bet and Win
-
The final step in playing triple play video poker online is to place your bet and collect your winnings. You can choose how much each credit is worth, from $0.01 to $5.00, depending on the game and the site. You can also choose how many credits you want to bet on each hand, from one to five. The more credits you bet, the higher the payouts.
-
However, there is one exception: the royal flush. The royal flush is the highest-ranking hand in video poker, and it consists of a 10, jack, queen, king, and ace of the same suit. The payout for a royal flush is usually 250 credits for each credit bet, except when you bet five credits. In that case, the payout jumps to 800 credits per credit bet, or 4,000 credits in total.
-
This means that betting five credits on each hand gives you an extra incentive to hit a royal flush, and it also increases your overall return percentage. Therefore, we advise you to always bet five credits on each hand when playing triple play video poker online. Of course, this also means that you have to adjust your bet size according to your budget and bankroll management.
-
Once you have placed your bet and drawn your cards, you will see if any of your hands have a winning combination. If so, you will receive the corresponding payout according to the paytable. You can then choose to collect your winnings or continue playing with them.
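Here is a small sketch of the royal flush rule described above, assuming the usual 250-per-credit payout below the maximum bet and 800 per credit at five credits:

```python
def royal_flush_payout(credits_bet):
    """Royal flush payout under the rule described above: 250 per credit
    below the maximum bet, 800 per credit at the five-credit maximum."""
    per_credit = 800 if credits_bet == 5 else 250
    return credits_bet * per_credit

for credits in range(1, 6):
    print(f"{credits} credit(s) -> {royal_flush_payout(credits)} credits")
# Four credits return 1,000 credits, but five return 4,000 - hence the max-bet advice.
```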
How to Find the Best Triple Play Video Poker Games Online
-
Now that you know how to play triple play video poker online, you might be wondering where to find the best games and sites. There are hundreds of online casinos that offer video poker games, but not all of them are trustworthy, fair, and reliable. You have to be careful and do some research before choosing where to play.
-
Fortunately, we have done the hard work for you and compiled a list of the best online casinos that offer triple play video poker games. We have tested and reviewed each site based on several criteria, such as:
-
-
The quality and variety of the video poker games. We look for sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
-
The security and safety of the site. We look for sites that use encryption, firewalls, and other measures to protect your personal and financial information.
-
The fairness and randomness of the games. We look for sites that use certified random number generators (RNGs) to ensure that the outcomes of the games are fair and unpredictable.
-
The bonuses and promotions of the site. We look for sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
-
The customer support and service of the site. We look for sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.
-
-
Based on these criteria, here are our top picks for the best online casinos that offer triple play video poker games:
-
The Top 10 Video Poker Games with the Best Odds
-
One of the main factors that affect your chances of winning at video poker is the game variant you choose. Different variants have different paytables, rules, and strategies, which affect the return percentage of the game. The return percentage is the amount of money that the game pays back to the players in the long run, on average.
-
The higher the return percentage, the better the odds for the player. For example, a game with a 99% return percentage means that for every $100 you bet, you can expect to get back $99 in winnings over time. Of course, this does not mean that you will win every time or that you will never lose money. It just means that you have a better chance of winning in the long run.
-
Therefore, it is wise to choose video poker games with high return percentages when playing online. Here are the top 10 video poker games with the best odds, according to their paytables:
-
| Game Variant | Paytable | Return Percentage |
| --- | --- | --- |
| Jacks or Better (9/6) | Royal Flush: 800, Straight Flush: 50, Four of a Kind: 25, Full House: 9, Flush: 6, Straight: 4, Three of a Kind: 3, Two Pair: 2, Jacks or Better: 1 | 99.54% |
| Bonus Poker (8/5) | Royal Flush: 800, Straight Flush: 50, Four Aces: 80, Four 2s-4s: 40, Four 5s-Ks: 25, Full House: 8, Flush: 5, Straight: 4, Three of a Kind: 3, Two Pair: 2, Jacks or Better: 1 | 99.17% |
| Bonus Poker Deluxe (8/6) | Royal Flush: 800, Straight Flush: 50, Four of a Kind: 80, Full House: 8, Flush: 6, Straight: 4, Three of a Kind: 3, Two Pair: 1, Jacks or Better: 1 | 99.64% |
| ... | | |
| Double Bonus Poker (10/7) | Royal Flush: 800, Straight Flush: 50, Four Aces: 160, Four 2s-4s: 80, Four 5s-Ks: 50, Full House: 10, Flush: 7, Straight: 5, Three of a Kind: 3, Two Pair: 1, Jacks or Better: 1 | 100.17% |
| Double Double Bonus Poker (9/6) | Royal Flush: 800, Straight Flush: 50, Four Aces + 2-4: 400, Four Aces + 5-K: 160, Four 2s-4s + A-4: 160, Four 2s-4s + 5-K: 80, Four 5s-Ks: 50, Full House: 9, Flush: 6, Straight: 4, Three of a Kind: 3, Two Pair: 1, Jacks or Better: 1 | 98.98% |
| Joker Poker (Kings or Better) | Royal Flush: 800, Five of a Kind: 200, Royal Flush with Joker: 100, Straight Flush: 50, Four of a Kind: 20, Full House: 7, Flush: 5, Straight: 3, Three of a Kind: 2, Two Pair: 1, Kings or Better: 1 | 100.64% |
- ... and so on.
The Best Video Poker Apps and Sites for Mobile and Desktop
-
If you want to play triple play video poker online, you need to find a reliable and reputable site that offers the games you want. You also need to make sure that the site is compatible with your device, whether it is a computer, a smartphone, or a tablet.
-
The good news is that there are many video poker apps and sites that you can choose from, depending on your preferences and needs. Some of them are dedicated to video poker only, while others offer a variety of casino games, including slots, blackjack, roulette, and more. Some of them are web-based, while others require you to download and install software or an app.
-
To help you find the best video poker apps and sites for mobile and desktop, we have reviewed and ranked the top options based on several factors, such as:
-
-
The quality and variety of the video poker games. We look for apps and sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
-
The compatibility and usability of the app or site. We look for apps and sites that work smoothly and seamlessly on different devices, platforms, browsers, and screen sizes.
-
The security and safety of the app or site. We look for apps and sites that use encryption, firewalls, and other measures to protect your personal and financial information.
-
The bonuses and promotions of the app or site. We look for apps and sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
-
The customer support and service of the app or site. We look for apps and sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.
-
-
Based on these factors, here are our top picks for the best video poker apps and sites for mobile and desktop:
A free video poker app that offers over 70 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy daily bonuses, tournaments, leaderboards, achievements, and more.
A free video poker site that offers over 40 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy daily bonuses, tournaments, leaderboards, achievements, and more.
A paid video poker app that offers over 30 different variants of video poker games, including triple play video poker. You can play with virtual coins or real money, and enjoy features such as strategy charts, statistics, analysis, and more.
A real money online casino that offers over 20 different variants of video poker games, including triple play video poker. You can play with US dollars or bitcoins, and enjoy a welcome bonus of up to $3,000, as well as other promotions and rewards.
-
★★★★☆
-
- ... and so on.
The Best Video Poker Bonuses and Promotions
-
Another factor that can enhance your triple play video poker online experience is the bonuses and promotions that you can get from the online casinos. Bonuses and promotions are incentives that the casinos offer to attract and retain players, and they can give you extra money, free spins, cashback, and other benefits.
-
However, not all bonuses and promotions are created equal. Some of them are more suitable for video poker players than others, and some of them have terms and conditions that you have to meet before you can withdraw your winnings. Therefore, you have to be careful and read the fine print before claiming any bonus or promotion.
-
As a general rule, you should look for bonuses and promotions that have the following characteristics:
-
-
They are specifically designed for video poker players, or they allow video poker games to contribute fully or partially to the wagering requirements.
-
They have a high percentage match, a high maximum amount, and a low minimum deposit.
-
They have a low wagering requirement, a long validity period, and no maximum cashout limit.
-
They are offered by reputable and trustworthy online casinos that have good ratings and reviews.
-
-
To help you find the best video poker bonuses and promotions online, we have selected and ranked the top options based on these criteria:
A 100% match bonus up to $1,000 for new players who deposit with bitcoin or credit card. The bonus has a 25x wagering requirement, and video poker games contribute 10% to it.
-
★★★★★
-
- ... and so on.
How to Improve Your Triple Play Video Poker Strategy
-
Playing triple play video poker online is not only fun and exciting, but also rewarding and profitable, if you know how to play it well. Video poker is a game of skill and strategy, and you can improve your chances of winning by learning and applying some tips and tricks. Here are some of the best ways to improve your triple play video poker strategy:
-
The Basic Video Poker Strategy Chart
-
The first thing you need to do is to memorize the basic video poker strategy chart. This is a table that tells you the optimal move for every possible hand, based on the game variant and the paytable. You can find the strategy chart for each game online, or you can use a video poker trainer app or software that will guide you through each decision.
-
The basic video poker strategy chart is based on the principle of maximizing the expected value of each hand. The expected value is the average amount of money that you can expect to win or lose from each hand, in the long run. By following the strategy chart, you will always choose the move that gives you the highest expected value, and therefore, the highest return percentage.
-
However, since you are playing three hands at once, you have to consider how your decision affects each hand separately, as we explained before. Sometimes, you might have to deviate from the basic strategy chart and choose a different move that gives you a better overall expected value. This requires some practice and intuition, but it can make a big difference in your results.
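As a rough illustration of how a strategy chart can be applied in code, here is a minimal Python sketch: the chart becomes an ordered list of rules, and you hold whatever the first matching rule returns. This is a drastically abridged, hypothetical chart (real 9/6 Jacks or Better charts have dozens of ranked rules, and pat hands such as straights or better would sit above everything shown here), so treat it as the shape of the idea rather than a playable strategy.

```python
from collections import Counter

HIGH = "JQKA"
ROYAL = set("TJQKA")

def pair_hold(hand, high=True):
    """Indices of a pair of jacks or better (high=True) or a lower pair (high=False)."""
    for rank, n in Counter(c[0] for c in hand).items():
        if n == 2 and (rank in HIGH) == high:
            return [i for i, c in enumerate(hand) if c[0] == rank]
    return None

def four_to_royal(hand):
    """Indices of four suited royal cards (T-J-Q-K-A), if present."""
    for suit in {c[1] for c in hand}:
        idx = [i for i, c in enumerate(hand) if c[1] == suit and c[0] in ROYAL]
        if len(idx) >= 4:
            return idx[:4]
    return None

# Heavily abridged priority list; a real chart has roughly 30 ranked rules.
RULES = [
    four_to_royal,
    lambda hand: pair_hold(hand, high=True),
    lambda hand: pair_hold(hand, high=False),
]

def choose_hold(hand):
    """Return the indices to hold using the first matching rule, else hold nothing."""
    for rule in RULES:
        hold = rule(hand)
        if hold:
            return hold
    return []

print(choose_hold(["T♠", "J♠", "Q♠", "K♠", "3♦"]))   # -> [0, 1, 2, 3] (four to a royal)
print(choose_hold(["J♠", "J♦", "9♥", "8♥", "7♥"]))   # -> [0, 1] (high pair)
```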
-
The Advanced Video Poker Tips and Tricks
-
Once you have mastered the basic video poker strategy chart, you can take your game to the next level by learning some advanced video poker tips and tricks. These are some of the best ones:
-
-
Always bet five credits on each hand. This will give you the maximum payout for a royal flush, and increase your overall return percentage.
-
Always check the paytable before playing any game. Look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind.
-
Always play games that have a bonus payout for certain hands, such as four aces or four deuces. These are the hands that can give you a big boost in your winnings.
-
Always play games that have a high return percentage, such as Jacks or Better (9/6), Bonus Poker (8/5), Bonus Poker Deluxe (8/6), Double Bonus Poker (10/7), Double Double Bonus Poker (9/6), Joker Poker (Kings or Better), and Deuces Wild (9/5).
-
Always use a video poker trainer app or software to practice your skills and test your strategies. You can also use a video poker calculator or analyzer to check the expected value of each hand and move.
-
-
The Common Video Poker Mistakes to Avoid
-
Finally, you should avoid making some common video poker mistakes that can ruin your game and cost you money. These are some of them:
-
-
Playing too fast or too slow. Playing too fast can make you miss some important details or make some errors in judgment. Playing too slow can make you lose focus or get bored. You should find a comfortable pace that suits your style and mood.
-
Playing with emotions or impulses. Playing with emotions or impulses can make you chase losses, bet more than you can afford, or make irrational decisions. You should always play with logic and discipline, and stick to your budget and bankroll management.
-
Playing without a plan or a goal. Playing without a plan or a goal can make you lose track of your progress, performance, or results. You should always have a clear plan and a realistic goal when playing video poker online, and review them regularly.
-
-
Conclusion and FAQs
-
In conclusion, triple play video poker is one of the most exciting and rewarding casino games that you can play online. It combines the skill and strategy of poker with the simplicity and thrill of slot machines. And it gives you three times the action and three times the chances to win.
-
To play triple play video poker online, you need to know how to choose your cards, read the paytable, bet and win, find the best games and sites, improve your strategy, and avoid common mistakes. By following our guide, you will learn all these skills and more.
-
We hope that you enjoyed reading this article and that you found it useful and informative. If you have any questions or comments about triple play video poker online, feel free to contact us anytime. We would love to hear from you!
-
Here are some frequently asked questions about triple play video poker online:
-
What is the difference between triple play video poker and single hand video poker?
-
The main difference between triple play video poker and single hand video poker is that in triple play video poker, you play three hands at once, with the same initial cards. You can choose which cards to hold or discard for each hand separately, and then draw new cards for each hand. The final outcome of each hand is determined by the poker value of your five cards, according to the paytable of the game.
-
What are the advantages of playing triple play video poker online?
-
Some of the advantages of playing triple play video poker online are:
-
-
You get more action and excitement. Playing three hands at once means you have more opportunities to make winning combinations and hit big jackpots.
-
You get more variety and challenge. Playing different variants of triple play video poker means you have to adapt your strategy and skills to each game's rules and paytable.
-
You get more control and flexibility. Playing online means you can choose your bet size, game speed, sound effects, and other settings according to your preferences.
-
You get more convenience and comfort. Playing online means you can access your favorite games anytime and anywhere, on your computer or mobile device.
-
-
How can I improve my chances of winning at triple play video poker online?
-
Some of the ways you can improve your chances of winning at triple play video poker online are:
-
-
Learn and follow the basic video poker strategy chart for each game variant and paytable.
-
Balance the risk and reward of each hand, and choose the cards that give you the best overall expected value.
-
Bet five credits on each hand to get the maximum payout for a royal flush, and increase your overall return percentage.
-
Check the paytable before playing any game, and look for games that have high payouts for the lower-ranking hands, such as jacks or better, two pair, and three of a kind.
-
Play games that have a bonus payout for certain hands, such as four aces or four deuces.
-
Play games that have a high return percentage, such as Jacks or Better (9/6), Bonus Poker (8/5), Bonus Poker Deluxe (8/6), Double Bonus Poker (10/7), Double Double Bonus Poker (9/6), Joker Poker (Kings or Better), and Deuces Wild (9/5).
-
Use a video poker trainer app or software to practice your skills and test your strategies.
-
Avoid making common video poker mistakes, such as playing too fast or too slow, playing with emotions or impulses, or playing without a plan or a goal.
-
-
Where can I find the best triple play video poker games and sites online?
-
You can find the best triple play video poker games and sites online by using our guide. We have tested and reviewed each site based on several criteria, such as:
-
-
The quality and variety of the video poker games. We look for sites that offer different variants of triple play video poker, with high-quality graphics, sound effects, and gameplay.
-
The security and safety of the site. We look for sites that use encryption, firewalls, and other measures to protect your personal and financial information.
-
The fairness and randomness of the games. We look for sites that use certified random number generators (RNGs) to ensure that the outcomes of the games are fair and unpredictable.
-
The bonuses and promotions of the site. We look for sites that offer generous and frequent bonuses and promotions for video poker players, such as welcome bonuses, reload bonuses, loyalty programs, tournaments, and more.
-
The customer support and service of the site. We look for sites that offer friendly, professional, and responsive customer support via phone, email, live chat, or social media.
-
-
How can I play triple play video poker online for free?
-
You can play triple play video poker online for free by using one of the following methods:
-
-
You can use a free video poker app or site that offers different variants of triple play video poker. You can play with virtual coins or real money, and enjoy daily bonuses, tournaments, leaderboards, achievements, and more.
-
You can use a free trial or demo mode at a real money online casino that offers triple play video poker games. You can play with virtual coins or real money, but you will not be able to withdraw your winnings until you make a deposit.
-
You can use a no deposit bonus or free spins at a real money online casino that offers triple play video poker games. You can play with real money without making a deposit, and keep your winnings if you meet the wagering requirements.
-
-
However, keep in mind that playing for free is not the same as playing for real money. When you play for free, you might not have access to all the features, games, and bonuses that the site offers. You might also have a different mindset and attitude when you play for free, which can affect your strategy and performance. Therefore, we recommend playing for real money if you want to enjoy the full benefits and excitement of triple play video poker online.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md b/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md
deleted file mode 100644
index 7d32a6d07622290cb7f8c916e215c2892c9676d8..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Geometry Dash World APK Play Online Levels Daily Quests and More in this Rythm-based Adventure.md
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
Geometry Dash World Full APK: A Rhythm-Based Action Platformer
-
Do you love jumping, flying and flipping through challenging levels? Do you enjoy listening to catchy music while playing a fun game? If yes, then you should try Geometry Dash World, a popular arcade game that will test your reflexes and skills. In this article, we will tell you everything you need to know about Geometry Dash World full apk, how to download and install it, how to play online levels, and how to customize your character.
-
What is Geometry Dash World?
-
Geometry Dash World is a game developed by RobTop Games, the same creator of the original Geometry Dash. It is a spin-off of the main series, featuring new levels, music, monsters, and everything else. It was released in December 2016 for Android and iOS devices.
The gameplay of Geometry Dash World is similar to the other games in the series. You control a geometric shape that can jump, fly, and flip through various obstacles. You have to tap the screen at the right time to avoid crashing or falling. The game is rhythm-based, meaning that the music syncs with the level design and the obstacles. The game is also very hard, requiring precise timing and fast reactions.
-
The features of Geometry Dash World
-
Geometry Dash World has many features that make it an enjoyable and addictive game. Some of these features are:
-
-
Ten unique levels with music from Dex Arson, Waterflame and F-777
-
Daily quests and rewards
-
Online levels created by the Geometry Dash community
-
Unique icons and colors to customize your character
-
Rockets, gravity switches, portals, and more
-
Practice mode to sharpen your skills
-
Near impossible challenges
-
-
How to download and install Geometry Dash World full apk?
-
If you want to enjoy all the features of Geometry Dash World without any limitations or ads, you can download and install the full apk version of the game. This will give you access to all the levels, icons, colors, secrets, and achievements in the game.
-
The benefits of downloading the full apk
-
Downloading the full apk of Geometry Dash World has many benefits, such as:
-
-
You can play offline without any internet connection
-
You can save your progress and data on your device
-
You can avoid any annoying ads or pop-ups
-
You can support the developer and appreciate their work
-
-
The steps to download and install the full apk
-
To download and install the full apk of Geometry Dash World, you need to follow these steps:
-
-
Go to [this link] and download the apk file (71 MB)
-
Enable unknown sources on your device settings
-
Locate the downloaded file on your file manager and tap on it
-
Install the apk file and wait for it to finish
-
Launch the game and enjoy!
-
-
How to play Geometry Dash World online levels?
-
One of the best features of Geometry Dash World is that you can play online levels created by other players from around the world. These levels are uploaded to a server where you can browse, rate, comment, and play them. You can also create your own levels using the level editor and share them with others.
-
geometry dash world mod apk unlimited everything
-geometry dash world hack apk download
-geometry dash world apk latest version
-geometry dash world full version apk free
-geometry dash world all levels unlocked apk
-geometry dash world 2.2 apk download
-geometry dash world android apk
-geometry dash world apk mod menu
-geometry dash world cheats apk
-geometry dash world cracked apk
-geometry dash world download apk pure
-geometry dash world editor apk
-geometry dash world free apk
-geometry dash world full game apk
-geometry dash world hacked apk
-geometry dash world ios apk
-geometry dash world mod apk android 1
-geometry dash world mod apk revdl
-geometry dash world new update apk
-geometry dash world online apk
-geometry dash world premium apk
-geometry dash world pro apk
-geometry dash world rexdl apk
-geometry dash world subzero apk download
-geometry dash world unlocked apk
- The types of online levels in Geometry Dash World
-
There are different types of online levels in Geometry Dash World, depending on their difficulty, length, and style. Some of the most common types are:
-
-
| Type | Description |
| --- | --- |
| Easy | These levels are suitable for beginners and have simple obstacles and patterns. |
| Normal | These levels are slightly harder than easy levels and have more variety and challenge. |
| Hard | These levels are for experienced players and have complex and fast obstacles and patterns. |
| Harder | These levels are for advanced players and have very difficult and tricky obstacles and patterns. |
| Insane | These levels are for expert players and have extremely hard and insane obstacles and patterns. |
| Demon | These levels are for the most skilled players and have nearly impossible and frustrating obstacles and patterns. |
| Short | These levels are less than 30 seconds long and have a high intensity and pace. |
| Medium | These levels are between 30 seconds and 1 minute long and have a balanced intensity and pace. |
| Long | These levels are more than 1 minute long and have a low intensity and pace. |
| XStep | These levels are inspired by the XStep level from the original Geometry Dash and have a lot of spikes, jumps, and portals. |
| Theory of Everything | These levels are inspired by the Theory of Everything level from the original Geometry Dash and have a lot of gravity switches, flying sections, and wave mode. |
| ... | |
-
-
The tips and tricks to play online levels
-
To play online levels in Geometry Dash World, you need to have some tips and tricks up your sleeve. Here are some of them:
-
-
Practice mode: Use the practice mode to learn the layout and patterns of the level before attempting it in normal mode. You can place checkpoints along the way to resume from where you left off.
-
Rhythm: Follow the rhythm of the music to time your taps correctly. The music is synced with the level design and the obstacles, so you can use it as a guide.
-
Persistence: Don't give up easily if you fail a level. Try again and again until you master it. You will improve your skills and reflexes as you play more.
-
Finger position: Find a comfortable finger position to tap the screen. You can use one finger or two fingers, depending on your preference. You can also adjust the sensitivity of the touch screen in the settings.
-
Focusing: Focus on the level and avoid any distractions. You need to pay attention to every detail and movement in the level. You can also use headphones to immerse yourself in the music.
-
Fun: Have fun playing online levels. Don't stress too much about completing them or getting high scores. Enjoy the game and appreciate the creativity of the level creators.
-
-
How to customize your character in Geometry Dash World?
-
Another cool feature of Geometry Dash World is that you can customize your character with different icons and colors. You can also unlock secrets and achievements that will give you more options to personalize your character.
-
The icons and colors in Geometry Dash World
-
You can choose from various icons and colors to change the appearance of your character. You can select an icon for your cube, ship, ball, UFO, wave, robot, or spider form. You can also select a primary color and a secondary color for your character. You can unlock more icons and colors by completing levels, quests, achievements, or secrets.
-
The secrets and achievements in Geometry Dash World
-
You can unlock secrets and achievements by performing certain actions or finding hidden objects in the game. Some of these secrets and achievements are:
-
-
The Vault: The Vault is a secret area where you can enter codes to unlock rewards. You can find The Vault by tapping on the lock icon in the settings menu. You can get codes by completing quests or finding clues in the game.
-
The Treasure Room: The Treasure Room is another secret area where you can collect chests that contain rewards. You can find The Treasure Room by tapping on the purple door in The World map. You can get chests by completing achievements or finding keys in the game.
-
The Shop: The Shop is where you can buy icons, colors, and trails for your character using orbs. You can find The Shop by tapping on the shop icon in the main menu. You can get orbs by playing levels or opening chests.
-
The Community Shop: The Community Shop is where you can buy icons and colors made by other players using diamonds. You can find The Community Shop by tapping on the community shop icon in the main menu. You can get diamonds by playing online levels or opening chests.
-
The Scratch's Shop: The Scratch's Shop is where you can buy special icons and colors using shards. You can find The Scratch's Shop by tapping on the scratch icon in the main menu. You can get shards by playing levels or opening chests.
-
-
Conclusion
-
Geometry Dash World is a fun and challenging game that will keep you entertained for hours. You can download and install the full apk version of the game to enjoy all the features without any limitations or ads. You can also play online levels created by other players and customize your character with different icons and colors. Geometry Dash World is a game that will test your skills, reflexes, and rhythm. Are you ready to dash into the world of geometry?
-
FAQs
-
Here are some frequently asked questions about Geometry Dash World full apk:
-
-
Q: Is Geometry Dash World full apk safe to download and install?
-
A: Yes, Geometry Dash World full apk is safe to download and install, as long as you use a trusted source like [this one]. However, you should always be careful when downloading files from unknown sources and scan them for viruses or malware before installing them.
-
Q: Do I need to root my device to install Geometry Dash World full apk?
-
A: No, you don't need to root your device to install Geometry Dash World full apk. You just need to enable unknown sources on your device settings and follow the steps mentioned above.
-
Q: Can I play Geometry Dash World full apk on PC?
-
A: Yes, you can play Geometry Dash World full apk on PC using an Android emulator like BlueStacks or NoxPlayer. You just need to download and install the emulator on your PC and then follow the same steps as you would on your mobile device.
-
Q: How can I update Geometry Dash World full apk?
-
A: To update Geometry Dash World full apk, you need to download and install the latest version of the apk file from [this link]. You don't need to uninstall the previous version, as the new one will overwrite it.
-
Q: How can I contact the developer of Geometry Dash World?
-
A: You can contact the developer of Geometry Dash World by sending an email to support@robtopgames.com or visiting their website at www.robtopgames.com.
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md b/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md
deleted file mode 100644
index 1df97e31ffd3641e0e934600fa904faadfdc709c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Use GreenPois0n RC5 for Windows to Jailbreak iPhone iPad and iPod Touch on iOS 4.2.1.md
+++ /dev/null
@@ -1,118 +0,0 @@
-
-
How to Download Greenpois0n RC5 4.2.1 for Windows
-
If you are an iPhone, iPad or iPod touch user who wants to jailbreak your device running iOS 4.2.1, you might be interested in downloading Greenpois0n RC5 4.2.1 for Windows. This is a tool that allows you to jailbreak your device untethered, meaning that you don't have to connect it to a computer every time you reboot it.
In this article, we will explain what Greenpois0n RC5 4.2.1 is, how to download it for Windows, and how to use it to jailbreak your device. We will also show you how to use Cydia, the jailbreak app store, where you can find many useful apps, tweaks and themes that are not available on the official App Store.
-
What is Greenpois0n RC5 4.2.1?
-
Greenpois0n RC5 4.2.1 is a tool developed by the Chronic Dev Team, a group of hackers who specialize in jailbreaking iOS devices. Jailbreaking is a process that removes the restrictions imposed by Apple on its devices, allowing users to customize them and install apps that are not approved by Apple.
-
A tool to jailbreak iOS 4.2.1 untethered
-
Greenpois0n RC5 4.2.1 is designed to jailbreak iOS devices running iOS 4.2.1, which is an older version of iOS that was released in November 2010. It supports iPhone 4, iPhone 3GS, iPod touch 4G/3G/2G and iPad.
-
One of the main features of Greenpois0n RC5 4.2.1 is that it can jailbreak iOS devices untethered, which means that you don't have to connect your device to a computer every time you reboot it or turn it off and on again. This makes your device more convenient and stable after jailbreaking.
-
Compatible with iPhone, iPad and iPod touch
-
Greenpois0n RC5 4.2.1 is compatible with the following iOS devices:
-
| Device | Model |
| --- | --- |
| iPhone | iPhone 4, iPhone 3GS |
| iPad | iPad (first generation) |
| iPod touch | iPod touch 4G/3G/2G |
-
Benefits of jailbreaking with Greenpois0n RC5 4.2.1
-
Jailbreaking your device with Greenpois0n RC5 4.2.1 has many benefits, such as:
-
-
You can install Cydia, the jailbreak app store, where you can find many apps, tweaks and themes that are not available on the official App Store.
-
You can customize your device's look and feel, such as changing the icons, fonts, colors, wallpapers, etc.
-
You can unlock your device and use it with any carrier or network of your choice.
-
You can enhance your device's performance and functionality, such as adding multitasking, widgets, gestures, etc.
-
You can access the root file system of your device and modify it as you wish.
-
-
Of course, jailbreaking also has some risks and drawbacks, such as voiding your warranty, exposing your device to security threats, and causing instability or compatibility issues. Therefore, you should always back up your device before jailbreaking and follow the instructions carefully.
-
How to Download Greenpois0n RC5 4.2.1 for Windows
-
If you are ready to jailbreak your device with Greenpois0n RC5 4.2.1, you will need to download the tool for Windows and follow the steps below.
-
Requirements
-
Before you start, make sure you have the following requirements:
-
How to jailbreak iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-GreenPois0n RC5 Windows version released - iOS 4.2.1 untethered jailbreak for iPhone, iPad and iPod touch
-GreenPois0n RC5 b2 - Jailbreak iOS 4.2.1 firmware untethered on Windows
-Download GreenPois0n RC5 for Windows - Untethered jailbreak for iOS 4.2.1 firmware
-GreenPois0n RC5 Windows download link - Jailbreak iOS 4.2.1 untethered without SHSH blobs
-GreenPois0n RC5 tutorial for Windows - Step by step guide to jailbreak iOS 4.2.1 untethered
-GreenPois0n RC5 Windows problems and solutions - How to fix errors and issues with iOS 4.2.1 untethered jailbreak
-GreenPois0n RC5 vs Redsn0w - Which is better for iOS 4.2.1 untethered jailbreak on Windows
-GreenPois0n RC5 Windows review - Pros and cons of iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows alternatives - Other tools to jailbreak iOS 4.2.1 untethered on Windows
-GreenPois0n RC5 Windows requirements - What you need to jailbreak iOS 4.2.1 untethered with GreenPois0n
-GreenPois0n RC5 Windows features - What's new and improved in iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows compatibility - Which devices and firmware versions are supported by iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows FAQs - Frequently asked questions and answers about iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows video tutorial - Watch how to jailbreak iOS 4.2.1 untethered with GreenPois0n on Windows
-GreenPois0n RC5 Windows screenshots - See how iOS 4.2.1 untethered jailbreak with GreenPois0n looks like on Windows
-GreenPois0n RC5 Windows tips and tricks - How to get the most out of iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows updates - How to check and install the latest version of iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows feedback - How to share your experience and opinion about iOS 4.2.1 untethered jailbreak with GreenPois0n
-GreenPois0n RC5 Windows support - How to get help and assistance with iOS 4.2.1 untethered jailbreak with GreenPois0n
-Download GreenPois0n RC5 for Mac - Untethered jailbreak for iOS 4.2.1 firmware on Mac OS X
-How to switch from tethered to untethered jailbreak on iOS 4.2.1 with GreenPois0n RC5 for Windows
-How to restore your device after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to backup your data before jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to customize your device after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to install Cydia apps and tweaks after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to unlock your iPhone after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to fix battery drain issue after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to fix WiFi issue after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-How to fix GPS issue after jailbreaking iOS 4.2.1 untethered with GreenPois0n RC5 for Windows
-
-
A Windows PC with an internet connection.
-
A USB cable to connect your device to your PC.
-
A device running iOS 4.2.1 that is compatible with Greenpois0n RC5 4.2.1 (see above).
-
A backup of your device's data in case something goes wrong.
-
-
Steps
-
Once you have everything ready, you can proceed with the following steps:
-
Step 1: Backup your device and quit iTunes
-
The first step is to back up your device using iTunes or iCloud. This will ensure that you don't lose any important data in case something goes wrong during the jailbreak process. To back up your device using iTunes, connect it to your PC and open iTunes. Then, click on the device icon in the upper left corner and select "Back Up Now". To back up your device using iCloud, go to Settings > iCloud > Backup and tap on "Back Up Now".
-
After backing up your device, make sure you quit iTunes and any other programs that might interfere with the jailbreak process.
-
Step 2: Download Greenpois0n RC5 4.2.1 for Windows
-
The next step is to download Greenpois0n RC5 4.2.1 for Windows from the official website of the Chronic Dev Team. You can also use this mirror link if the official website is down or slow. The file size is about 16 MB and the file name is gp_win_rc5_b4.zip.
-
After downloading the file, extract it to a folder on your desktop or any other location that is easy to access.
-
Step 3: Run Greenpois0n and follow the instructions
-
The third step is to run Greenpois0n and follow the instructions on the screen. To do this, double-click on the greenpois0n.exe file that you extracted in the previous step, and the Greenpois0n window will open.
-
-
Before you click on "Prepare to Jailbreak (DFU)", make sure your device is turned off and connected to your PC via USB. Then, click on the button and follow the instructions to put your device into DFU mode. DFU mode is a special mode that allows your device to communicate with Greenpois0n and accept the jailbreak code. To enter DFU mode, you will need to press and hold the power and home buttons on your device for a certain amount of time.
-
The instructions will tell you how long to press each button and when to release them. You will also see a countdown timer on the screen that will guide you through the process. If you do everything correctly, you will see a message saying "Ready to Jailbreak" and a button saying "Jailbreak". If not, you will see a message saying "Try Again" and a button saying "Retry". In that case, try again until you succeed.
-
Once you are ready to jailbreak, click on the "Jailbreak" button and wait for Greenpois0n to do its magic. You will see some code running on your device's screen and a progress bar on your PC's screen. This may take a few minutes, so be patient and do not disconnect or interrupt your device.
-
Step 4: Install Cydia from the Loader app
-
The final step is to install Cydia from the Loader app that Greenpois0n will install on your device after the jailbreak. Cydia is the jailbreak app store, where you can find many apps, tweaks and themes that are not available on the official App Store.
-
To install Cydia, go to your device's home screen and look for an icon that says "Loader". Tap on it to open the Loader screen.
-
-
Tap on the "Cydia" button and wait for the app to download and install. You may need to confirm some prompts or restart your device during the process. Once Cydia is installed, you will see its icon on your home screen. You can then delete the Loader app by tapping and holding on it and tapping on the "X" button.
-
How to Use Cydia, the Jailbreak App Store
-
Now that you have Cydia on your device, you can start exploring the world of jailbreak apps, tweaks and themes. Here are some tips on how to use Cydia:
-
What is Cydia?
-
Cydia is an app that allows you to browse and install software packages that are not available on the official App Store. These packages are created by independent developers and hackers who want to enhance or modify the iOS experience. Some of these packages are free, while others are paid or require a donation.
-
Cydia works by accessing repositories, which are online sources that host the packages. There are many repositories that you can add to Cydia, each offering different kinds of packages. Some of the most popular repositories are BigBoss, ModMyi, ZodTTD & MacCiti, and Saurik's own repository.
-
How to access Cydia
-
To access Cydia, simply tap on its icon on your home screen.
-
-
On the bottom of the screen, you will see five tabs: Home, Sections, Changes, Manage and Search. Here is what each tab does:
-
-
Home: This tab shows you some information and news about Cydia, such as featured packages, updates, tips and tutorials.
-
Sections: This tab shows you the categories of packages that are available on Cydia, such as Themes, Tweaks, Utilities, Games, etc. You can browse through them and find what you are looking for.
-
Changes: This tab shows you the latest updates and additions to the packages that are available on Cydia. You can also refresh this tab to check for new updates.
-
Manage: This tab shows you the sources (repositories) and packages that you have installed or added to Cydia. You can also add or remove sources and packages from here.
-
Search: This tab allows you to search for a specific package by name or keyword. You can also use filters to narrow down your search results.
-
-
How to install apps, tweaks and themes from Cydia
-
To install an app, tweak or theme from Cydia, follow these steps:
-
-
Find the package that you want to install by browsing through the sections or searching for it.
-
Tap on the package name and you will see a screen with more information about it, such as description, screenshots, ratings, etc.
-
If you want to install the package, tap on the "Install" button on the top right corner. If the package is paid or requires a donation, you will need to purchase it or donate first before installing it.
-
You will see a confirmation screen with the details of the package and its dependencies (other packages that are required for it to work). Tap on "Confirm" to proceed with the installation.
-
Cydia will download and install the package and its dependencies. You may need to restart your device or respring (restart the springboard) after the installation.
-
Once the installation is done, you will see a screen with a button saying "Return to Cydia". Tap on it and you will go back to Cydia.
-
-
You can now enjoy your new app, tweak or theme on your device. To uninstall a package from Cydia, simply go to the Manage tab > Packages > tap on the package name > tap on "Modify" > tap on "Remove".
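Under the hood, Cydia is a graphical front end for APT, the same package manager used by Debian-based Linux systems. As a rough illustration of the install steps above, here is a minimal Python sketch that installs a package from the command line instead of through the Cydia app. It assumes you have installed OpenSSH and the APT command-line tools from Cydia, changed the default root password, and know your device's Wi-Fi IP address; the address and package identifier below are placeholders, not values from this article.
```python
# A minimal sketch, not an official Cydia or Chronic Dev Team tool.
# Assumes OpenSSH and the APT command-line package are installed from Cydia.
import subprocess

DEVICE = "root@192.168.1.50"   # placeholder: your device's Wi-Fi IP address
PACKAGE = "sbsettings"         # placeholder: identifier of the package to install

def ssh(command: str) -> None:
    """Run one command on the jailbroken device over SSH and stop on failure."""
    subprocess.run(["ssh", DEVICE, command], check=True)

ssh("apt-get update")                  # refresh the repository indexes
ssh(f"apt-get install -y {PACKAGE}")   # install the package and its dependencies
```
The Cydia app does essentially the same thing for you, with a friendlier interface and an automatic respring where a package needs one.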
-
Conclusion
Greenpois0n RC5 4.2.1 is a free tool that lets you jailbreak an iPhone, iPad or iPod touch running iOS 4.2.1 untethered, so you can install Cydia and customize your device beyond what Apple allows. Back up your device before you start, download the tool only from the official Chronic Dev Team website or a trusted mirror, and follow the steps above carefully to avoid problems.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md b/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md
deleted file mode 100644
index 8ab510dedc4396b1aaa16aa0d00bc7ee0b83fec9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Mortal Kombat XL APK - Join the Online Multiplayer Faction Wars and Compete with Other Players Worldwide.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Mortal Kombat XL Download APK: How to Play the Ultimate Fighting Game on Your Android Device
-
If you are a fan of fighting games, you must have heard of Mortal Kombat, one of the most iconic and brutal franchises in the genre. Mortal Kombat has been around since 1992, and it has evolved over the years with new characters, graphics, gameplay, and storylines. One of the latest and most popular entries in the series is Mortal Kombat XL, which was released in 2016 as an enhanced version of Mortal Kombat X.
-
Mortal Kombat XL is a game that offers a lot of content, variety, and fun for both casual and hardcore players. It has a rich roster of fighters, each with their own unique abilities, moves, and fatalities. It has several game modes, such as story mode, arcade mode, online mode, and tower mode. It has stunning graphics, animations, and sound effects that make every fight feel realistic and immersive. And it has the signature gore and violence that make Mortal Kombat stand out from other fighting games.
But what if you want to play Mortal Kombat XL on your Android device? Is it possible to enjoy this game on your smartphone or tablet? The answer is yes, thanks to the Mortal Kombat XL download APK. This is a file that allows you to install and run Mortal Kombat XL on your Android device without any hassle. In this article, we will show you how to download and install Mortal Kombat XL APK on your Android device, and how to enjoy it to the fullest.
-
What is Mortal Kombat XL?
-
Mortal Kombat XL is an updated version of Mortal Kombat X, which was released in 2015. Mortal Kombat X is the tenth main installment in the Mortal Kombat series, and it continues the story of the previous games. It is set 25 years after the events of Mortal Kombat 9, and it features a new generation of fighters who have to face a new threat from the Outworld.
-
The main features of Mortal Kombat XL
-
Mortal Kombat XL has many features that make it one of the best fighting games ever made. Some of these features are:
-
-
It has over 30 playable characters, including fan-favorites like Scorpion, Sub-Zero, Raiden, Liu Kang, Kitana, Sonya Blade, Johnny Cage, and more. It also has new characters like Cassie Cage, Jacqui Briggs, Takeda Takahashi, D'Vorah, Erron Black, Ferra/Torr, and more. And it has guest characters from other franchises like Alien, Predator, Jason Voorhees, and Leatherface, plus returning veterans such as Bo' Rai Cho.
-
It has three different variations for each character, which change their appearance, abilities, moves, and strategies. For example, Scorpion can choose between the Ninjutsu, Hellfire, and Inferno variations.
-
It has a cinematic story mode that spans 12 chapters and follows the events of Mortal Kombat X. It also has an arcade mode that lets you fight against random opponents until you reach the final boss.
-
It has an online mode that lets you compete with other players around the world in ranked matches or casual matches. You can also join factions and participate in faction wars that reward you with points and rewards. You can also chat with other players and create or join rooms for custom matches.
-
It has a tower mode that lets you challenge yourself with different objectives and modifiers. For example, you can play the Test Your Luck tower, which randomly applies effects like low gravity, poison, bombs, or inverted controls to your fights. Or you can play the Living Towers, which change every hour, day, or week with new challenges and rewards.
-
It has a krypt mode that lets you explore a vast underground area filled with secrets, puzzles, and treasures. You can use the coins you earn from playing the game to unlock new costumes, fatalities, brutalities, concept art, and more.
-
It has stunning graphics, built on a heavily modified Unreal Engine 3, that create realistic and detailed environments, characters, and effects. It also has smooth and responsive gameplay that supports 60 frames per second on most devices.
-
It has the signature gore and violence that make Mortal Kombat famous. It has brutal and creative fatalities that finish off your opponents in gruesome ways. It also has brutalities that let you perform quick and savage kills during the fight. And it has x-ray moves that show the damage you inflict on your opponent's bones and organs in slow motion.
-
-
The difference between Mortal Kombat X and Mortal Kombat XL
-
Mortal Kombat XL is an enhanced version of Mortal Kombat X that includes all the downloadable content (DLC) that was released for the original game. This means that Mortal Kombat XL has more characters, costumes, stages, and features than Mortal Kombat X. For example, Mortal Kombat XL has nine additional characters that were not available in Mortal Kombat X: Alien, Predator, Jason Voorhees, Leatherface, Bo' Rai Cho, Triborg, Tremor, Tanya, and Goro. It also has new costumes for some of the existing characters, such as Cyber Sub-Zero, Revenant Liu Kang, Revenant Kitana, Revenant Jax, Revenant Kung Lao, Dark Emperor Liu Kang, Dark Empress Kitana, and more. It also has new stages for some of the game modes, such as the Pit Stage for arcade mode and the Refugee Kamp Stage for online mode.
-
If you already own Mortal Kombat X on your Android device, you can upgrade to Mortal Kombat XL by purchasing the XL Pack from the in-game store. This will give you access to all the DLC content that is included in Mortal Kombat XL. However, if you do not own Mortal Kombat X on your Android device, you can download and install Mortal Kombat XL APK directly from a trusted source online.
-
How to download and install Mortal Kombat XL APK on your Android device
-
If you want to play Mortal Kombat XL on your Android device, you will need to download and install Mortal Kombat XL APK from a reliable website. This is a file that contains the game data and allows you to run it on your device without any problems. However, before you do that, you will need to make sure that your device meets the requirements for running Mortal Kombat XL APK.
-
mortal kombat xl apk for android free download
-mortal kombat xl mod apk download latest version
-how to install mortal kombat xl on android
-mortal kombat xl apk + obb download
-mortal kombat xl android gameplay
-mortal kombat xl apk offline
-mortal kombat xl apk filehippo
-mortal kombat xl characters and moves apk
-mortal kombat xl apk no verification
-mortal kombat xl apk highly compressed
-mortal kombat xl apk revdl
-mortal kombat xl apk rexdl
-mortal kombat xl apk pure
-mortal kombat xl apk uptodown
-mortal kombat xl apk apkpure
-mortal kombat xl apk data download
-mortal kombat xl apk and data
-mortal kombat xl apk android 1
-mortal kombat xl apk android republic
-mortal kombat xl apk android oyun club
-mortal kombat xl apk all characters unlocked
-mortal kombat xl apk and obb file download
-mortal kombat xl apk blackmod
-mortal kombat xl apk best settings
-mortal kombat xl apk by rexdl
-mortal kombat xl apk by revdl
-mortal kombat xl apk by apkpure
-mortal kombat xl apk by uptodown
-mortal kombat xl cheats and hacks apk download
-mortal kombat xl cracked apk download
-mortal kombat xl download for android phone
-mortal kombat xl download for android tablet
-mortal kombat xl download for android device
-mortal kombat xl download for android emulator
-mortal kombat xl download for android free full version
-mortal kombat xl full game download for android
-mortal kombat xl free download for android mobile
-mortal kombat x vs xl android download
-how to download and play mortal kombat xl on android
-how to download and install mortal kombat xl on android without verification
-how to download and run mortal kombat xl on android with ppsspp emulator
-how to download and update mortal kombat x to xl on android
-is there a mortal kombat xl for android?
-is it possible to play mortal kombat xl on android?
-is it safe to download mortal kombat xl on android?
-
The requirements for running Mortal Kombat XL APK
-
Mortal Kombat XL APK is a large and demanding game that requires a powerful device to run smoothly and without errors. Here are some of the minimum requirements for running Mortal Kombat XL APK on your Android device:
-
-
Your device must have Android 5.0 or higher as its operating system.
-
Your device must have at least 2 GB of RAM and 4 GB of free storage space.
-
Your device must have a quad-core processor with a clock speed of at least 1.5 GHz.
-
Your device must have a GPU that supports OpenGL ES 3.1 or higher.
-
Your device must have a stable internet connection for downloading the game data and playing online modes.
-
-
If your device meets these requirements, you can proceed to download and install Mortal Kombat XL APK on your Android device.
-
The steps for downloading and installing Mortal Kombat XL APK
-
Here are the steps for downloading and installing Mortal Kombat XL APK on your Android device; a scripted alternative using adb is sketched after the list:
-
-
Go to a reputable website that offers Mortal Kombat XL APK for download. Make sure that the website is safe and secure by checking its reviews and ratings.
-
Download the Mortal Kombat XL APK file to your device. The file size may vary depending on the website, but it should be around 1 GB.
-
Once the download is complete, go to your device settings and enable the installation of apps from unknown sources. This will allow you to install the Mortal Kombat XL APK file on your device.
-
Locate the Mortal Kombat XL APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
After the installation is done, launch the Mortal Kombat XL app on your device. The app will ask you to download additional data for the game, which may take some time depending on your internet speed. The additional data size may vary depending on the website, but it should be around 2 GB.
-
Once the additional data is downloaded, you can start playing Mortal Kombat XL on your Android device. Enjoy!
-
-
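For readers who prefer the command line, here is the scripted alternative mentioned above: a minimal Python sketch of how the install and data-copy steps could be driven with adb instead of tapping through the phone's interface. It assumes the Android platform-tools (adb) are installed on your PC and USB debugging is enabled on the device; the file names and package name are placeholders, because the real values depend on the site you download from.
```python
# A minimal sketch, assuming adb is on the PATH and USB debugging is enabled.
# The file names and package name below are placeholders, not real values.
import subprocess

APK = "mortal-kombat-xl.apk"                      # placeholder APK file name
OBB = "main.1.com.example.mkxl.obb"               # placeholder OBB file name
OBB_DIR = "/sdcard/Android/obb/com.example.mkxl"  # placeholder package folder

def adb(*args: str) -> None:
    """Run one adb command and stop if it fails."""
    subprocess.run(["adb", *args], check=True)

adb("install", "-r", APK)             # install (or reinstall) the APK
adb("shell", "mkdir", "-p", OBB_DIR)  # create the folder the game expects
adb("push", OBB, OBB_DIR)             # copy the OBB data next to the install
```
This does nothing the manual steps don't already do; it is simply convenient if you set the game up on more than one device.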
The tips for playing Mortal Kombat XL APK smoothly and safely
-
Mortal Kombat XL APK is a great way to enjoy Mortal Kombat XL on your Android device, but it also comes with some risks and challenges. Here are some tips for playing Mortal Kombat XL APK smoothly and safely:
-
-
Make sure that you download Mortal Kombat XL APK from a trusted and verified website. Do not download it from any random or suspicious website, as it may contain viruses, malware, or spyware that can harm your device or steal your personal information.
-
Make sure that you have enough storage space and battery life on your device before downloading and installing Mortal Kombat XL APK. The game requires a lot of space and power to run properly, so you do not want to run out of either while playing.
-
Make sure that you have a good internet connection when downloading the game data and playing online modes. The game requires a stable and fast internet connection to download the data and connect with other players. If your connection is slow or unstable, you may experience lag, glitches, or disconnections while playing.
-
Make sure that you update the game regularly when new updates are available. The game developers may release new updates that fix bugs, improve performance, or add new features to the game. You can check for updates in the game settings or in the app store.
-
Make sure that you play the game responsibly and respectfully. Do not cheat, hack, or mod the game in any way, as it may ruin the game experience for yourself and others. Do not harass, insult, or threaten other players online, as it may result in a ban or a report. And do not play the game for too long or too often, as it may affect your health and well-being.
-
-
How to enjoy Mortal Kombat XL APK to the fullest
-
Mortal Kombat XL APK is a game that offers a lot of fun and excitement for anyone who loves fighting games. It has a lot of content, variety, and challenge that will keep you entertained for hours. Here are some tips on how to enjoy Mortal Kombat XL APK to the fullest:
-
The best characters to use in Mortal Kombat XL APK
-
Mortal Kombat XL APK has a large and diverse roster of characters that you can choose from. Each character has their own strengths, weaknesses, styles, and personalities that make them unique and fun to play with. However, some characters may be better than others depending on your preferences and skills. Here are some of the best characters to use in Mortal Kombat XL APK:
-
-
| Character | Reason |
| --- | --- |
| Scorpion | Scorpion is one of the most iconic and popular characters in Mortal Kombat history. He is a ninja who wields a spear and controls fire. He is fast, agile, and versatile, with a lot of combos, mix-ups, and mobility options. He can also teleport behind his opponents and catch them off guard. He is a great character for beginners and experts alike. |
| Sub-Zero | Sub-Zero is another classic and fan-favorite character in Mortal Kombat history. He is a ninja who manipulates ice and cold. He is strong, durable, and defensive, with a lot of tools to control the space and tempo of the fight. He can freeze his opponents with his ice ball or ice clone, create ice weapons or walls, and slide under his opponents' attacks. He is a great character for intermediate and advanced players. |
| Cassie Cage | Cassie Cage is one of the new characters introduced in Mortal Kombat X. She is the daughter of Johnny Cage and Sonya Blade, and she inherits their fighting skills and charisma. She is a balanced and well-rounded character, with a lot of options for offense and defense. She can use her pistols, baton, drone, or martial arts to attack her opponents from different ranges and angles. She can also use her nut punch or x-ray move to deal massive damage and stun her opponents. She is a great character for any level of player. |
| Alien | Alien is one of the guest characters from the Alien franchise. He is a terrifying and deadly creature that uses his claws, tail, teeth, and acid to hunt and kill his prey. He is fast, aggressive, and unpredictable, with a lot of pressure, damage, and range. He can also use his facehugger or chestburster to create traps or setups for his opponents. He is a great character for advanced and expert players. |
-
-
The best game modes to play in Mortal Kombat XL APK
-
Mortal Kombat XL APK has several game modes that you can play and enjoy. Each game mode has its own objectives, rules, and rewards that make them different and fun. Here are some of the best game modes to play in Mortal Kombat XL APK:
-
-
Story mode: This is the mode where you can experience the story of Mortal Kombat X and XL. You can follow the events of the game through 12 chapters, each focusing on a different character. You can watch cinematic cutscenes, dialogue, and fights that advance the plot and reveal the secrets of the Mortal Kombat universe. You can also unlock new costumes and achievements by completing the story mode.
-
Arcade mode: This is the mode where you can fight against random opponents until you reach the final boss. You can choose your character, variation, difficulty, and number of rounds. You can also see your stats, such as wins, losses, fatalities, brutalities, and more. You can also unlock new endings for each character by completing the arcade mode.
-
Online mode: This is the mode where you can compete with other players around the world in various modes. You can play ranked matches or casual matches with players of similar skill level. You can also join factions and participate in faction wars that reward you with points and rewards. You can also chat with other players and create or join rooms for custom matches.
-
Tower mode: This is the mode where you can challenge yourself with different objectives and modifiers. You can play the Test Your Luck tower, which randomly applies effects like low gravity, poison, bombs, or inverted controls to your fights. Or you can play the Living Towers, which change every hour, day, or week with new challenges and rewards.
-
Krypt mode: This is the mode where you can explore a vast underground area filled with secrets, puzzles, and treasures. You can use the coins you earn from playing the game to unlock new costumes, fatalities, brutalities, concept art, and more.
-
-
The best tips and tricks to master Mortal Kombat XL APK
-
Mortal Kombat XL APK is a game that requires skill, strategy, and practice to master. It is not a game that you can win by button mashing or spamming moves. It is a game that rewards you for learning the mechanics, characters, combos, and tactics of the game. Here are some of the best tips and tricks to master Mortal Kombat XL APK:
-
-
Learn the basics: Before you jump into the game modes, make sure that you learn the basics of the game. Learn how to move, block, attack, grab, throw, break, and counter with your chosen character.
-
FAQs
-
Is Mortal Kombat XL APK safe to download?
-
Mortal Kombat XL APK is safe to download as long as you get it from a trusted and verified website. You should always check the reviews and ratings of the website before downloading and installing Mortal Kombat XL APK.
-
Is Mortal Kombat XL APK legal?
-
Mortal Kombat XL APK is legal to download and install from a legitimate website. However, some websites may offer pirated or cracked files that violate the intellectual property rights of the game developers. Therefore, you should always respect the law and the game developers by downloading and installing Mortal Kombat XL APK from an authorized website.
-
Is Mortal Kombat XL APK compatible with my device?
-
Mortal Kombat XL APK is compatible with most Android devices that meet the minimum requirements for running the game. However, some devices may have issues or errors with the game due to different specifications or models. Therefore, you should always check the compatibility of your device with the game before downloading and installing Mortal Kombat XL APK.
-
How can I contact the support team of Mortal Kombat XL APK?
-
If you have any questions, problems, or feedback about Mortal Kombat XL APK, you can contact the support team of the game by visiting their official website, social media pages, or email address. You can also check their online forums, FAQs, or guides for more information and help.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md b/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md
deleted file mode 100644
index c94403867274d2c4bed7906e48bdcc9d0f99557a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Play Melon Playground on Your Mac with BlueStacks The Ultimate Android Emulator for PC and Mac.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Can You Download Melon Playground on Mac?
-
Melon Playground is a popular sandbox game that lets you unleash your creativity and imagination. You can use various items, weapons, and physics to create your own scenarios and experiments. But can you download Melon Playground on Mac? The answer is yes, but not directly. In this article, we will show you two ways to play Melon Playground on your Mac computer.
-
What is Melon Playground?
-
Melon Playground is a simulation game developed by TwentySeven, a studio based in Russia. It is available for Android devices on Google Play Store. The game has over 10 million downloads and a rating of 2.9 stars out of 5.
Melon Playground is a sandbox game, which means that there is no fixed goal or objective. You can do whatever you want with the items and tools provided in the game. You can create your own scenes, characters, and stories. You can also destroy, explode, burn, freeze, or electrocute anything you see.
-
Features and gameplay of Melon Playground
-
Melon Playground has a wide variety of items at your disposal, such as melee weapons, guns, barrels, explosives, vehicles, animals, humans, robots, zombies, aliens, and more. You can also customize the items by changing their color, size, shape, texture, and properties. You can also use physics to make the items interact with each other in realistic or unrealistic ways.
-
The game has a simple and intuitive interface that allows you to drag and drop items from the menu to the scene. You can also use gestures to rotate, zoom, or move the camera. You can also pause, resume, or reset the scene at any time. The game also has a screenshot and video recording feature that lets you capture and share your creations with others.
-
How to Download Melon Playground on Mac?
-
Since Melon Playground is an Android game, you cannot download it directly from the App Store or run it natively on your Mac. However, there are two ways to play Melon Playground on your Mac using either an Android emulator or a cloud gaming service.
-
Option 1: Use an Android emulator
-
An Android emulator is software that simulates an Android device on your computer. It allows you to run Android apps and games on your Mac as if you were using an actual Android device.
-
How to install melon playground on mac with bluestacks
-Melon playground app store download for mac
-Melon playground mac compatibility and requirements
-Melon playground simulation game for mac
-Melon playground sandbox game for mac
-Melon playground weapons and tools for mac
-Melon playground no ads subscription for mac
-Melon playground privacy policy and data usage for mac
-Melon playground developer and support for mac
-Melon playground reviews and ratings for mac
-Melon playground update and new features for mac
-Melon playground tips and tricks for mac
-Melon playground cheats and hacks for mac
-Melon playground gameplay and videos for mac
-Melon playground alternatives and similar games for mac
-Melon playground free download and play for mac
-Melon playground online and offline mode for mac
-Melon playground multiplayer and co-op mode for mac
-Melon playground custom scenarios and maps for mac
-Melon playground best weapons and tools for mac
-Melon playground bugs and glitches for mac
-Melon playground fixes and solutions for mac
-Melon playground system performance and optimization for mac
-Melon playground keyboard and mouse controls for mac
-Melon playground gamepad support and settings for mac
-Melon playground graphics and sound quality for mac
-Melon playground fun and relaxing game for mac
-Melon playground stress relief and outlet game for mac
-Melon playground destruction and chaos game for mac
-Melon playground explosions and fire effects for mac
-Melon playground ragdoll physics and animations for mac
-Melon playground barrels and grenades for mac
-Melon playground machine guns and rifles for mac
-Melon playground melee weapons and swords for mac
-Melon playground dummies and targets for mac
-Melon playground challenges and achievements for mac
-Melon playground leaderboards and rankings for mac
-Melon playground social media and community for mac
-Melon playground feedback and suggestions for mac
-Melon playground questions and answers for mac
-Is melon playground safe and secure for mac?
-Is melon playground compatible with macOS 11 or later?
-Is melon playground available on the Mac App Store?
-Is melon playground free or paid on the Mac App Store?
-Is melon playground worth downloading on the Mac App Store?
-Is melon playground fun and addictive on the Mac App Store?
-Is melon playground easy to install on the Mac App Store?
-Is melon playground updated regularly on the Mac App Store?
-
What is an Android emulator and how does it work?
-
An Android emulator is software that creates a virtual environment that mimics the hardware and software of an Android device. It runs an Android operating system (OS) on your computer and lets you access the Google Play Store and other Android services. You can then download and install any Android app or game on your computer and run it using the emulator.
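BlueStacks and NoxPlayer wrap all of this in a graphical installer, but the same idea can be reproduced with Google's own command-line tooling, which makes the moving parts easier to see. The sketch below assumes the Android SDK command-line tools (sdkmanager, avdmanager, emulator) are installed and on your PATH; the virtual-device name and system-image identifier are illustrative assumptions, not values taken from this article.
```python
# A minimal sketch of creating and booting an Android Virtual Device (AVD)
# with the official SDK tools; the AVD name and system image are assumptions.
import subprocess

AVD_NAME = "test-device"
SYSTEM_IMAGE = "system-images;android-30;google_apis;x86_64"

# Download the system image (sdkmanager may ask you to accept licences first).
subprocess.run(["sdkmanager", SYSTEM_IMAGE], check=True)

# Create the virtual device; answer "no" to the custom hardware profile prompt.
subprocess.run(
    ["avdmanager", "create", "avd", "-n", AVD_NAME, "-k", SYSTEM_IMAGE],
    input="no\n", text=True, check=True,
)

# Boot the emulator; once it is running, an APK can be sideloaded with adb install.
subprocess.run(["emulator", "-avd", AVD_NAME], check=True)
```
A desktop emulator such as BlueStacks hides these steps behind a one-click installer, which is why it is the easier option for most players.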
-
Pros and cons of using an Android emulator
-
Using an Android emulator has some advantages and disadvantages. Here are some of them:
Pros:
-
-
You can play Melon Playground and other Android games on your Mac without buying an Android device.
-
You can use your keyboard and mouse to control the game, which may be more comfortable and convenient than using a touchscreen.
-
You can customize the emulator settings to optimize the performance and graphics of the game according to your preferences and system specifications.
-
-
Cons:
-
-
You need to download and install an Android emulator on your Mac, which may take up some storage space and memory.
-
You may experience some lag, glitches, or compatibility issues when running the game on the emulator, especially if your Mac is not powerful enough or if the emulator is not updated.
-
You may violate the terms and conditions of the game developer or the Google Play Store by using an emulator, which may result in a ban or a suspension of your account.
-
-
Steps to download and install an Android emulator and Melon Playground on Mac
-
There are many Android emulators available for Mac, such as BlueStacks, NoxPlayer, MEmu, LDPlayer, and more. You can choose any emulator that suits your needs and preferences. Here are the general steps to download and install an Android emulator and Melon Playground on Mac:
-
-
Go to the official website of the Android emulator you want to use and download the installer file for Mac.
-
Run the installer file and follow the instructions to install the emulator on your Mac.
-
Launch the emulator and sign in with your Google account. If you don't have one, you can create one for free.
-
Open the Google Play Store app on the emulator and search for Melon Playground. Alternatively, you can download the APK file of Melon Playground from a third-party source and drag and drop it to the emulator.
-
Click on the Install button to download and install Melon Playground on the emulator.
-
Once the installation is complete, you can launch Melon Playground from the emulator home screen or app drawer and start playing.
-
-
Option 2: Use a cloud gaming service
-
A cloud gaming service is a service that allows you to play games online without downloading or installing them on your device. It streams the game from a remote server to your device over the internet. You can play any game on any device as long as you have a stable internet connection and a compatible browser or app.
-
What is a cloud gaming service and how does it work?
-
A cloud gaming service is a service that uses cloud computing technology to run games on powerful servers and deliver them to your device via streaming. It works like Netflix or YouTube, but for games. You don't need to download or install anything on your device, you just need to access the service through a web browser or an app and choose the game you want to play. The service will then stream the game to your device in real-time, allowing you to control it with your keyboard, mouse, touchpad, or controller.
-
Pros and cons of using a cloud gaming service
-
Using a cloud gaming service has some advantages and disadvantages. Here are some of them:
-
Pros:
-
-
You can play Melon Playground and other games on your Mac without downloading or installing anything on your device, saving storage space and memory.
-
You can play games that are not compatible with your device or operating system, such as Android games on Mac.
-
You can enjoy high-quality graphics and performance of games without worrying about your device's specifications or settings.
-
-
Cons:
-
-
You need a fast and reliable internet connection to play games online without lag or interruption.
-
You may need to pay a subscription fee or buy credits to access some cloud gaming services or games.
-
You may not be able to access some features or functions of games that are available offline, such as saving progress, customizing settings, or playing multiplayer modes.
-
-
Steps to sign up for a cloud gaming service and play Melon Playground online on Mac
-
There are many cloud gaming services available online, such as Google Stadia, NVIDIA GeForce Now, Amazon Luna, Microsoft xCloud, Vortex, Shadow, and more. You can choose any service that offers Melon Playground or other games that you want to play. Here are the general steps to sign up for a cloud gaming service and play Melon Playground online on Mac:
-
-
Go to the official website of the cloud gaming service you want to use and create an account. You may need to provide your email, password, payment method, and other personal information.
-
Choose a subscription plan or buy credits that suit your budget and gaming needs. Some services may offer a free trial or a limited number of games for free.
-
Download and install the app of the cloud gaming service on your Mac, or access the service through a web browser that supports streaming, such as Chrome, Safari, or Firefox.
-
Launch the app or the browser and sign in with your account. You will see a library of games that you can play online.
-
Search for Melon Playground or browse the categories and genres to find it. Click on the Play button to start streaming the game to your Mac.
-
Enjoy playing Melon Playground online on your Mac with high-quality graphics and performance.
-
-
Conclusion
-
Melon Playground is a fun and creative sandbox game that lets you play with various items, weapons, and physics. However, since it is an Android game, you cannot download it directly on your Mac. You need to use either an Android emulator or a cloud gaming service to play Melon Playground on your Mac.
-
Both options have their pros and cons, so you need to consider your preferences, budget, and internet connection before choosing one. If you want to play Melon Playground offline or customize the game settings, you may prefer using an Android emulator. If you want to play Melon Playground online or enjoy high-quality graphics and performance, you may prefer using a cloud gaming service.
-
Whichever option you choose, we hope that this article has helped you learn how to download Melon Playground on Mac. Now you can enjoy creating and destroying anything you want with Melon Playground on your Mac computer.
-
FAQs
-
Is Melon Playground free to play?
-
Yes, Melon Playground is free to download and play on Android devices. However, it may contain ads and in-app purchases that require real money. If you use an Android emulator or a cloud gaming service to play Melon Playground on Mac, you may also need to pay for the emulator or the service.
-
Is Melon Playground safe to download and play?
-
Yes, Melon Playground is safe to download and play if you get it from the official Google Play Store or a trusted third-party source. However, you should be careful when using an Android emulator or a cloud gaming service, as they may pose some security risks or violate some terms and conditions. You should always use a reputable emulator or service and protect your device and account with antivirus software and strong passwords.
-
Is Melon Playground compatible with other devices?
-
Melon Playground is compatible with most Android devices that have Android 4.4 or higher. However, some devices may not support the game or run it smoothly due to different specifications or settings. You can also play Melon Playground on other devices such as Windows PC, Mac, iOS, or Linux using an Android emulator or a cloud gaming service.
-
Can I play Melon Playground offline?
-
Yes, you can play Melon Playground offline on your Android device without an internet connection. However, you may not be able to access some features or functions of the game that require online services, such as updating the game, downloading new items, or sharing your creations. If you use an Android emulator or a cloud gaming service to play Melon Playground on Mac, you will need an internet connection to run the emulator or the service.
-
Can I create my own items and scenarios in Melon Playground?
-
Yes, you can create your own items and scenarios in Melon Playground using the customization tools provided in the game. You can change the color, size, shape, texture, and properties of any item in the game. You can also use physics to make the items interact with each other in realistic or unrealistic ways. You can then save your creations and share them with others through screenshots or videos.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md b/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md
deleted file mode 100644
index fb84d089dc86398d04d8abd76ae275d5ecbec6e9..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Rise of the Kings Mod Apk OBB Build Your Empire and Defeat Your Enemies.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
Rise of the Kings Mod APK + OBB: A Guide for Strategy Game Lovers
-
If you are a fan of strategy games, you might have heard of Rise of the Kings, a popular online multiplayer game that lets you build your own empire, recruit and train your army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities. In this article, we will tell you everything you need to know about Rise of the Kings, and how you can download and install Rise of the Kings Mod APK + OBB, a modified version of the game that gives you unlimited money, free VIP privileges, and no ads. Read on to find out more.
Rise of the Kings is a strategy game developed by ONEMT, a Chinese game studio that specializes in creating immersive and realistic games for mobile devices. The game was released in 2016 and has since attracted millions of players from all over the world. The game is set in a fantasy world where several kingdoms are vying for power and glory. You play as a lord who has to build your own empire, recruit and train your army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities.
-
Features of Rise of the Kings
-
Rise of the Kings has many features that make it one of the best strategy games on the market. Here are some of them:
-
Build your empire
-
You start with a small castle and some resources, and you have to expand your territory, upgrade your buildings, research new technologies, and manage your economy. You can choose from different types of buildings, such as farms, mines, barracks, workshops, academies, and more. You can also customize your castle with various decorations and designs.
-
Recruit and train your army
-
You can recruit different types of units, such as infantry, cavalry, archers, siege weapons, and more. You can also train them to improve their skills and abilities. You can also recruit legendary heroes who have unique talents and powers. You can equip them with weapons, armor, accessories, and mounts. You can also form teams with other players to create powerful formations.
-
Forge alliances and fight enemies
-
You can join or create an alliance with other players to cooperate and communicate with them. You can share resources, information, strategies, and troops with your allies. You can also participate in alliance events, such as wars, raids, rallies, quests, and more. You can also compete with other alliances for territory, resources, honor, and rewards. You can also fight against other players in PvP battles or against NPC enemies in PvE battles.
-
Rise of the Kings Mod Apk 1.9.32 Full + OBB Data
-Download Rise of the Kings Mod Apk + OBB for Android
-How to Install Rise of the Kings Mod Apk + OBB on PC
-Rise of the Kings Mod Apk + OBB Unlimited Gems and Gold
-Rise of the Kings Mod Apk + OBB Latest Version 2023
-Rise of the Kings Mod Apk + OBB Offline Mode
-Rise of the Kings Mod Apk + OBB Hack and Cheats
-Rise of the Kings Mod Apk + OBB Gameplay and Review
-Rise of the Kings Mod Apk + OBB Free Download Link
-Rise of the Kings Mod Apk + OBB No Root Required
-Rise of the Kings Mod Apk + OBB Features and Benefits
-Rise of the Kings Mod Apk + OBB Tips and Tricks
-Rise of the Kings Mod Apk + OBB Best Strategy and Guide
-Rise of the Kings Mod Apk + OBB Compatible Devices and Requirements
-Rise of the Kings Mod Apk + OBB Update and Patch Notes
-Rise of the Kings Mod Apk + OBB Customer Support and Feedback
-Rise of the Kings Mod Apk + OBB Bug Fixes and Improvements
-Rise of the Kings Mod Apk + OBB Comparison and Alternatives
-Rise of the Kings Mod Apk + OBB Pros and Cons
-Rise of the Kings Mod Apk + OBB FAQ and Troubleshooting
-Rise of the Kings Mod Apk + OBB Forum and Community
-Rise of the Kings Mod Apk + OBB Wiki and Database
-Rise of the Kings Mod Apk + OBB Codes and Rewards
-Rise of the Kings Mod Apk + OBB Events and Promotions
-Rise of the Kings Mod Apk + OBB News and Updates
-
Explore a vast world
-
You can explore a vast world full of mysteries and surprises. You can discover new lands, resources, monsters, treasures, secrets, and more. You can also interact with other players in various ways, such as trading, chatting, gifting, spying, attacking, defending, and more. You can also experience different events and scenarios that change according to real-time situations.
-
What is Rise of the Kings Mod APK + OBB?
-
Rise of the Kings Mod APK + OBB is a modified version of the original game that gives you some advantages that are not available in the original game. These advantages include unlimited money, free VIP privileges, and no ads. With these benefits, you can enjoy the game without any limitations or interruptions. You can buy anything you want, access exclusive features, and play the game smoothly and comfortably.
-
Benefits of Rise of the Kings Mod APK + OBB
-
Here are some of the benefits of Rise of the Kings Mod APK + OBB that you can enjoy:
-
Unlimited money
-
Money is the main currency in the game that you can use to buy various items, such as resources, equipment, boosts, and more. With unlimited money, you can buy anything you want without worrying about running out of money. You can also speed up your progress and development by using money to upgrade your buildings, research new technologies, and train your army faster.
-
Free VIP privileges
-
VIP is a special status in the game that gives you access to exclusive features and benefits, such as extra resources, faster construction and research, more troops and heroes, and more. Normally, you have to pay real money to get VIP privileges or earn them by completing certain tasks. With Rise of the Kings Mod APK + OBB, you can get free VIP privileges without spending any money or doing any work. You can enjoy all the perks of being a VIP without any hassle.
-
No ads
-
Ads are annoying and distracting interruptions that can ruin your gaming experience. They can pop up at any time and force you to watch them or close them. They can also consume your data and battery life. With Rise of the Kings Mod APK + OBB, you can get rid of all the ads in the game and play the game without any interruptions or distractions. You can focus on your strategy and enjoy the game fully.
-
How to download and install Rise of the Kings Mod APK + OBB?
-
If you are interested in downloading and installing Rise of the Kings Mod APK + OBB, you need to follow some simple steps. Here are the steps you need to follow:
-
Steps to download and install Rise of the Kings Mod APK + OBB
-
Before you start, make sure you have enough storage space on your device and a stable internet connection.
-
Download the files from a trusted source
-
The first step is to download the files from a trusted source. You can find many websites that offer Rise of the Kings Mod APK + OBB files for free, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. To avoid this, you should only download the files from a trusted source that has positive reviews and feedback from other users. You can use this link to download the files safely and securely.
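One practical way to reduce that risk is to compare the downloaded files against a checksum published by the site you trust before installing anything. The short Python sketch below shows the idea; the file name and the expected SHA-256 value are placeholders, since they depend entirely on where you download from.
```python
# A minimal sketch: verify a download against a published SHA-256 checksum.
# Both the file name and EXPECTED_SHA256 below are placeholders, not real values.
import hashlib

FILE_NAME = "rise-of-the-kings-mod.apk"
EXPECTED_SHA256 = "paste-the-checksum-published-by-the-download-site-here"

def sha256_of(path: str) -> str:
    """Hash the file in 1 MB chunks so large downloads do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(FILE_NAME)
print("Checksum OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```
If the values do not match, delete the file and download it again from a source you trust.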
-
Enable unknown sources on your device
-
The second step is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official Google Play Store. Since Rise of the Kings Mod APK + OBB is not available on the Play Store, you need to enable unknown sources to install it on your device. To do this, go to your device settings > security > unknown sources > enable.
-
Install the APK file and copy the OBB file to the Android/obb folder
-
The third step is to install the APK file and copy the OBB file to the Android/obb folder. The APK file is the application file that contains the game data and code. The OBB file is the additional data file that contains the game graphics and sounds. To install the APK file, locate it in your device storage and tap on it. Follow the instructions on the screen to complete the installation. To copy the OBB file, locate it in your device storage and move it to the Android/obb folder. If you don't have this folder, create it manually.
-
Launch the game and enjoy
-
The final step is to launch the game and enjoy. To launch the game, go to your app drawer and tap on the Rise of the Kings icon. You will see the game loading screen and then the main menu. You can now enjoy the game with unlimited money, free VIP privileges, and no ads. You can also access all the features and content of the game without any restrictions or limitations.
-
Conclusion
-
Rise of the Kings is a strategy game that lets you build your own empire, recruit and train your army, forge alliances and fight enemies, and explore a vast world full of challenges and opportunities. It is a fun and addictive game that will keep you entertained for hours. However, if you want to enjoy the game without any limitations or interruptions, you should download and install Rise of the Kings Mod APK + OBB, a modified version of the game that gives you unlimited money, free VIP privileges, and no ads. With these benefits, you can buy anything you want, access exclusive features, and play the game smoothly and comfortably. To download and install Rise of the Kings Mod APK + OBB, you just need to follow some simple steps that we have explained in this article. We hope this article was helpful and informative for you. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about Rise of the Kings Mod APK + OBB:
-
Is Rise of the Kings Mod APK + OBB safe to use?
-
Yes, Rise of the Kings Mod APK + OBB is safe to use as long as you download it from a trusted source. We have provided a link to download the files safely and securely in this article. However, you should always be careful when downloading and installing any modded or hacked apps from unknown sources, as they may contain viruses, malware, or spyware that can harm your device or steal your personal information.
-
Is Rise of the Kings Mod APK + OBB compatible with my device?
-
Rise of the Kings Mod APK + OBB is compatible with most Android devices that run on Android 4.0.3 or higher. However, some devices may not support the game due to hardware or software limitations. To check if your device is compatible with the game, you can visit the official Google Play Store page of Rise of the Kings and see if your device is listed among the supported devices.
-
Will I get banned for using Rise of the Kings Mod APK + OBB?
-
There is a low risk of getting banned for using Rise of the Kings Mod APK + OBB, as the modded version does not interfere with the game servers or other players' accounts. However, you should always use the modded version at your own risk and discretion, as we cannot guarantee that it will work flawlessly or that it will not be detected by the game developers or moderators. If you want to avoid any potential issues or consequences, you should play the game with the original version.
-
Can I update Rise of the Kings Mod APK + OBB?
-
No, you cannot update Rise of the Kings Mod APK + OBB through the Google Play Store or any other official source. If you try to do so, you will lose all the benefits of the modded version and revert back to the original version. If you want to update the game, you will have to download and install the latest version of Rise of the Kings Mod APK + OBB from the same source that you downloaded it from. You may also have to uninstall the previous version of the game before installing the new one.
-
Can I play Rise of the Kings Mod APK + OBB offline?
-
No, you cannot play Rise of the Kings Mod APK + OBB offline, as the game requires an internet connection to run and function properly. The game is an online multiplayer game that connects you with other players from all over the world. You need an internet connection to access the game servers, chat with other players, participate in alliance events, and more. If you try to play the game offline, you will not be able to load the game or access any of its features.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md b/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md
deleted file mode 100644
index 16c3f4a5d5ef73843fc199730338d92c3a0166fe..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/The Walking Dead How to Survive a Walker Attack.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
The Walking Dead: A Guide to the Post-Apocalyptic Horror Series
-
If you are a fan of horror, drama, and zombies, you might have heard of The Walking Dead, one of the most popular and acclaimed TV shows of the last decade. But what is The Walking Dead exactly, and why should you watch it? In this article, we will give you a comprehensive guide to the post-apocalyptic horror series, covering its origins, characters, themes, and more. Whether you are a newcomer or a longtime fan, we hope this article will help you enjoy The Walking Dead even more.
-
What is The Walking Dead?
-
The Walking Dead is a multimedia franchise that revolves around a group of survivors in a world overrun by zombies, or "walkers" as they are called in the series. The franchise consists of comic books, TV shows, video games, novels, webisodes, and movies. Here are some of the main components of the franchise:
The Walking Dead comic book series
-
The original source material for The Walking Dead is a comic book series created by writer Robert Kirkman and artists Tony Moore and Charlie Adlard. The series began in 2003 and ended in 2019, with 193 issues published. It follows the journey of Rick Grimes, a former sheriff's deputy who wakes up from a coma to find himself in a zombie apocalypse. He meets other survivors and forms a community with them, facing various threats from both the living and the dead. The comic is known for its dark, gritty, and realistic tone, as well as its shocking twists and deaths.
-
The Walking Dead TV series
-
The most popular adaptation of The Walking Dead is a TV series that premiered on AMC in 2010 and will conclude in 2022, with 11 seasons and 177 episodes. The TV series is developed by Frank Darabont and has several executive producers, including Kirkman, Gale Anne Hurd, David Alpert, Greg Nicotero, Scott M. Gimple, Angela Kang, and others. The TV series follows a similar storyline as the comic book series, but also introduces new characters, locations, and events. The TV series is praised for its compelling characters, performances, action sequences, makeup effects, and social commentary.
-
The Walking Dead franchise
-
Besides the comic book series and the TV series, The Walking Dead has expanded into a larger franchise that includes several spin-offs and movies. Some of the notable spin-offs are:
-
-
Fear the Walking Dead, a companion TV series that explores the onset of the zombie apocalypse from different perspectives.
-
The Walking Dead: World Beyond, a limited TV series that focuses on the first generation of survivors who grew up in the zombie apocalypse.
-
Tales of the Walking Dead, an upcoming anthology TV series that will feature standalone stories about new and existing characters.
-
The Walking Dead: A New Frontier, a video game developed by Telltale Games that follows a new group of survivors in a post-apocalyptic world.
-
The Road to Woodbury, a novel written by Kirkman and Jay Bonansinga that tells the backstory of the Governor, one of the main antagonists of the series.
-
-
Some of the upcoming movies are:
-
-
The Walking Dead, a trilogy of movies that will continue the story of Rick Grimes after his disappearance from the TV series.
-
Untitled Daryl and Carol Movie, a movie that will follow the adventures of Daryl Dixon and Carol Peletier after the end of the TV series.
-
-
Who are the main characters of The Walking Dead?
-
One of the strengths of The Walking Dead is its large and diverse cast of characters, who have different backgrounds, personalities, skills, and motivations. The characters evolve and change over time, as they face various challenges and dilemmas in the zombie apocalypse. Some of the main characters of The Walking Dead are:
-
Rick Grimes
-
Rick Grimes is the protagonist and leader of the group of survivors. He is a former sheriff's deputy who wakes up from a coma to find himself in a zombie apocalypse. He is determined to protect his family and friends, and to find a safe place to live. He is brave, loyal, resourceful, and compassionate, but also ruthless, pragmatic, and sometimes conflicted. He often struggles with his role as a leader, as he has to make difficult decisions that affect the lives of others. He is played by Andrew Lincoln in the TV series.
-
Daryl Dixon
-
Daryl Dixon is one of the most popular and beloved characters of The Walking Dead. He is a skilled hunter, tracker, and fighter, who uses a crossbow as his signature weapon. He is loyal, brave, independent, and resilient, but also introverted, guarded, and sometimes volatile. He has a close bond with his brother Merle, who is often a source of conflict for him. He also develops a strong friendship with Carol Peletier, who helps him open up and heal from his past traumas. He is played by Norman Reedus in the TV series.
-
Carol Peletier
-
Carol Peletier is one of the longest-surviving characters of The Walking Dead. She starts off as a timid and abused housewife, who loses her husband and daughter in the zombie apocalypse. She then transforms into a strong, confident, and capable survivor, who is willing to do anything to protect her group. She is smart, resourceful, strategic, and compassionate, but also ruthless, cold, and sometimes manipulative. She has a close friendship with Daryl Dixon, who supports her through her losses and struggles. She is played by Melissa McBride in the TV series.
-
Negan
-
Negan is one of the most notorious and controversial characters of The Walking Dead. He is the leader of the Saviors, a large group of survivors who extort other communities for resources in exchange for protection from the walkers. He is charismatic, witty, sadistic, and violent, and he wields a baseball bat wrapped in barbed wire, named Lucille, as his weapon of choice. He kills several members of Rick's group in a brutal way, sparking a war between them. He is later captured by Rick and imprisoned for years, until he redeems himself by helping them fight against other enemies. He is played by Jeffrey Dean Morgan in the TV series.
-
What are the main themes of The Walking Dead?
-
The Walking Dead is not just about zombies and gore. It is also about exploring various themes that are relevant to our society and humanity. Some of the main themes of The Walking Dead are:
-
Survival
-
The most obvious theme of The Walking Dead is survival. The characters have to survive not only from the walkers, but also from other humans who pose threats to them. They have to find food, water, shelter, weapons, medicine, and other resources to stay alive. They also have to deal with injuries, diseases, infections, hunger, thirst, fatigue, and stress. They have to adapt to different environments and situations, and overcome various obstacles and challenges. Survival is not easy or guaranteed in The Walking Dead, as many characters die or disappear along the way.
-
Humanity
-
Another theme of The Walking Dead is humanity. The characters have to question what it means to be human in a world where humanity seems to be lost or corrupted. They have to face moral dilemmas and ethical choices that test their values and principles. They have to balance their individual needs against their group interests, and their personal feelings against their rational judgments. They have to cope with their emotions, such as fear, anger, grief, guilt, and hope. They have to maintain their sanity, dignity, and identity in a chaotic and cruel world. They have to find meaning and purpose in their existence, and to preserve their values and beliefs.
-
Leadership
-
A third theme of The Walking Dead is leadership. The characters have to decide who to follow and who to trust in a world where authority and order are gone or corrupted. They have to deal with different types of leaders, such as Rick Grimes, who is a democratic and benevolent leader, Negan, who is a tyrannical and oppressive leader, and the Governor, who is a charismatic and manipulative leader. They also have to face the challenges and responsibilities of being a leader themselves, such as making decisions, resolving conflicts, inspiring others, and facing consequences.
-
Morality
-
A fourth theme of The Walking Dead is morality. The characters have to determine what is right and wrong in a world where morality seems to be relative or irrelevant. They have to confront the ethical implications of their actions and choices, such as killing, stealing, lying, betraying, sacrificing, and forgiving. They have to deal with the moral gray areas and ambiguities that arise in the zombie apocalypse, such as whether to kill walkers or spare them, whether to help strangers or ignore them, whether to cooperate with other groups or compete with them, and whether to follow the rules or break them.
-
Why should you watch The Walking Dead?
-
Now that you know what The Walking Dead is about, you might be wondering why you should watch it. Here are some of the reasons why The Walking Dead is worth watching:
-
The Walking Dead is thrilling and suspenseful
-
If you like horror and action, you will love The Walking Dead. The series is full of thrilling and suspenseful scenes that will keep you on the edge of your seat. You will witness the characters fighting against hordes of walkers, escaping from dangerous situations, encountering new enemies, and facing unexpected twists. You will also enjoy the stunning visuals, sound effects, and music that create a tense and immersive atmosphere.
-
The Walking Dead is emotional and character-driven
-
If you like drama and character development, you will love The Walking Dead. The series is not just about zombies and violence. It is also about the human stories and relationships that emerge in the zombie apocalypse. You will get to know the characters deeply, their backgrounds, personalities, motivations, and goals. You will care about them, root for them, cry for them, and sometimes hate them. You will also witness their growth and change over time, as they face various challenges and dilemmas.
-
The Walking Dead is creative and diverse
-
If you like creativity and diversity, you will love The Walking Dead. The series is not just a repetitive or predictable show. It is constantly evolving and expanding its scope and scale. You will explore different settings and locations, such as Atlanta, Hershel's farm, the prison, Terminus, Alexandria, the Kingdom, the Hilltop, the Sanctuary , and Oceanside. You will meet different groups and communities, such as the Survivors, the Saviors, the Whisperers, the Commonwealth, and the CRM. You will encounter different types of walkers, such as the roamers, the lurkers, the herd, the spiked, and the radioactive. You will also enjoy the various spin-offs and movies that expand the universe and explore new stories and characters.
-
How to watch The Walking Dead?
-
If you are interested in watching The Walking Dead, you might be wondering how to do it. Here are some of the ways you can watch The Walking Dead:
-
The Walking Dead streaming platforms
-
The easiest way to watch The Walking Dead is to stream it online. You can find all the episodes of the TV series on various streaming platforms, such as Netflix, Hulu, Amazon Prime Video, AMC+, and others. You can also find some of the spin-offs and movies on these platforms, or on other platforms, such as YouTube, iTunes, Google Play, and others. You can choose the platform that suits your preferences and budget, and enjoy The Walking Dead anytime and anywhere.
-
The Walking Dead spin-offs and movies
-
Another way to watch The Walking Dead is to watch its spin-offs and movies. You can find some of the spin-offs on the same streaming platforms as the TV series, or on other platforms, such as AMC's website or app. You can also find some of the movies on these platforms, or on other platforms, such as theaters or cable TV. You can choose the spin-off or movie that interests you, and enjoy a different perspective or experience of The Walking Dead.
-
Conclusion
-
In conclusion, The Walking Dead is a post-apocalyptic horror series that has captivated millions of fans around the world. It is a franchise that consists of comic books, TV shows, video games, novels, webisodes, and movies. It is a series that features a large and diverse cast of characters, who have to survive in a world overrun by zombies and other threats. It is a series that explores various themes that are relevant to our society and humanity. It is a series that is thrilling, emotional, creative, and diverse. It is a series that you should watch if you are a fan of horror, drama, and zombies.
-
Here are some FAQs about The Walking Dead:
-
-
Q: When will The Walking Dead end?
-
A: The Walking Dead TV series will end in 2022, with its 11th and final season. However, the franchise will continue with its spin-offs and movies.
-
Q: How faithful is The Walking Dead TV series to the comic book series?
-
A: The Walking Dead TV series is based on the comic book series, but it also deviates from it in many ways. Some of the differences include new characters, locations, events, deaths, and endings.
-
Q: Who are the Whisperers?
-
A: The Whisperers are one of the main antagonists of The Walking Dead. They are a group of survivors who wear walker skins to blend in with them and to communicate with them. They are led by Alpha and Beta.
-
Q: What is CRM?
-
A: CRM is an acronym for Civic Republic Military. It is a mysterious and powerful organization that has access to advanced technology and resources. It is involved in the disappearance of Rick Grimes and the experiments on walkers.
-
Q: What is Lucille?
-
A: Lucille is the name of Negan's baseball bat wrapped in barbed wire. It is also the name of his late wife, who died of cancer before the zombie apocalypse.
-
-
-
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py b/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py
deleted file mode 100644
index d5f0a5377ec9492c5ed55ceb3ce5a4378cbb8e3c..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/models/vision_transformer_hybrid.py
+++ /dev/null
@@ -1,363 +0,0 @@
-""" Hybrid Vision Transformer (ViT) in PyTorch
-
-A PyTorch implementation of the Hybrid Vision Transformers as described in:
-
-'An Image Is Worth 16 x 16 Words: Transformers for Image Recognition at Scale'
- - https://arxiv.org/abs/2010.11929
-
-`How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers`
- - https://arxiv.org/abs/2106.TODO
-
-NOTE These hybrid model definitions depend on code in vision_transformer.py.
-They were moved here to keep file sizes sane.
-
-Hacked together by / Copyright 2021 Ross Wightman
-"""
-from copy import deepcopy
-from functools import partial
-
-import torch
-import torch.nn as nn
-
-from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
-from .layers import StdConv2dSame, StdConv2d, to_2tuple
-from .resnet import resnet26d, resnet50d
-from .resnetv2 import ResNetV2, create_resnetv2_stem
-from .registry import register_model
-from timm.models.vision_transformer import _create_vision_transformer
-
-
-def _cfg(url='', **kwargs):
- return {
- 'url': url,
- 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
- 'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
- 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
- 'first_conv': 'patch_embed.backbone.stem.conv', 'classifier': 'head',
- **kwargs
- }
-
-
-default_cfgs = {
- # hybrid in-1k models (weights from official JAX impl where they exist)
- 'vit_tiny_r_s16_p8_224': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_224.npz',
- first_conv='patch_embed.backbone.conv'),
- 'vit_tiny_r_s16_p8_384': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz',
- first_conv='patch_embed.backbone.conv', input_size=(3, 384, 384), crop_pct=1.0),
- 'vit_small_r26_s32_224': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R26_S_32-i21k-300ep-lr_0.001-aug_light0-wd_0.03-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.03-res_224.npz',
- ),
- 'vit_small_r26_s32_384': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R26_S_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz',
- input_size=(3, 384, 384), crop_pct=1.0),
- 'vit_base_r26_s32_224': _cfg(),
- 'vit_base_r50_s16_224': _cfg(),
- 'vit_base_r50_s16_384': _cfg(
- url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_384-9fd3c705.pth',
- input_size=(3, 384, 384), crop_pct=1.0),
- 'vit_large_r50_s32_224': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_224.npz'
- ),
- 'vit_large_r50_s32_384': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/'
- 'R50_L_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz',
- input_size=(3, 384, 384), crop_pct=1.0
- ),
-
- # hybrid in-21k models (weights from official Google JAX impl where they exist)
- 'vit_tiny_r_s16_p8_224_in21k': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz',
- num_classes=21843, crop_pct=0.9, first_conv='patch_embed.backbone.conv'),
- 'vit_small_r26_s32_224_in21k': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.03-do_0.0-sd_0.0.npz',
- num_classes=21843, crop_pct=0.9),
- 'vit_base_r50_s16_224_in21k': _cfg(
- url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_224_in21k-6f7c7740.pth',
- num_classes=21843, crop_pct=0.9),
- 'vit_large_r50_s32_224_in21k': _cfg(
- url='https://storage.googleapis.com/vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium2-wd_0.1-do_0.0-sd_0.0.npz',
- num_classes=21843, crop_pct=0.9),
-
- # hybrid models (using timm resnet backbones)
- 'vit_small_resnet26d_224': _cfg(
- mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'),
- 'vit_small_resnet50d_s16_224': _cfg(
- mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'),
- 'vit_base_resnet26d_224': _cfg(
- mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'),
- 'vit_base_resnet50d_224': _cfg(
- mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD, first_conv='patch_embed.backbone.conv1.0'),
-}
-
-
-class HybridEmbed(nn.Module):
- """ CNN Feature Map Embedding
- Extract feature map from CNN, flatten, project to embedding dim.
- """
- def __init__(self, backbone, img_size=224, patch_size=1, feature_size=None, in_chans=3, embed_dim=768):
- super().__init__()
- assert isinstance(backbone, nn.Module)
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- self.img_size = img_size
- self.patch_size = patch_size
- self.backbone = backbone
- if feature_size is None:
- with torch.no_grad():
- # NOTE Most reliable way of determining output dims is to run forward pass
- training = backbone.training
- if training:
- backbone.eval()
- o = self.backbone(torch.zeros(1, in_chans, img_size[0], img_size[1]))
- if isinstance(o, (list, tuple)):
- o = o[-1] # last feature if backbone outputs list/tuple of features
- feature_size = o.shape[-2:]
- feature_dim = o.shape[1]
- backbone.train(training)
- else:
- feature_size = to_2tuple(feature_size)
- if hasattr(self.backbone, 'feature_info'):
- feature_dim = self.backbone.feature_info.channels()[-1]
- else:
- feature_dim = self.backbone.num_features
- assert feature_size[0] % patch_size[0] == 0 and feature_size[1] % patch_size[1] == 0
- self.grid_size = (feature_size[0] // patch_size[0], feature_size[1] // patch_size[1])
- self.num_patches = self.grid_size[0] * self.grid_size[1]
- self.proj = nn.Conv2d(feature_dim, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x):
- x = self.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
- x = self.proj(x).flatten(2).transpose(1, 2)
- return x
-
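-
-# NOTE: an illustrative smoke test added for clarity; it is not part of the original
-# timm module. It only relies on the imports above (torch, resnet26d) and on HybridEmbed.
-def _hybrid_embed_example():
-    # A stride-32 resnet26d backbone turns a 224x224 image into a 7x7 feature map,
-    # which HybridEmbed flattens and projects into 49 patch tokens of dimension 768.
-    backbone = resnet26d(features_only=True, out_indices=[4])
-    embed = HybridEmbed(backbone, img_size=224, patch_size=1, embed_dim=768)
-    tokens = embed(torch.zeros(1, 3, 224, 224))
-    assert tokens.shape == (1, 49, 768)
-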
-
-def _create_vision_transformer_hybrid(variant, backbone, pretrained=False, **kwargs):
- embed_layer = partial(HybridEmbed, backbone=backbone)
- kwargs.setdefault('patch_size', 1) # default patch size for hybrid models if not set
- return _create_vision_transformer(
- variant, pretrained=pretrained, embed_layer=embed_layer, default_cfg=default_cfgs[variant], **kwargs)
-
-
-def _resnetv2(layers=(3, 4, 9), **kwargs):
- """ ResNet-V2 backbone helper"""
- padding_same = kwargs.get('padding_same', True)
- stem_type = 'same' if padding_same else ''
- conv_layer = partial(StdConv2dSame, eps=1e-8) if padding_same else partial(StdConv2d, eps=1e-8)
- if len(layers):
- backbone = ResNetV2(
- layers=layers, num_classes=0, global_pool='', in_chans=kwargs.get('in_chans', 3),
- preact=False, stem_type=stem_type, conv_layer=conv_layer)
- else:
- backbone = create_resnetv2_stem(
- kwargs.get('in_chans', 3), stem_type=stem_type, preact=False, conv_layer=conv_layer)
- return backbone
-
-
-@register_model
-def vit_tiny_r_s16_p8_224(pretrained=False, **kwargs):
- """ R+ViT-Ti/S16 w/ 8x8 patch hybrid @ 224 x 224.
- """
- backbone = _resnetv2(layers=(), **kwargs)
- model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_tiny_r_s16_p8_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_tiny_r_s16_p8_384(pretrained=False, **kwargs):
- """ R+ViT-Ti/S16 w/ 8x8 patch hybrid @ 384 x 384.
- """
- backbone = _resnetv2(layers=(), **kwargs)
- model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_tiny_r_s16_p8_384', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_small_r26_s32_224(pretrained=False, **kwargs):
- """ R26+ViT-S/S32 hybrid.
- """
- backbone = _resnetv2((2, 2, 2, 2), **kwargs)
- model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_small_r26_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_small_r26_s32_384(pretrained=False, **kwargs):
- """ R26+ViT-S/S32 hybrid.
- """
- backbone = _resnetv2((2, 2, 2, 2), **kwargs)
- model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_small_r26_s32_384', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_r26_s32_224(pretrained=False, **kwargs):
- """ R26+ViT-B/S32 hybrid.
- """
- backbone = _resnetv2((2, 2, 2, 2), **kwargs)
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_r26_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_r50_s16_224(pretrained=False, **kwargs):
- """ R50+ViT-B/S16 hybrid from original paper (https://arxiv.org/abs/2010.11929).
- """
- backbone = _resnetv2((3, 4, 9), **kwargs)
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_r50_s16_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_r50_s16_384(pretrained=False, **kwargs):
- """ R50+ViT-B/16 hybrid from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-1k weights fine-tuned from in21k @ 384x384, source https://github.com/google-research/vision_transformer.
- """
- backbone = _resnetv2((3, 4, 9), **kwargs)
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_r50_s16_384', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_resnet50_384(pretrained=False, **kwargs):
- # DEPRECATED this is forwarding to model def above for backwards compatibility
- return vit_base_r50_s16_384(pretrained=pretrained, **kwargs)
-
-
-@register_model
-def vit_large_r50_s32_224(pretrained=False, **kwargs):
- """ R50+ViT-L/S32 hybrid.
- """
- backbone = _resnetv2((3, 4, 6, 3), **kwargs)
- model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_large_r50_s32_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_large_r50_s32_384(pretrained=False, **kwargs):
- """ R50+ViT-L/S32 hybrid.
- """
- backbone = _resnetv2((3, 4, 6, 3), **kwargs)
- model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_large_r50_s32_384', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_tiny_r_s16_p8_224_in21k(pretrained=False, **kwargs):
- """ R+ViT-Ti/S16 w/ 8x8 patch hybrid. ImageNet-21k.
- """
- backbone = _resnetv2(layers=(), **kwargs)
- model_kwargs = dict(patch_size=8, embed_dim=192, depth=12, num_heads=3, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_tiny_r_s16_p8_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_small_r26_s32_224_in21k(pretrained=False, **kwargs):
- """ R26+ViT-S/S32 hybrid. ImageNet-21k.
- """
- backbone = _resnetv2((2, 2, 2, 2), **kwargs)
- model_kwargs = dict(embed_dim=384, depth=12, num_heads=6, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_small_r26_s32_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_r50_s16_224_in21k(pretrained=False, **kwargs):
- """ R50+ViT-B/16 hybrid model from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- """
- backbone = _resnetv2(layers=(3, 4, 9), **kwargs)
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, representation_size=768, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_r50_s16_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_resnet50_224_in21k(pretrained=False, **kwargs):
- # DEPRECATED this is forwarding to model def above for backwards compatibility
- return vit_base_r50_s16_224_in21k(pretrained=pretrained, **kwargs)
-
-
-@register_model
-def vit_large_r50_s32_224_in21k(pretrained=False, **kwargs):
- """ R50+ViT-L/S32 hybrid. ImageNet-21k.
- """
- backbone = _resnetv2((3, 4, 6, 3), **kwargs)
- model_kwargs = dict(embed_dim=1024, depth=24, num_heads=16, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_large_r50_s32_224_in21k', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_small_resnet26d_224(pretrained=False, **kwargs):
- """ Custom ViT small hybrid w/ ResNet26D stride 32. No pretrained weights.
- """
- backbone = resnet26d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4])
- model_kwargs = dict(embed_dim=768, depth=8, num_heads=8, mlp_ratio=3, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_small_resnet26d_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_small_resnet50d_s16_224(pretrained=False, **kwargs):
- """ Custom ViT small hybrid w/ ResNet50D 3-stages, stride 16. No pretrained weights.
- """
- backbone = resnet50d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[3])
- model_kwargs = dict(embed_dim=768, depth=8, num_heads=8, mlp_ratio=3, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_small_resnet50d_s16_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_resnet26d_224(pretrained=False, **kwargs):
- """ Custom ViT base hybrid w/ ResNet26D stride 32. No pretrained weights.
- """
- backbone = resnet26d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4])
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_resnet26d_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
-
-
-@register_model
-def vit_base_resnet50d_224(pretrained=False, **kwargs):
- """ Custom ViT base hybrid w/ ResNet50D stride 32. No pretrained weights.
- """
- backbone = resnet50d(pretrained=pretrained, in_chans=kwargs.get('in_chans', 3), features_only=True, out_indices=[4])
- model_kwargs = dict(embed_dim=768, depth=12, num_heads=12, **kwargs)
- model = _create_vision_transformer_hybrid(
- 'vit_base_resnet50d_224', backbone=backbone, pretrained=pretrained, **model_kwargs)
- return model
\ No newline at end of file
diff --git a/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py b/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py
deleted file mode 100644
index d02e62d2d0ce62e594393014208e28c3ace5318b..0000000000000000000000000000000000000000
--- a/spaces/cooelf/Multimodal-CoT/timm/utils/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .agc import adaptive_clip_grad
-from .checkpoint_saver import CheckpointSaver
-from .clip_grad import dispatch_clip_grad
-from .cuda import ApexScaler, NativeScaler
-from .distributed import distribute_bn, reduce_tensor
-from .jit import set_jit_legacy
-from .log import setup_default_logging, FormatterNoInfo
-from .metrics import AverageMeter, accuracy
-from .misc import natural_key, add_bool_arg
-from .model import unwrap_model, get_state_dict
-from .model_ema import ModelEma, ModelEmaV2
-from .random import random_seed
-from .summary import update_summary, get_outdir
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py
deleted file mode 100644
index 3c0aadb2b097a604d96ba1c99c05663b7884b6e0..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/decode_heads/aspp_head.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import torch
-import torch.nn as nn
-from annotator.mmpkg.mmcv.cnn import ConvModule
-
-from annotator.mmpkg.mmseg.ops import resize
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-class ASPPModule(nn.ModuleList):
- """Atrous Spatial Pyramid Pooling (ASPP) Module.
-
- Args:
- dilations (tuple[int]): Dilation rate of each layer.
- in_channels (int): Input channels.
- channels (int): Channels after modules, before conv_seg.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg,
- act_cfg):
- super(ASPPModule, self).__init__()
- self.dilations = dilations
- self.in_channels = in_channels
- self.channels = channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- for dilation in dilations:
- self.append(
- ConvModule(
- self.in_channels,
- self.channels,
- 1 if dilation == 1 else 3,
- dilation=dilation,
- padding=0 if dilation == 1 else dilation,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
-
- def forward(self, x):
- """Forward function."""
- aspp_outs = []
- for aspp_module in self:
- aspp_outs.append(aspp_module(x))
-
- return aspp_outs
-
-
-@HEADS.register_module()
-class ASPPHead(BaseDecodeHead):
- """Rethinking Atrous Convolution for Semantic Image Segmentation.
-
-    This head is the implementation of `DeepLabV3
-    <https://arxiv.org/abs/1706.05587>`_.
-
- Args:
- dilations (tuple[int]): Dilation rates for ASPP module.
- Default: (1, 6, 12, 18).
- """
-
- def __init__(self, dilations=(1, 6, 12, 18), **kwargs):
- super(ASPPHead, self).__init__(**kwargs)
- assert isinstance(dilations, (list, tuple))
- self.dilations = dilations
- self.image_pool = nn.Sequential(
- nn.AdaptiveAvgPool2d(1),
- ConvModule(
- self.in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.aspp_modules = ASPPModule(
- dilations,
- self.in_channels,
- self.channels,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.bottleneck = ConvModule(
- (len(dilations) + 1) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- aspp_outs = [
- resize(
- self.image_pool(x),
- size=x.size()[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- ]
- aspp_outs.extend(self.aspp_modules(x))
- aspp_outs = torch.cat(aspp_outs, dim=1)
- output = self.bottleneck(aspp_outs)
- output = self.cls_seg(output)
- return output
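-
-
-# NOTE: an illustrative usage sketch added for clarity; it is not part of the original
-# mmseg module and assumes the usual BaseDecodeHead signature, where in_channels,
-# channels and num_classes are required.
-def _aspp_head_example():
-    # Decode a single stride-32 feature map into per-pixel class logits.
-    head = ASPPHead(dilations=(1, 6, 12, 18), in_channels=2048, channels=512,
-                    num_classes=19, norm_cfg=dict(type='BN'))
-    feats = [torch.randn(2, 2048, 8, 8)]  # list of backbone outputs; the last is used
-    logits = head(feats)
-    assert logits.shape == (2, 19, 8, 8)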
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py
deleted file mode 100644
index 59ce30713f63d056107b2a06ecd434eb27a30b7d..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/datasets/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from . import (
- register_ade20k_panoptic,
- register_cityscapes_panoptic,
- register_coco_panoptic_annos_semseg,
- register_ade20k_instance,
- register_coco_panoptic2instance,
-)
diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py b/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py
deleted file mode 100644
index e33bce3c25aa279e7a7fbb0a7998a3f3788e4c25..0000000000000000000000000000000000000000
--- a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/separable_conv.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from typing import Optional
-
-from torch.nn import Sequential, Conv2d, ConvTranspose2d, Module
-
-from tha3.nn.normalization import NormalizationLayerFactory
-from tha3.nn.util import BlockArgs, wrap_conv_or_linear_module
-
-
-def create_separable_conv3(in_channels: int, out_channels: int,
- bias: bool = False,
- initialization_method='he',
- use_spectral_norm: bool = False) -> Module:
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False, groups=in_channels),
- initialization_method,
- use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias),
- initialization_method,
- use_spectral_norm))
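-
-
-# NOTE: an illustrative sketch added for clarity; it is not part of the original tha3
-# module and only relies on the factory function and Conv2d import above.
-def _separable_conv3_example():
-    import torch
-    # A depthwise 3x3 conv followed by a pointwise 1x1 conv needs far fewer parameters
-    # than a dense 3x3 conv with the same channel counts, while keeping the output shape.
-    sep = create_separable_conv3(64, 128)
-    dense = Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
-    n_sep = sum(p.numel() for p in sep.parameters())      # 64*9 + 64*128 = 8768
-    n_dense = sum(p.numel() for p in dense.parameters())  # 128*64*9 = 73728
-    assert n_sep < n_dense
-    assert sep(torch.zeros(1, 64, 32, 32)).shape == (1, 128, 32, 32)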
-
-
-def create_separable_conv7(in_channels: int, out_channels: int,
- bias: bool = False,
- initialization_method='he',
- use_spectral_norm: bool = False) -> Module:
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=7, stride=1, padding=3, bias=False, groups=in_channels),
- initialization_method,
- use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=bias),
- initialization_method,
- use_spectral_norm))
-
-
-def create_separable_conv3_block(
- in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None):
- if block_args is None:
- block_args = BlockArgs()
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False, groups=in_channels),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory).create(out_channels, affine=True),
- block_args.nonlinearity_factory.create())
-
-
-def create_separable_conv7_block(
- in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None):
- if block_args is None:
- block_args = BlockArgs()
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=7, stride=1, padding=3, bias=False, groups=in_channels),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory).create(out_channels, affine=True),
- block_args.nonlinearity_factory.create())
-
-
-def create_separable_downsample_block(
- in_channels: int, out_channels: int, is_output_1x1: bool, block_args: Optional[BlockArgs] = None):
- if block_args is None:
- block_args = BlockArgs()
- if is_output_1x1:
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- block_args.nonlinearity_factory.create())
- else:
- return Sequential(
- wrap_conv_or_linear_module(
- Conv2d(in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory)
- .create(out_channels, affine=True),
- block_args.nonlinearity_factory.create())
-
-
-def create_separable_upsample_block(
- in_channels: int, out_channels: int, block_args: Optional[BlockArgs] = None):
- if block_args is None:
- block_args = BlockArgs()
- return Sequential(
- wrap_conv_or_linear_module(
- ConvTranspose2d(
- in_channels, in_channels, kernel_size=4, stride=2, padding=1, bias=False, groups=in_channels),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- wrap_conv_or_linear_module(
- Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
- block_args.initialization_method,
- block_args.use_spectral_norm),
- NormalizationLayerFactory.resolve_2d(block_args.normalization_layer_factory)
- .create(out_channels, affine=True),
- block_args.nonlinearity_factory.create())
diff --git a/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md b/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md
deleted file mode 100644
index 50c55a51990d9b9881b7d512e52772fac537ee52..0000000000000000000000000000000000000000
--- a/spaces/darkartsaibwd/Envvi-Inkpunk-Diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Envvi Inkpunk Diffusion
-emoji: 🏃
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py
deleted file mode 100644
index db490e4aa52e26fde31959fd74c2cef3af2ecf76..0000000000000000000000000000000000000000
--- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/options.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import yaml
-import time
-from collections import OrderedDict
-from os import path as osp
-from basicsr.utils.misc import get_time_str
-
-def ordered_yaml():
- """Support OrderedDict for yaml.
-
- Returns:
- yaml Loader and Dumper.
- """
- try:
- from yaml import CDumper as Dumper
- from yaml import CLoader as Loader
- except ImportError:
- from yaml import Dumper, Loader
-
- _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG
-
- def dict_representer(dumper, data):
- return dumper.represent_dict(data.items())
-
- def dict_constructor(loader, node):
- return OrderedDict(loader.construct_pairs(node))
-
- Dumper.add_representer(OrderedDict, dict_representer)
- Loader.add_constructor(_mapping_tag, dict_constructor)
- return Loader, Dumper
-
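-
-# NOTE: an illustrative sketch added for clarity; it is not part of the original basicsr
-# module. The Loader returned above preserves key order and yields OrderedDict mappings.
-def _ordered_yaml_example():
-    Loader, _ = ordered_yaml()
-    opt = yaml.load('name: demo\nscale: 4\n', Loader=Loader)
-    assert isinstance(opt, OrderedDict)
-    assert list(opt.keys()) == ['name', 'scale']
-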
-
-def parse(opt_path, root_path, is_train=True):
- """Parse option file.
-
- Args:
-        opt_path (str): Option file path.
-        root_path (str): Root path used to build the experiment/result directories.
-        is_train (bool): Indicate whether in training or not. Default: True.
-
- Returns:
- (dict): Options.
- """
- with open(opt_path, mode='r') as f:
- Loader, _ = ordered_yaml()
- opt = yaml.load(f, Loader=Loader)
-
- opt['is_train'] = is_train
-
- # opt['name'] = f"{get_time_str()}_{opt['name']}"
- if opt['path'].get('resume_state', None): # Shangchen added
- resume_state_path = opt['path'].get('resume_state')
- opt['name'] = resume_state_path.split("/")[-3]
- else:
- opt['name'] = f"{get_time_str()}_{opt['name']}"
-
-
- # datasets
- for phase, dataset in opt['datasets'].items():
- # for several datasets, e.g., test_1, test_2
- phase = phase.split('_')[0]
- dataset['phase'] = phase
- if 'scale' in opt:
- dataset['scale'] = opt['scale']
- if dataset.get('dataroot_gt') is not None:
- dataset['dataroot_gt'] = osp.expanduser(dataset['dataroot_gt'])
- if dataset.get('dataroot_lq') is not None:
- dataset['dataroot_lq'] = osp.expanduser(dataset['dataroot_lq'])
-
- # paths
- for key, val in opt['path'].items():
- if (val is not None) and ('resume_state' in key or 'pretrain_network' in key):
- opt['path'][key] = osp.expanduser(val)
-
- if is_train:
- experiments_root = osp.join(root_path, 'experiments', opt['name'])
- opt['path']['experiments_root'] = experiments_root
- opt['path']['models'] = osp.join(experiments_root, 'models')
- opt['path']['training_states'] = osp.join(experiments_root, 'training_states')
- opt['path']['log'] = experiments_root
- opt['path']['visualization'] = osp.join(experiments_root, 'visualization')
-
- else: # test
- results_root = osp.join(root_path, 'results', opt['name'])
- opt['path']['results_root'] = results_root
- opt['path']['log'] = results_root
- opt['path']['visualization'] = osp.join(results_root, 'visualization')
-
- return opt
-
-
-def dict2str(opt, indent_level=1):
- """dict to string for printing options.
-
- Args:
- opt (dict): Option dict.
- indent_level (int): Indent level. Default: 1.
-
- Return:
- (str): Option string for printing.
- """
- msg = '\n'
- for k, v in opt.items():
- if isinstance(v, dict):
- msg += ' ' * (indent_level * 2) + k + ':['
- msg += dict2str(v, indent_level + 1)
- msg += ' ' * (indent_level * 2) + ']\n'
- else:
- msg += ' ' * (indent_level * 2) + k + ': ' + str(v) + '\n'
- return msg
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py
deleted file mode 100644
index 703dffb3246a32f4734f0653dfcc1aaa0d1d23f9..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/data.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from ..data import (
- MaxRowsError,
- curry,
- default_data_transformer,
- limit_rows,
- pipe,
- sample,
- to_csv,
- to_json,
- to_values,
- DataTransformerRegistry,
-)
-
-
-# ==============================================================================
-# VegaLite 5 data transformers
-# ==============================================================================
-
-
-ENTRY_POINT_GROUP = "altair.vegalite.v5.data_transformer" # type: str
-
-
-data_transformers = DataTransformerRegistry(
- entry_point_group=ENTRY_POINT_GROUP
-) # type: DataTransformerRegistry
-data_transformers.register("default", default_data_transformer)
-data_transformers.register("json", to_json)
-data_transformers.register("csv", to_csv)
-data_transformers.enable("default")
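-
-
-# NOTE: an illustrative sketch added for clarity; it is not part of the original altair
-# module. Switching the active transformer goes through the registry populated above.
-def _data_transformer_example():
-    # enable() also works as a context manager that restores the previous transformer.
-    with data_transformers.enable("json"):
-        assert data_transformers.active == "json"
-    assert data_transformers.active == "default"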
-
-
-__all__ = (
- "MaxRowsError",
- "curry",
- "default_data_transformer",
- "limit_rows",
- "pipe",
- "sample",
- "to_csv",
- "to_json",
- "to_values",
- "data_transformers",
-)
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py
deleted file mode 100644
index 850e6f8882bd3295a01c9285b136dc54c3daa7d3..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio_client/utils.py
+++ /dev/null
@@ -1,575 +0,0 @@
-from __future__ import annotations
-
-import asyncio
-import base64
-import json
-import mimetypes
-import os
-import pkgutil
-import secrets
-import shutil
-import tempfile
-import warnings
-from concurrent.futures import CancelledError
-from dataclasses import dataclass, field
-from datetime import datetime
-from enum import Enum
-from pathlib import Path
-from threading import Lock
-from typing import Any, Callable, Optional
-
-import fsspec.asyn
-import httpx
-import huggingface_hub
-import requests
-from huggingface_hub import SpaceStage
-from websockets.legacy.protocol import WebSocketCommonProtocol
-
-API_URL = "api/predict/"
-WS_URL = "queue/join"
-UPLOAD_URL = "upload"
-CONFIG_URL = "config"
-API_INFO_URL = "info"
-RAW_API_INFO_URL = "info?serialize=False"
-SPACE_FETCHER_URL = "https://gradio-space-api-fetcher-v2.hf.space/api"
-RESET_URL = "reset"
-SPACE_URL = "https://hf.space/{}"
-
-SKIP_COMPONENTS = {
- "state",
- "row",
- "column",
- "tabs",
- "tab",
- "tabitem",
- "box",
- "form",
- "accordion",
- "group",
- "interpretation",
- "dataset",
-}
-STATE_COMPONENT = "state"
-INVALID_RUNTIME = [
- SpaceStage.NO_APP_FILE,
- SpaceStage.CONFIG_ERROR,
- SpaceStage.BUILD_ERROR,
- SpaceStage.RUNTIME_ERROR,
- SpaceStage.PAUSED,
-]
-
-__version__ = (pkgutil.get_data(__name__, "version.txt") or b"").decode("ascii").strip()
-
-
-class TooManyRequestsError(Exception):
- """Raised when the API returns a 429 status code."""
-
- pass
-
-
-class QueueError(Exception):
- """Raised when the queue is full or there is an issue adding a job to the queue."""
-
- pass
-
-
-class InvalidAPIEndpointError(Exception):
- """Raised when the API endpoint is invalid."""
-
- pass
-
-
-class SpaceDuplicationError(Exception):
- """Raised when something goes wrong with a Space Duplication."""
-
- pass
-
-
-class Status(Enum):
- """Status codes presented to client users."""
-
- STARTING = "STARTING"
- JOINING_QUEUE = "JOINING_QUEUE"
- QUEUE_FULL = "QUEUE_FULL"
- IN_QUEUE = "IN_QUEUE"
- SENDING_DATA = "SENDING_DATA"
- PROCESSING = "PROCESSING"
- ITERATING = "ITERATING"
- PROGRESS = "PROGRESS"
- FINISHED = "FINISHED"
- CANCELLED = "CANCELLED"
-
- @staticmethod
- def ordering(status: Status) -> int:
- """Order of messages. Helpful for testing."""
- order = [
- Status.STARTING,
- Status.JOINING_QUEUE,
- Status.QUEUE_FULL,
- Status.IN_QUEUE,
- Status.SENDING_DATA,
- Status.PROCESSING,
- Status.PROGRESS,
- Status.ITERATING,
- Status.FINISHED,
- Status.CANCELLED,
- ]
- return order.index(status)
-
- def __lt__(self, other: Status):
- return self.ordering(self) < self.ordering(other)
-
- @staticmethod
- def msg_to_status(msg: str) -> Status:
- """Map the raw message from the backend to the status code presented to users."""
- return {
- "send_hash": Status.JOINING_QUEUE,
- "queue_full": Status.QUEUE_FULL,
- "estimation": Status.IN_QUEUE,
- "send_data": Status.SENDING_DATA,
- "process_starts": Status.PROCESSING,
- "process_generating": Status.ITERATING,
- "process_completed": Status.FINISHED,
- "progress": Status.PROGRESS,
- }[msg]
-
-
-@dataclass
-class ProgressUnit:
- index: Optional[int]
- length: Optional[int]
- unit: Optional[str]
- progress: Optional[float]
- desc: Optional[str]
-
- @classmethod
- def from_ws_msg(cls, data: list[dict]) -> list[ProgressUnit]:
- return [
- cls(
- index=d.get("index"),
- length=d.get("length"),
- unit=d.get("unit"),
- progress=d.get("progress"),
- desc=d.get("desc"),
- )
- for d in data
- ]
-
-
-@dataclass
-class StatusUpdate:
- """Update message sent from the worker thread to the Job on the main thread."""
-
- code: Status
- rank: int | None
- queue_size: int | None
- eta: float | None
- success: bool | None
- time: datetime | None
- progress_data: list[ProgressUnit] | None
-
-
-def create_initial_status_update():
- return StatusUpdate(
- code=Status.STARTING,
- rank=None,
- queue_size=None,
- eta=None,
- success=None,
- time=datetime.now(),
- progress_data=None,
- )
-
-
-@dataclass
-class JobStatus:
- """The job status.
-
-    Keeps track of the latest status update and intermediate outputs (not yet implemented).
- """
-
- latest_status: StatusUpdate = field(default_factory=create_initial_status_update)
- outputs: list[Any] = field(default_factory=list)
-
-
-@dataclass
-class Communicator:
- """Helper class to help communicate between the worker thread and main thread."""
-
- lock: Lock
- job: JobStatus
- prediction_processor: Callable[..., tuple]
- reset_url: str
- should_cancel: bool = False
-
-
-########################
-# Network utils
-########################
-
-
-def is_http_url_like(possible_url: str) -> bool:
- """
- Check if the given string looks like an HTTP(S) URL.
- """
- return possible_url.startswith(("http://", "https://"))
-
-
-def probe_url(possible_url: str) -> bool:
- """
- Probe the given URL to see if it responds with a 200 status code (to HEAD, then to GET).
- """
- headers = {"User-Agent": "gradio (https://gradio.app/; team@gradio.app)"}
- try:
- with requests.session() as sess:
- head_request = sess.head(possible_url, headers=headers)
- if head_request.status_code == 405:
- return sess.get(possible_url, headers=headers).ok
- return head_request.ok
- except Exception:
- return False
-
-
-def is_valid_url(possible_url: str) -> bool:
- """
- Check if the given string is a valid URL.
- """
- warnings.warn(
- "is_valid_url should not be used. "
- "Use is_http_url_like() and probe_url(), as suitable, instead.",
- )
- return is_http_url_like(possible_url) and probe_url(possible_url)
-
-
-async def get_pred_from_ws(
- websocket: WebSocketCommonProtocol,
- data: str,
- hash_data: str,
- helper: Communicator | None = None,
-) -> dict[str, Any]:
- completed = False
- resp = {}
- while not completed:
- # Receive message in the background so that we can
- # cancel even while running a long pred
- task = asyncio.create_task(websocket.recv())
- while not task.done():
- if helper:
- with helper.lock:
- if helper.should_cancel:
- # Need to reset the iterator state since the client
- # will not reset the session
- async with httpx.AsyncClient() as http:
- reset = http.post(
- helper.reset_url, json=json.loads(hash_data)
- )
- # Retrieve cancel exception from task
- # otherwise will get nasty warning in console
- task.cancel()
- await asyncio.gather(task, reset, return_exceptions=True)
- raise CancelledError()
- # Need to suspend this coroutine so that task actually runs
- await asyncio.sleep(0.01)
- msg = task.result()
- resp = json.loads(msg)
- if helper:
- with helper.lock:
- has_progress = "progress_data" in resp
- status_update = StatusUpdate(
- code=Status.msg_to_status(resp["msg"]),
- queue_size=resp.get("queue_size"),
- rank=resp.get("rank", None),
- success=resp.get("success"),
- time=datetime.now(),
- eta=resp.get("rank_eta"),
- progress_data=ProgressUnit.from_ws_msg(resp["progress_data"])
- if has_progress
- else None,
- )
- output = resp.get("output", {}).get("data", [])
- if output and status_update.code != Status.FINISHED:
- try:
- result = helper.prediction_processor(*output)
- except Exception as e:
- result = [e]
- helper.job.outputs.append(result)
- helper.job.latest_status = status_update
- if resp["msg"] == "queue_full":
- raise QueueError("Queue is full! Please try again.")
- if resp["msg"] == "send_hash":
- await websocket.send(hash_data)
- elif resp["msg"] == "send_data":
- await websocket.send(data)
- completed = resp["msg"] == "process_completed"
- return resp["output"]
-
-
-########################
-# Data processing utils
-########################
-
-
-def download_tmp_copy_of_file(
- url_path: str, hf_token: str | None = None, dir: str | None = None
-) -> str:
- if dir is not None:
- os.makedirs(dir, exist_ok=True)
- headers = {"Authorization": "Bearer " + hf_token} if hf_token else {}
- directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20)
- directory.mkdir(exist_ok=True, parents=True)
- file_path = directory / Path(url_path).name
-
- with requests.get(url_path, headers=headers, stream=True) as r:
- r.raise_for_status()
- with open(file_path, "wb") as f:
- shutil.copyfileobj(r.raw, f)
- return str(file_path.resolve())
-
-
-def create_tmp_copy_of_file(file_path: str, dir: str | None = None) -> str:
- directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20)
- directory.mkdir(exist_ok=True, parents=True)
- dest = directory / Path(file_path).name
- shutil.copy2(file_path, dest)
- return str(dest.resolve())
-
-
-def get_mimetype(filename: str) -> str | None:
- if filename.endswith(".vtt"):
- return "text/vtt"
- mimetype = mimetypes.guess_type(filename)[0]
- if mimetype is not None:
- mimetype = mimetype.replace("x-wav", "wav").replace("x-flac", "flac")
- return mimetype
-
-
-def get_extension(encoding: str) -> str | None:
- encoding = encoding.replace("audio/wav", "audio/x-wav")
- type = mimetypes.guess_type(encoding)[0]
- if type == "audio/flac": # flac is not supported by mimetypes
- return "flac"
- elif type is None:
- return None
- extension = mimetypes.guess_extension(type)
- if extension is not None and extension.startswith("."):
- extension = extension[1:]
- return extension
-
-
-def encode_file_to_base64(f: str | Path):
- with open(f, "rb") as file:
- encoded_string = base64.b64encode(file.read())
- base64_str = str(encoded_string, "utf-8")
- mimetype = get_mimetype(str(f))
- return (
- "data:"
- + (mimetype if mimetype is not None else "")
- + ";base64,"
- + base64_str
- )
-
-
-def encode_url_to_base64(url: str):
- resp = requests.get(url)
- resp.raise_for_status()
- encoded_string = base64.b64encode(resp.content)
- base64_str = str(encoded_string, "utf-8")
- mimetype = get_mimetype(url)
- return (
- "data:" + (mimetype if mimetype is not None else "") + ";base64," + base64_str
- )
-
-
-def encode_url_or_file_to_base64(path: str | Path):
- path = str(path)
- if is_http_url_like(path):
- return encode_url_to_base64(path)
- return encode_file_to_base64(path)
-
-
-def decode_base64_to_binary(encoding: str) -> tuple[bytes, str | None]:
- extension = get_extension(encoding)
- data = encoding.rsplit(",", 1)[-1]
- return base64.b64decode(data), extension
-
-
-def strip_invalid_filename_characters(filename: str, max_bytes: int = 200) -> str:
-    """Strips invalid characters from a filename and truncates it so that its encoded length is at most `max_bytes` bytes."""
- filename = "".join([char for char in filename if char.isalnum() or char in "._- "])
- filename_len = len(filename.encode())
- if filename_len > max_bytes:
- while filename_len > max_bytes:
- if len(filename) == 0:
- break
- filename = filename[:-1]
- filename_len = len(filename.encode())
- return filename
-
-
-def sanitize_parameter_names(original_name: str) -> str:
- """Cleans up a Python parameter name to make the API info more readable."""
- return (
- "".join([char for char in original_name if char.isalnum() or char in " _"])
- .replace(" ", "_")
- .lower()
- )
-
-
-def decode_base64_to_file(
- encoding: str,
- file_path: str | None = None,
- dir: str | Path | None = None,
- prefix: str | None = None,
-):
- directory = Path(dir or tempfile.gettempdir()) / secrets.token_hex(20)
- directory.mkdir(exist_ok=True, parents=True)
- data, extension = decode_base64_to_binary(encoding)
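-    # When an explicit file_path is given (and no prefix was passed), derive the prefix and extension from its name, split at the first dot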
- if file_path is not None and prefix is None:
- filename = Path(file_path).name
- prefix = filename
- if "." in filename:
- prefix = filename[0 : filename.index(".")]
- extension = filename[filename.index(".") + 1 :]
-
- if prefix is not None:
- prefix = strip_invalid_filename_characters(prefix)
-
- if extension is None:
- file_obj = tempfile.NamedTemporaryFile(
- delete=False, prefix=prefix, dir=directory
- )
- else:
- file_obj = tempfile.NamedTemporaryFile(
- delete=False,
- prefix=prefix,
- suffix="." + extension,
- dir=directory,
- )
- file_obj.write(data)
- file_obj.flush()
- return file_obj
-
-
-def dict_or_str_to_json_file(jsn: str | dict | list, dir: str | Path | None = None):
- if dir is not None:
- os.makedirs(dir, exist_ok=True)
-
- file_obj = tempfile.NamedTemporaryFile(
- delete=False, suffix=".json", dir=dir, mode="w+"
- )
- if isinstance(jsn, str):
- jsn = json.loads(jsn)
- json.dump(jsn, file_obj)
- file_obj.flush()
- return file_obj
-
-
-def file_to_json(file_path: str | Path) -> dict | list:
- with open(file_path) as f:
- return json.load(f)
-
-
-###########################
-# HuggingFace Hub API Utils
-###########################
-def set_space_timeout(
- space_id: str,
- hf_token: str | None = None,
- timeout_in_seconds: int = 300,
-):
- headers = huggingface_hub.utils.build_hf_headers(
- token=hf_token,
- library_name="gradio_client",
- library_version=__version__,
- )
- req = requests.post(
- f"https://huggingface.co/api/spaces/{space_id}/sleeptime",
- json={"seconds": timeout_in_seconds},
- headers=headers,
- )
- try:
- huggingface_hub.utils.hf_raise_for_status(req)
- except huggingface_hub.utils.HfHubHTTPError as err:
- raise SpaceDuplicationError(
- f"Could not set sleep timeout on duplicated Space. Please visit {SPACE_URL.format(space_id)} "
- "to set a timeout manually to reduce billing charges."
- ) from err
-
-
-########################
-# Misc utils
-########################
-
-
-def synchronize_async(func: Callable, *args, **kwargs) -> Any:
- """
- Runs async functions in sync scopes. Can be used in any scope.
-
- Example:
- if inspect.iscoroutinefunction(block_fn.fn):
- predictions = utils.synchronize_async(block_fn.fn, *processed_input)
-
- Args:
- func:
- *args:
- **kwargs:
- """
- return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
-
-
-class APIInfoParseError(ValueError):
- pass
-
-
-def get_type(schema: dict):
- if "type" in schema:
- return schema["type"]
- elif schema.get("oneOf"):
- return "oneOf"
- elif schema.get("anyOf"):
- return "anyOf"
- else:
- raise APIInfoParseError(f"Cannot parse type for {schema}")
-
-
-def json_schema_to_python_type(schema: Any) -> str:
- """Convert the json schema into a python type hint"""
- type_ = get_type(schema)
- if type_ == {}:
- if "json" in schema["description"]:
- return "Dict[Any, Any]"
- else:
- return "Any"
- elif type_ == "null":
- return "None"
- elif type_ == "integer":
- return "int"
- elif type_ == "string":
- return "str"
- elif type_ == "boolean":
- return "bool"
- elif type_ == "number":
- return "int | float"
- elif type_ == "array":
- items = schema.get("items")
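-        # "prefixItems" describes a fixed-length tuple; a plain "items" schema describes a homogeneous list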
- if "prefixItems" in items:
- elements = ", ".join(
- [json_schema_to_python_type(i) for i in items["prefixItems"]]
- )
- return f"Tuple[{elements}]"
- else:
- elements = json_schema_to_python_type(items)
- return f"List[{elements}]"
- elif type_ == "object":
- des = ", ".join(
- [
- f"{n}: {json_schema_to_python_type(v)} ({v.get('description')})"
- for n, v in schema["properties"].items()
- ]
- )
- return f"Dict({des})"
- elif type_ in ["oneOf", "anyOf"]:
- desc = " | ".join([json_schema_to_python_type(i) for i in schema[type_]])
- return desc
- else:
- raise APIInfoParseError(f"Cannot parse schema {schema}")
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py
deleted file mode 100644
index ff26fa9de50f607ca78a24c5041010b4d629c148..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/commands/scan_cache.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# coding=utf-8
-# Copyright 2022-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contains command to scan the HF cache directory.
-
-Usage:
- huggingface-cli scan-cache
- huggingface-cli scan-cache -v
- huggingface-cli scan-cache -vvv
- huggingface-cli scan-cache --dir ~/.cache/huggingface/hub
-"""
-import time
-from argparse import _SubParsersAction
-from typing import Optional
-
-from ..utils import CacheNotFound, HFCacheInfo, scan_cache_dir
-from . import BaseHuggingfaceCLICommand
-from ._cli_utils import ANSI, tabulate
-
-
-class ScanCacheCommand(BaseHuggingfaceCLICommand):
- @staticmethod
- def register_subcommand(parser: _SubParsersAction):
- scan_cache_parser = parser.add_parser("scan-cache", help="Scan cache directory.")
-
- scan_cache_parser.add_argument(
- "--dir",
- type=str,
- default=None,
-            help="cache directory to scan (optional). Defaults to the default HuggingFace cache.",
- )
- scan_cache_parser.add_argument(
- "-v",
- "--verbose",
- action="count",
- default=0,
- help="show a more verbose output",
- )
- scan_cache_parser.set_defaults(func=ScanCacheCommand)
-
- def __init__(self, args):
- self.verbosity: int = args.verbose
- self.cache_dir: Optional[str] = args.dir
-
- def run(self):
- try:
- t0 = time.time()
- hf_cache_info = scan_cache_dir(self.cache_dir)
- t1 = time.time()
- except CacheNotFound as exc:
- cache_dir = exc.cache_dir
- print(f"Cache directory not found: {cache_dir}")
- return
-
- self._print_hf_cache_info_as_table(hf_cache_info)
-
- print(
- f"\nDone in {round(t1-t0,1)}s. Scanned {len(hf_cache_info.repos)} repo(s)"
- f" for a total of {ANSI.red(hf_cache_info.size_on_disk_str)}."
- )
- if len(hf_cache_info.warnings) > 0:
- message = f"Got {len(hf_cache_info.warnings)} warning(s) while scanning."
- if self.verbosity >= 3:
- print(ANSI.gray(message))
- for warning in hf_cache_info.warnings:
- print(ANSI.gray(warning))
- else:
- print(ANSI.gray(message + " Use -vvv to print details."))
-
- def _print_hf_cache_info_as_table(self, hf_cache_info: HFCacheInfo) -> None:
- if self.verbosity == 0:
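-            # Default (non-verbose) view: one summary row per cached repo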
- print(
- tabulate(
- rows=[
- [
- repo.repo_id,
- repo.repo_type,
- "{:>12}".format(repo.size_on_disk_str),
- repo.nb_files,
- repo.last_accessed_str,
- repo.last_modified_str,
- ", ".join(sorted(repo.refs)),
- str(repo.repo_path),
- ]
- for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path)
- ],
- headers=[
- "REPO ID",
- "REPO TYPE",
- "SIZE ON DISK",
- "NB FILES",
- "LAST_ACCESSED",
- "LAST_MODIFIED",
- "REFS",
- "LOCAL PATH",
- ],
- )
- )
- else:
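-            # Verbose view: one row per cached revision of each repo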
- print(
- tabulate(
- rows=[
- [
- repo.repo_id,
- repo.repo_type,
- revision.commit_hash,
- "{:>12}".format(revision.size_on_disk_str),
- revision.nb_files,
- revision.last_modified_str,
- ", ".join(sorted(revision.refs)),
- str(revision.snapshot_path),
- ]
- for repo in sorted(hf_cache_info.repos, key=lambda repo: repo.repo_path)
- for revision in sorted(repo.revisions, key=lambda revision: revision.commit_hash)
- ],
- headers=[
- "REPO ID",
- "REPO TYPE",
- "REVISION",
- "SIZE ON DISK",
- "NB FILES",
- "LAST_MODIFIED",
- "REFS",
- "LOCAL PATH",
- ],
- )
- )
diff --git a/spaces/derek-thomas/RAGDemo/backend/semantic_search.py b/spaces/derek-thomas/RAGDemo/backend/semantic_search.py
deleted file mode 100644
index 653cf44d4d345fe165b229babaec744cf774d476..0000000000000000000000000000000000000000
--- a/spaces/derek-thomas/RAGDemo/backend/semantic_search.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import time
-import logging
-from qdrant_haystack import QdrantDocumentStore
-from haystack.nodes import EmbeddingRetriever
-from pathlib import Path
-
-# Setting up the logging
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-# Start the timer for loading the QdrantDocumentStore
-start_time = time.perf_counter()
-
-proj_dir = Path(__file__).parents[1]
-qd_document_store = QdrantDocumentStore(path=str(proj_dir/'Qdrant'), index='RAGDemo')
-
-# Log the time taken to load the QdrantDocumentStore
-document_store_loading_time = time.perf_counter() - start_time
-logger.info(f"Time taken to load QdrantDocumentStore: {document_store_loading_time:.6f} seconds")
-
-# Start the timer for loading the EmbeddingRetriever
-start_time = time.perf_counter()
-
-qd_retriever = EmbeddingRetriever(document_store=qd_document_store,
- embedding_model="BAAI/bge-base-en-v1.5",
- model_format="sentence_transformers",
- use_gpu=False)
-
-# Log the time taken to load the EmbeddingRetriever
-retriever_loading_time = time.perf_counter() - start_time
-logger.info(f"Time taken to load EmbeddingRetriever: {retriever_loading_time:.6f} seconds")
diff --git a/spaces/dhof/shapetest/README.md b/spaces/dhof/shapetest/README.md
deleted file mode 100644
index 28c652015c6a0211f7d9edebd73b9131214eec98..0000000000000000000000000000000000000000
--- a/spaces/dhof/shapetest/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Shap-E
-emoji: 🧢
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.28.3
-python_version: 3.10.11
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: hysts/Shap-E
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md b/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md
deleted file mode 100644
index 406b9c6c76958e8d3d869b68ba320828f8d96e8c..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Easeus Partition Master Key 13.8 Technician Portable BETTER.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
"
-examples = [["chicken_curry.jpg"], ["garlic_bread.jpg"], ["takoyaki.jpg"]]
-enable_queue = True
-
-gr.Interface(
- fn=predict,
- inputs=gr.inputs.Image(shape=(224, 224)),
- outputs=gr.outputs.Label(num_top_classes=3),
- title=title,
- description=description,
- article=article,
- examples=examples,
- enable_queue=enable_queue,
-).launch()
diff --git a/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md b/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md
deleted file mode 100644
index cf0988f4fbb38156b8d1de0dcee271acde962d1d..0000000000000000000000000000000000000000
--- a/spaces/epexVfeibi/Imagedeblurr/!!BETTER!! Full Version Scriptcase 6 Serial Number.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
Full Version Scriptcase 6 Serial Number: What You Need to Know
-
-
If you are looking for a powerful and easy-to-use web development tool, you might have heard of Scriptcase. Scriptcase is a low-code platform that allows you to create web applications using only your browser. You can import data from spreadsheets, databases, or external sources, and generate forms, reports, charts, dashboards, and more.
Scriptcase is compatible with various databases, such as MySQL, PostgreSQL, SQLite, InterBase, Firebird, MS Access, Oracle, SQL Server, DB2, Sybase, and Informix, or any database accessed through an ODBC layer. You can also customize your applications with your own business rules, using external libraries, the programming IDE (Blank), Events, Macros, and other features.
-
-
Scriptcase has different versions for different needs and budgets. You can choose from Express Edition, Professional Edition, Enterprise Edition, or Cloud Edition. Each version has different features and limitations. For example, the Express Edition is free but has a limit of 10 projects and 2 connections.
-
-
But what if you want to get the full version of Scriptcase 6 with all the features and no limitations? You will need a valid serial number to activate it. A serial number is a unique code that identifies your license and allows you to use Scriptcase without restrictions.
-
-
How to Get a Full Version Scriptcase 6 Serial Number
-
-
There are two ways to get a full version Scriptcase 6 serial number: buying it or cracking it.
-
-
The first option is to buy it from the official website of Scriptcase. You can choose the edition that suits your needs and pay with your credit card or PayPal. You will receive an email with your serial number and instructions on how to activate it. This is the legal and safe way to get a full version Scriptcase 6 serial number.
-
-
The second option is to crack it. This means using a software or a tool that generates a fake serial number that bypasses the activation process of Scriptcase. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You might find some websites or blogs that offer cracked versions of Scriptcase 6 or serial number generators. However, these are not reliable sources and might contain viruses, malware, or spyware that can harm your computer or steal your personal information. Moreover, you might face legal consequences for violating the terms and conditions of Scriptcase.
-
-
Why You Should Avoid Cracking Scriptcase 6
-
-
Cracking Scriptcase 6 might seem tempting if you want to save money or try it before buying it. However, there are many reasons why you should avoid cracking Scriptcase 6 and opt for the legal way instead.
-
-
-
-
Cracking Scriptcase 6 is illegal and unethical. You are breaking the law and disrespecting the work of the developers who created Scriptcase. You are also depriving them of their income and support.
-
Cracking Scriptcase 6 is unsafe and unreliable. You are exposing your computer and your data to potential threats from viruses, malware, or spyware that might come with the cracked version or the serial number generator. You are also risking losing your work or getting errors or bugs in your applications.
-
Cracking Scriptcase 6 is unprofessional and unproductive. You are missing out on the benefits of having a legitimate license of Scriptcase 6, such as updates, support, documentation, tutorials, community forums, and more. You are also limiting your opportunities and credibility as a web developer.
-
-
-
Conclusion
-
-
Scriptcase 6 is a great web development tool that can help you create web applications faster and easier. However, if you want to get the full version of Scriptcase 6 with all the features and no limitations, you need a valid serial number to activate it.
-
-
The best way to get a full version Scriptcase 6 serial number is to buy it from the official website of Scriptcase. This is the legal and safe way to get a full version Scriptcase 6 serial number. You will also enjoy the benefits of having a legitimate license of Scriptcase 6.
-
-
The worst way to get a full version Scriptcase 6 serial number is to crack it. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You will also face the drawbacks of having a cracked version of Scriptcase 6.
-
-
Therefore, we recommend that you avoid cracking Scriptcase 6 and opt for the legal route instead. This protects your security and supports the quality, professionalism, and productivity of your work as a web developer.
-
How to Use Scriptcase 6
-
-
Once you have activated your full version Scriptcase 6 serial number, you can start using Scriptcase 6 to create web applications. The process is simple and intuitive. You just need to follow these steps:
-
-
-
Create a new project or open an existing one.
-
Select the database connection that you want to use for your application.
-
Create a new application or edit an existing one. You can choose from different types of applications, such as Form, Grid, Chart, Calendar, Control, Report, Dashboard, etc.
-
Configure the settings and options of your application, such as layout, fields, buttons, events, validations, filters, etc.
-
Generate the source code of your application and run it in your browser.
-
Publish your application to a web server or export it to a file.
-
-
-
Scriptcase 6 also provides you with tools and resources to help you with your web development. You can access the documentation, tutorials, videos, samples, forums, support, and more from the Scriptcase menu.
-
-
Benefits of Using Scriptcase 6
-
-
Using Scriptcase 6 with a full version serial number has many benefits for web developers. Here are some of them:
-
-
-
You can create web applications faster and easier than coding from scratch.
-
You can use a visual interface to design and develop your applications without writing a single line of code.
-
You can import data from different sources and generate applications based on them.
-
You can create responsive and cross-browser applications that work on any device and platform.
-
You can customize your applications with your own business rules and logic.
-
You can integrate your applications with external libraries and APIs.
-
You can collaborate with other developers on the same project using the cloud edition or the team development feature.
-
You can update and maintain your applications easily with the automatic versioning and backup system.
-
-
-
-
How to Download and Install Scriptcase 6
-
-
If you have bought a full version Scriptcase 6 serial number, you can download and install Scriptcase 6 on your computer or on a web server. The process is simple and straightforward. You just need to follow these steps:
-
-
-
Go to the official website of Scriptcase and log in with your email and password.
-
Go to the download section and choose the version that matches your operating system (Windows, Linux, or Mac).
-
Download the installer file and save it on your computer or on a web server.
-
Run the installer file and follow the instructions on the screen.
-
Enter your full version Scriptcase 6 serial number when prompted.
-
Finish the installation and launch Scriptcase 6.
-
-
-
You can also watch the video tutorials on the website of Scriptcase to see how to download and install Scriptcase 6 step by step.
-
-
How to Update and Upgrade Scriptcase 6
-
-
If you have a full version Scriptcase 6 serial number, you can update and upgrade Scriptcase 6 to get the latest features and improvements. The process is easy and convenient. You just need to follow these steps:
-
-
-
Open Scriptcase 6 and go to Help > Check for updates.
-
If there are any updates available, click on Download and Install.
-
Wait for the updates to be downloaded and installed.
-
Restart Scriptcase 6 and enjoy the new features and improvements.
-
-
-
If you want to upgrade your edition of Scriptcase 6, you can do so by going to the website of Scriptcase and choosing the edition that you want to upgrade to. You will need to pay the difference between your current edition and the new edition. You will receive an email with your new serial number and instructions on how to activate it.
-
-
You can also watch the video tutorials on the website of Scriptcase to see how to update and upgrade Scriptcase 6 step by step.
-
How to Troubleshoot Scriptcase 6
-
-
If you encounter any problems or issues while using Scriptcase 6 with a full version serial number, you can troubleshoot them by following these steps:
-
-
-
Check the system requirements of Scriptcase 6 and make sure your computer or web server meets them.
-
Check the compatibility of Scriptcase 6 with your browser and make sure you are using the latest version.
-
Check the connection settings of Scriptcase 6 and make sure they are correct and valid.
-
Check the error logs of Scriptcase 6 and see if there are any messages or codes that indicate the cause of the problem.
-
Check the documentation and FAQ of Scriptcase 6 and see if there are any solutions or tips for your problem.
-
Contact the support team of Scriptcase 6 and report your problem with details and screenshots.
-
-
-
You can also watch the video tutorials on the website of Scriptcase to see how to troubleshoot Scriptcase 6 step by step.
-
-
How to Uninstall Scriptcase 6
-
-
If you want to uninstall Scriptcase 6 from your computer or web server, you can do so by following these steps:
-
-
-
Backup your projects and data before uninstalling Scriptcase 6.
-
Go to the control panel of your computer or web server and find the program list.
-
Select Scriptcase 6 and click on Uninstall.
-
Follow the instructions on the screen to complete the uninstallation process.
-
Delete any remaining files or folders related to Scriptcase 6 from your computer or web server.
-
-
-
You can also watch the video tutorials on the website of Scriptcase to see how to uninstall Scriptcase 6 step by step.
-
Conclusion
-
-
Scriptcase 6 is a powerful and easy-to-use web development tool that can help you create web applications faster and easier. However, if you want to get the full version of Scriptcase 6 with all the features and no limitations, you need a valid serial number to activate it.
-
-
The best way to get a full version Scriptcase 6 serial number is to buy it from the official website of Scriptcase. This is the legal and safe way to get a full version Scriptcase 6 serial number. You will also enjoy the benefits of having a legitimate license of Scriptcase 6, such as updates, support, documentation, tutorials, community forums, and more.
-
-
The worst way to get a full version Scriptcase 6 serial number is to crack it. This is an illegal and risky way to get a full version Scriptcase 6 serial number. You will also face the drawbacks of having a cracked version of Scriptcase 6, such as viruses, malware, spyware, errors, bugs, legal consequences, and more.
-
-
Therefore, we recommend that you avoid cracking Scriptcase 6 and opt for the legal route instead. This protects your security and supports the quality, professionalism, and productivity of your work as a web developer.
-
-
We hope this article has helped you to understand what Scriptcase 6 is and how to get a full version Scriptcase 6 serial number. If you have any questions or feedback, please feel free to contact us or leave a comment below.
-
-
\ No newline at end of file
diff --git a/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md b/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md
deleted file mode 100644
index cb25045577f967a29d525cd31e263af689d4cb8c..0000000000000000000000000000000000000000
--- a/spaces/epexVfeibi/Imagedeblurr/!!TOP!! Download Tally Ees V6 3 Release 1 Crack Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Results
-
-If, after installing the official AOC driver, nothing is displayed on the screen and your computer cannot communicate with the external monitor, check your monitor and your computer manufacturer's website to ensure that the monitor is compatible with your computer.
-
-## 0.3. Run the driver setup utility to install the Universal drivers
-
-In this tutorial, we will install the Universal driver for AOC monitors.
-
-1. Open the Device Manager by clicking on the Start button, selecting Control Panel, and then selecting Programs and Features. In the left pane, select View By and then select By Category. Select Hardware and Sound from the left pane and then select Device Manager from the middle pane. Expand the display driver list in the left pane and then select Display adapters from the middle pane. Select Universal from the list. If there
-
-
-
diff --git a/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md b/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md
deleted file mode 100644
index fbbdc5323e5001f315cde85b145904f6ac82223b..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download FIFA Romania and get ready for the new 23 season with updated players kits and clubs.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
Download FIFA Romania: How to Play the Ultimate Football Game with Your Favorite Romanian Teams and Players
-
If you are a fan of football and you love Romania, you might be interested in downloading FIFA Romania, a modded version of the popular FIFA game that features Romanian teams, players, stadiums, and more. In this article, we will show you how to download FIFA Romania, how to play it, and why you should give it a try.
-
Introduction
-
What is FIFA Romania?
-
FIFA Romania is a modification of the original FIFA game that adds Romanian elements to the game. It was created by a group of passionate Romanian fans who wanted to bring their country's football culture to the virtual world. FIFA Romania includes:
Over 40 Romanian teams from Liga 1, Liga 2, and Liga 3
-
Over 1000 Romanian players with realistic faces, attributes, and ratings
-
Over 20 Romanian stadiums with authentic details and atmosphere
-
Romanian commentators, chants, banners, logos, kits, and graphics
-
Romanian national team with updated roster and kits
-
-
Why should you download FIFA Romania?
-
Downloading FIFA Romania has many benefits for football lovers. Here are some of them:
-
-
You can play with your favorite Romanian teams and players in various game modes, such as career mode, tournament mode, or online mode
-
You can experience the thrill of the Romanian football league, with realistic fixtures, standings, and statistics
-
You can enjoy the authentic Romanian football atmosphere, with lively crowds, chants, and commentary
-
You can support the Romanian football community and show your patriotism
-
You can have fun and challenge yourself with a different and unique football game
-
-
How to download FIFA Romania
-
Requirements and compatibility
-
To download FIFA Romania, you will need the following:
-
-
A PC or laptop with Windows operating system (Windows 7 or higher)
-
A copy of FIFA 14 or FIFA 15 installed on your device (you can buy it online or from a local store)
-
At least 10 GB of free disk space on your device
-
A stable internet connection for downloading and installing the mod
-
A compatible controller or keyboard for playing the game (optional)
-
-
FIFA Romania is compatible with both FIFA 14 and FIFA 15 versions. However, some features may vary depending on the version you have. For example, FIFA 15 has better graphics and gameplay than FIFA 14, but FIFA 14 has more Romanian teams and players than FIFA 15.
-
Steps to download and install FIFA Romania
-
To download and install FIFA Romania, follow these steps:
-
-
Go to the official website of FIFA Romania (www.fifaromania.net) and register an account (it's free)
-
Log in to your account and go to the download section
-
Select the version of FIFA Romania that matches your version of FIFA (FIFA 14 or FIFA 15)
-
Download the mod file (it's a zip file that contains several folders)
-
Extract the mod file to a folder on your device (you can use WinRAR or any other extraction software)
-
Copy the contents of the extracted folder to the folder where you installed FIFA on your device (usually C:\Program Files\EA Sports\FIFA)
-
Run the game as administrator (right-click on the game icon and select "Run as administrator")
-
Enjoy playing FIFA Romania!
-
-
Tips and tricks to optimize your gaming experience
-
Here are some tips and tricks to make the most out of FIFA Romania:
-
-
Update your device drivers and software to ensure smooth performance and compatibility
-
Adjust the game settings according to your device specifications and preferences (you can change the resolution, graphics, sound, controls, etc.)
-
Backup your original FIFA files before installing the mod, in case you want to revert to the original game or uninstall the mod
-
Check the official website of FIFA Romania regularly for updates, patches, and news about the mod
-
Join the online community of FIFA Romania players and fans on social media, forums, and blogs, where you can share your feedback, suggestions, questions, and experiences
-
-
How to play FIFA Romania
-
Game modes and features
-
FIFA Romania offers a variety of game modes and features that will keep you entertained and challenged. Here are some of them:
-
-
Career mode: You can create your own manager or player and lead your team to glory in the Romanian league or the international stage. You can also transfer players, negotiate contracts, scout talents, and manage your finances.
-
Tournament mode: You can choose from various tournaments, such as the Romanian Cup, the Romanian Super Cup, the UEFA Champions League, or the FIFA World Cup, and compete with other teams for the trophy.
-
Online mode: You can play online with other FIFA Romania players from around the world, either in friendly matches or in ranked matches. You can also join or create online leagues and tournaments with your friends or other players.
-
Ultimate team mode: You can create your own dream team by collecting and trading cards of players, managers, stadiums, kits, and more. You can also compete with other players' teams in various modes, such as seasons, tournaments, or drafts.
-
Skill games mode: You can practice and improve your skills in various aspects of the game, such as dribbling, passing, shooting, defending, or goalkeeping. You can also challenge yourself with different levels of difficulty and earn rewards.
-
-
How to create and customize your own team
-
If you want to create and customize your own team in FIFA Romania, you can do so by following these steps:
-
-
Go to the main menu and select "Customize"
-
Select "Create team"
-
Enter a name for your team and choose a country (you can choose Romania or any other country)
-
Select a league for your team (you can choose from Liga 1, Liga 2, Liga 3, or Rest of World)
-
Select a stadium for your team (you can choose from any of the Romanian stadiums or any other stadium)
-
Select a kit for your team (you can choose from any of the Romanian kits or any other kit)
-
Select a logo for your team (you can choose from any of the Romanian logos or any other logo)
-
Select a squad for your team (you can choose from any of the Romanian players or any other players)
-
Save your team and exit
-
-
You can also edit or delete your team at any time by going to "Customize" > "Edit teams"
-
-
How to compete online and offline with other players
-
If you want to compete online and offline with other players in FIFA Romania, you have several options:
-
-
If you want to play online with other FIFA Romania players from around the world, you can go to "Online" > "Seasons" or "Tournaments" and select a mode that suits you. You can also go to "Online" > "Match lobbies" and join or create a lobby with your preferred settings.
-
If you want to play online with your friends or other players that you know, you can go to "Online" > "Friendlies" and invite them to a match. You can also go to "Online" > "Custom games" and create a game with your own rules and settings.
-
If you want to play offline with another player on the same device, you can go to "Kick off" > "Match" and select a mode that suits you. You can also go to "Kick off" > "Custom games" and create a game with your own rules and settings.
-
If you want to play offline with another player on a different device, you can go to "Kick off" > "LAN Play" and connect your devices via LAN cable or Wi-Fi and select a mode that suits you. You can also go to "Kick off" > "Custom games" and create a game with your own rules and settings.
-
-
Conclusion
-
Summary of the main points
-
In conclusion, FIFA Romania is a modded version of the original FIFA game that features Romanian teams, players, stadiums, and more. It is a fun and unique way to enjoy football and support Romania. To download FIFA Romania, you need to have FIFA 14 or FIFA 15 installed on your device, and then follow the steps to download and install the mod from the official website. You can play FIFA Romania in various game modes and features, such as career mode, tournament mode, online mode, ultimate team mode, or skill games mode. You can also create and customize your own team, and compete online and offline with other players.
-
Call to action and invitation to share feedback
-
If you are interested in downloading FIFA Romania, you can visit the official website (www.fifaromania.net) and register an account for free. You can also join the online community of FIFA Romania players and fans on social media, forums, and blogs, where you can share your feedback, suggestions, questions, and experiences. We hope you enjoy playing FIFA Romania and have a great time with your favorite Romanian teams and players. Thank you for reading this article and please share it with your friends if you found it helpful.
-
FAQs
-
Here are some frequently asked questions about FIFA Romania:
-
-
Is FIFA Romania legal and safe?
-Yes, FIFA Romania is legal and safe. It is a fan-made modification of the original FIFA game that does not violate any copyrights or trademarks of EA Sports or FIFA. It is also free of viruses, malware, or spyware.
-
Can I play FIFA Romania on other devices besides PC?
-No, FIFA Romania is only available for PC devices. It is not compatible with other devices such as consoles, mobile phones, or tablets.
-
Can I play FIFA Romania with other mods or patches?
-No, FIFA Romania is not compatible with other mods or patches. It is a standalone mod that requires a clean installation of FIFA 14 or FIFA 15. If you have other mods or patches installed on your device, you need to uninstall them before installing FIFA Romania.
-
Can I update FIFA Romania to the latest version of FIFA?
-No, FIFA Romania is only compatible with FIFA 14 or FIFA 15 versions. It is not compatible with newer versions of FIFA such as FIFA 16 or FIFA 17. If you want to play the latest version of FIFA, you need to buy it separately from EA Sports or from a local store.
-
Can I contact the developers of FIFA Romania for support or feedback?
-Yes, you can contact the developers of FIFA Romania for support or feedback. You can visit their official website (www.fifaromania.net) and use the contact form or the forum to send them a message. You can also follow them on social media (Facebook, Twitter, YouTube) and send them a message there.
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md b/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md
deleted file mode 100644
index 7084106b2b12e4b1e7431156f28a69681977cbfd..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Soul Knight APK dan Nikmati Gameplay Menarik di Game RPG Ini 2022.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
Download Soul Knight Versi Terbaru 2022: A Guide for Pixel Action Shoot'em Up Fans
In this article, we will guide you on how to download Soul Knight versi terbaru 2022, the latest version of the game that was released on June 6th, 2023 for Android devices and May 15th, 2023 for iOS devices. We will also show you what's new in this version, how to play it, and why you should play it.
Soul Knight versi terbaru 2022 is the most updated version of the game that brings new features and improvements to enhance your gaming experience. Here are some of the highlights of this version:
-
-
New gift! Claim your gift from the mail: "Doodle Doo", the new skin of Rogue.
-
New hero! Meet Rosemary, a cute girl who loves gardening and has a special ability to grow plants.
-
New event! Travelogue of Rosemary Island, a new adventure mode where you can explore an island full of plants, animals, and secrets.
-
New weapons! More than 20 new weapons added, including Buddha's Palm, Iron Will Wavebreaker, Night Wanderer, etc.
-
New skins! More than 10 new skins added, including Bunny Girl of Assassin, Pirate of Knight, etc.
-
New pets! More than 10 new pets added, including Baby Dragon, Baby Phoenix, Baby Unicorn, etc.
-
New bosses! More than 5 new bosses added, including Giant Crab, Giant Octopus, Giant Shark, etc.
-
New achievements! More than 10 new achievements added, including Plant Lover, Island Explorer, Sea Monster Slayer, etc.
-
New languages! Added support for Arabic, Turkish, and Vietnamese languages.
-
Improved performance and stability. Fixed some bugs and glitches.
-
-
As you can see, Soul Knight versi terbaru 2022 is packed with new content and updates that will make you want to play it more than ever. But how do you download it? Let's find out in the next section.
-
How to Download Soul Knight Versi Terbaru 2022 for Android and iOS Devices?
-
Downloading Soul Knight versi terbaru 2022 is very easy and simple. Just follow these steps:
Click on the "Download" button on the top right corner of the screen.
-
Select your device type: Android or iOS.
-
You will be redirected to the Google Play Store or the App Store, depending on your device.
-
Click on the "Install" or "Get" button to download Soul Knight versi terbaru 2022 on your device.
-
Wait for the download and installation process to finish.
-
Launch the game and enjoy!
-
-
Alternatively, you can also scan the QR code on the website to download Soul Knight versi terbaru 2022 directly on your device. Just make sure you have a QR code scanner app installed on your device.
-
That's it! You have successfully downloaded Soul Knight versi terbaru 2022 on your device. Now, let's learn how to play it in the next section.
How to Play Soul Knight Versi Terbaru 2022?
-
Soul Knight versi terbaru 2022 is easy to learn but hard to master: it rewards skill, strategy, and a little luck. Here are some tips and tricks on how to play it and have fun:
-
Choose Your Hero and Weapon
-
One of the first things you need to do in Soul Knight versi terbaru 2022 is to choose your hero and weapon. There are more than 30 heroes to choose from, each with their own unique skills and stats. Some of the heroes are free, while others require gems or real money to unlock. You can also customize your hero's appearance with different skins.
-
-
Each hero has a default weapon that they start with, but you can also find and use other weapons in the game. There are more than 300 weapons to collect, ranging from swords, guns, bows, lasers, rockets, etc. Each weapon has its own attributes, such as damage, fire rate, accuracy, etc. You can also upgrade your weapons with gold or gems to make them more powerful.
-
Choosing the right hero and weapon for your play style and preference is crucial for your success in Soul Knight versi terbaru 2022. Experiment with different combinations and see what works best for you.
-
Explore Randomly Generated Dungeons
-
The main mode of Soul Knight versi terbaru 2022 is the dungeon mode, where you have to explore randomly generated dungeons and fight against various enemies and bosses. Each dungeon has a theme, such as forest, desert, ice, etc., and consists of several floors. Each floor has a number of rooms, where you can find enemies, chests, shops, statues, etc.
-
Your goal is to clear all the rooms on each floor and reach the portal that leads to the next floor. Along the way, you can collect coins, gems, health potions, ammo boxes, etc., that will help you survive and progress. You can also find secrets and hidden rooms that may contain special items or surprises.
-
The dungeons are randomly generated every time you play, so you never know what to expect. This adds an element of unpredictability and replayability to Soul Knight versi terbaru 2022. You have to be prepared for anything and adapt to different situations.
-
Use Skills and Buffs Wisely
-
Another important aspect of Soul Knight versi terbaru 2022 is the use of skills and buffs. Skills are special abilities that each hero has and can use in combat. Skills have a cooldown time before they can be used again. Some skills are offensive, such as shooting projectiles or summoning allies, while others are defensive, such as healing or shielding.
-
Buffs are temporary effects that enhance your hero's performance in some way. Buffs can be obtained from statues, plants, potions, etc., and can affect your stats, such as health, damage, speed, etc., or give you special abilities, such as immunity, reflection, regeneration, etc.
-
Using skills and buffs wisely can make a huge difference in your gameplay. You have to know when to use them and how to combine them for maximum effect. You also have to be careful not to use them too often or too recklessly, as they may have drawbacks or side effects.
Team Up with Friends Online or Offline
-
One of the best features of Soul Knight versi terbaru 2022 is the multiplayer mode, where you can team up with your friends online or offline and play together. You can join or create a room online and invite up to three other players to join you. You can also use the local multiplayer mode and connect up to four devices via Wi-Fi or Bluetooth.
-
Playing with your friends can make Soul Knight versi terbaru 2022 more fun and exciting, as you can cooperate and coordinate with each other, share items and resources, revive each other, and compete for the best score. You can also chat with your friends using the in-game voice or text chat feature.
-
However, playing with your friends can also make Soul Knight versi terbaru 2022 more challenging and chaotic, as you have to deal with more enemies and bosses, friendly fire, limited screen space, and potential lag or connection issues. You also have to be careful not to steal or sabotage each other's items or buffs.
-
Whether you prefer to play solo or with your friends, Soul Knight versi terbaru 2022 has something for everyone. You can choose the mode that suits your mood and preference.
-
Why You Should Play Soul Knight Versi Terbaru 2022?
-
By now, you might be wondering why you should play Soul Knight versi terbaru 2022. Well, there are many reasons why this game is worth your time and attention. Here are some of them:
-
Fun and Engaging Gameplay
-
Soul Knight versi terbaru 2022 offers a fun and engaging gameplay experience for pixel action shoot'em up fans. The game is fast-paced, smooth, and responsive, with easy and intuitive controls. The game is also varied, unpredictable, and replayable, with different heroes, weapons, dungeons, enemies, bosses, skills, buffs, etc. The game is also challenging, rewarding, and addictive, with different difficulty levels, achievements, leaderboards, etc.
-
If you are looking for a game that will keep you entertained and hooked for hours, Soul Knight versi terbaru 2022 is the game for you.
-
Beautiful Pixel Art and Music
-
Soul Knight versi terbaru 2022 showcases a beautiful pixel art style and music that enhance the game's atmosphere. The game has a retro and nostalgic feel, with colorful and detailed graphics that capture the essence of pixel art. The game also has a catchy and upbeat soundtrack that matches the mood and theme of the game. The game also has a variety of sound effects that add to the immersion and realism of the game.
-
If you are a fan of pixel art and music, Soul Knight versi terbaru 2022 is the game for you.
Challenging but Rewarding Difficulty
-
Soul Knight versi terbaru 2022 offers a challenging but rewarding difficulty level that keeps you hooked and motivated. The game is not easy, as you have to face countless enemies and bosses that will test your skills and reflexes. The game is also not forgiving, as you have to start over from the beginning if you die. The game is also not predictable, as you have to deal with random dungeons and events that will change your gameplay.
-
However, the game is also not impossible, as you have access to various resources and tools that will help you overcome the challenges. The game is also not boring, as you have different goals and rewards that will keep you interested and satisfied. The game is also not repetitive, as you have different strategies and styles that will keep you creative and curious.
-
If you are looking for a game that will challenge you but also reward you, Soul Knight versi terbaru 2022 is the game for you.
-
Diverse Game Modes and Features
-
Soul Knight versi terbaru 2022 offers a diverse range of game modes and features that cater to different preferences and tastes. The game has a main dungeon mode, where you can explore different dungeons and fight against various enemies and bosses. The game also has an adventure mode, where you can embark on different quests and stories. The game also has a multiplayer mode, where you can play with your friends online or offline. The game also has a workshop mode, where you can create your own weapons and dungeons. The game also has a garden mode, where you can grow plants and harvest resources.
-
If you are looking for a game that will offer you different options and possibilities, Soul Knight versi terbaru 2022 is the game for you.
-
Conclusion
-
Soul Knight versi terbaru 2022 is a pixel action shoot'em up game that features extremely easy and intuitive control, super smooth and enjoyable gameplay, mixed with rogue-like elements, that will get you hooked from the very first run!
-
In this article, we have guided you on how to download Soul Knight versi terbaru 2022, the latest version of the game that brings new features and improvements to enhance your gaming experience. We have also shown you what's new in this version, how to play it, and why you should play it.
-
Now that you know everything about Soul Knight versi terbaru 2022, what are you waiting for? Download it now and enjoy the pixel action shoot'em up adventure of your life!
-
FAQs
-
Here are some frequently asked questions and answers about Soul Knight versi terbaru 2022:
-
-
Is Soul Knight versi terbaru 2022 free to play?
-
Yes, Soul Knight versi terbaru 2022 is free to play, but it contains some in-app purchases that can enhance your gameplay or unlock some premium content.
-
Is Soul Knight versi terbaru 2022 compatible with my device?
-
Soul Knight versi terbaru 2022 is compatible with Android devices running Android 4.4 or higher, and iOS devices running iOS 9.0 or higher. It is also compatible with Nintendo Switch devices.
-
How can I save my progress in Soul Knight versi terbaru 2022?
-
Soul Knight versi terbaru 2022 supports cloud saving, which means you can save your progress online and access it from any device. You just need to log in with your Google Play Games or Game Center account in the game settings.
-
How can I contact the developers of Soul Knight versi terbaru 2022?
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py b/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py
deleted file mode 100644
index b1d2619fd9a67b37bea55bc91776afbcb3e50558..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/gradio_demo.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import torch, uuid
-import os, sys, shutil, platform
-from src.facerender.pirender_animate import AnimateFromCoeff_PIRender
-from src.utils.preprocess import CropAndExtract
-from src.test_audio2coeff import Audio2Coeff
-from src.facerender.animate import AnimateFromCoeff
-from src.generate_batch import get_data
-from src.generate_facerender_batch import get_facerender_data
-
-from src.utils.init_path import init_path
-
-from pydub import AudioSegment
-
-
-def mp3_to_wav(mp3_filename,wav_filename,frame_rate):
- mp3_file = AudioSegment.from_file(file=mp3_filename)
- mp3_file.set_frame_rate(frame_rate).export(wav_filename,format="wav")
-
-
-class SadTalker():
-
- def __init__(self, checkpoint_path='checkpoints', config_path='src/config', lazy_load=False):
-
- if torch.cuda.is_available():
- device = "cuda"
- elif platform.system() == 'Darwin': # macos
- device = "mps"
- else:
- device = "cpu"
-
- self.device = device
-
- os.environ['TORCH_HOME']= checkpoint_path
-
- self.checkpoint_path = checkpoint_path
- self.config_path = config_path
-
-
- def test(self, source_image, driven_audio, preprocess='crop',
- still_mode=False, use_enhancer=False, batch_size=1, size=256,
- pose_style = 0,
- facerender='facevid2vid',
- exp_scale=1.0,
- use_ref_video = False,
- ref_video = None,
- ref_info = None,
- use_idle_mode = False,
- length_of_audio = 0, use_blink=True,
- result_dir='./results/'):
-
- self.sadtalker_paths = init_path(self.checkpoint_path, self.config_path, size, False, preprocess)
- print(self.sadtalker_paths)
-
- self.audio_to_coeff = Audio2Coeff(self.sadtalker_paths, self.device)
- self.preprocess_model = CropAndExtract(self.sadtalker_paths, self.device)
-
- if facerender == 'facevid2vid' and self.device != 'mps':
- self.animate_from_coeff = AnimateFromCoeff(self.sadtalker_paths, self.device)
- elif facerender == 'pirender' or self.device == 'mps':
- self.animate_from_coeff = AnimateFromCoeff_PIRender(self.sadtalker_paths, self.device)
- facerender = 'pirender'
- else:
- raise(RuntimeError('Unknown model: {}'.format(facerender)))
-
-
- time_tag = str(uuid.uuid4())
- save_dir = os.path.join(result_dir, time_tag)
- os.makedirs(save_dir, exist_ok=True)
-
- input_dir = os.path.join(save_dir, 'input')
- os.makedirs(input_dir, exist_ok=True)
-
- print(source_image)
- pic_path = os.path.join(input_dir, os.path.basename(source_image))
- shutil.move(source_image, input_dir)
-
- if driven_audio is not None and os.path.isfile(driven_audio):
- audio_path = os.path.join(input_dir, os.path.basename(driven_audio))
-
- #### mp3 to wav
- if '.mp3' in audio_path:
- mp3_to_wav(driven_audio, audio_path.replace('.mp3', '.wav'), 16000)
- audio_path = audio_path.replace('.mp3', '.wav')
- else:
- shutil.move(driven_audio, input_dir)
-
- elif use_idle_mode:
- audio_path = os.path.join(input_dir, 'idlemode_'+str(length_of_audio)+'.wav') ## generate audio from this new audio_path
- from pydub import AudioSegment
- one_sec_segment = AudioSegment.silent(duration=1000*length_of_audio) #duration in milliseconds
- one_sec_segment.export(audio_path, format="wav")
- else:
- print(use_ref_video, ref_info)
- assert use_ref_video == True and ref_info == 'all'
-
- if use_ref_video and ref_info == 'all': # full ref mode
- ref_video_videoname = os.path.basename(ref_video)
- audio_path = os.path.join(save_dir, ref_video_videoname+'.wav')
- print('new audiopath:',audio_path)
- # if ref_video contains audio, set the audio from ref_video.
- cmd = r"ffmpeg -y -hide_banner -loglevel error -i %s %s"%(ref_video, audio_path)
- os.system(cmd)
-
- os.makedirs(save_dir, exist_ok=True)
-
- #crop image and extract 3dmm from image
- first_frame_dir = os.path.join(save_dir, 'first_frame_dir')
- os.makedirs(first_frame_dir, exist_ok=True)
- first_coeff_path, crop_pic_path, crop_info = self.preprocess_model.generate(pic_path, first_frame_dir, preprocess, True, size)
-
- if first_coeff_path is None:
- raise AttributeError("No face is detected")
-
- if use_ref_video:
-            print('using ref video for generation')
- ref_video_videoname = os.path.splitext(os.path.split(ref_video)[-1])[0]
- ref_video_frame_dir = os.path.join(save_dir, ref_video_videoname)
- os.makedirs(ref_video_frame_dir, exist_ok=True)
- print('3DMM Extraction for the reference video providing pose')
- ref_video_coeff_path, _, _ = self.preprocess_model.generate(ref_video, ref_video_frame_dir, preprocess, source_image_flag=False)
- else:
- ref_video_coeff_path = None
-
- if use_ref_video:
- if ref_info == 'pose':
- ref_pose_coeff_path = ref_video_coeff_path
- ref_eyeblink_coeff_path = None
- elif ref_info == 'blink':
- ref_pose_coeff_path = None
- ref_eyeblink_coeff_path = ref_video_coeff_path
- elif ref_info == 'pose+blink':
- ref_pose_coeff_path = ref_video_coeff_path
- ref_eyeblink_coeff_path = ref_video_coeff_path
- elif ref_info == 'all':
- ref_pose_coeff_path = None
- ref_eyeblink_coeff_path = None
- else:
-                raise ValueError('error in refinfo')
- else:
- ref_pose_coeff_path = None
- ref_eyeblink_coeff_path = None
-
-        # audio2coeff
- if use_ref_video and ref_info == 'all':
- coeff_path = ref_video_coeff_path # self.audio_to_coeff.generate(batch, save_dir, pose_style, ref_pose_coeff_path)
- else:
- batch = get_data(first_coeff_path, audio_path, self.device, ref_eyeblink_coeff_path=ref_eyeblink_coeff_path, still=still_mode, \
- idlemode=use_idle_mode, length_of_audio=length_of_audio, use_blink=use_blink) # longer audio?
- coeff_path = self.audio_to_coeff.generate(batch, save_dir, pose_style, ref_pose_coeff_path)
-
- #coeff2video
- data = get_facerender_data(coeff_path, crop_pic_path, first_coeff_path, audio_path, batch_size, still_mode=still_mode, \
- preprocess=preprocess, size=size, expression_scale = exp_scale, facemodel=facerender)
- return_path = self.animate_from_coeff.generate(data, save_dir, pic_path, crop_info, enhancer='gfpgan' if use_enhancer else None, preprocess=preprocess, img_size=size)
- video_name = data['video_name']
- print(f'The generated video is named {video_name} in {save_dir}')
-
- del self.preprocess_model
- del self.audio_to_coeff
- del self.animate_from_coeff
-
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- torch.cuda.synchronize()
-
- import gc; gc.collect()
-
- return return_path
-
-
\ No newline at end of file
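A minimal usage sketch for the SadTalker wrapper above, assuming the module is importable as src.gradio_demo and that the checkpoint, config, image and audio paths (all hypothetical here) exist locally:

    # Hypothetical driver for the SadTalker class defined above.
    from src.gradio_demo import SadTalker

    sad_talker = SadTalker(checkpoint_path='checkpoints', config_path='src/config')
    video_path = sad_talker.test(
        source_image='inputs/portrait.png',   # hypothetical face image; note test() moves its inputs into the result directory
        driven_audio='inputs/speech.wav',     # hypothetical speech clip
        preprocess='crop',                    # crop the face region before animating
        use_enhancer=True,                    # run GFPGAN on the rendered frames
        batch_size=2,
        size=256,
        result_dir='./results/',
    )
    print('talking-head video written to:', video_path)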
diff --git a/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py b/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py
deleted file mode 100644
index 874c09c5e98bee1ace64408aa31ec547dfe695a4..0000000000000000000000000000000000000000
--- a/spaces/feng2022/styleganhuman_copy/torch_utils/op_edit/upfirdn2d.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import os
-
-import torch
-from torch.nn import functional as F
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-
-module_path = os.path.dirname(__file__)
-upfirdn2d_op = load(
- "upfirdn2d",
- sources=[
- os.path.join(module_path, "upfirdn2d.cpp"),
- os.path.join(module_path, "upfirdn2d_kernel.cu"),
- ],
-)
-
-
-class UpFirDn2dBackward(Function):
- @staticmethod
- def forward(
- ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
- ):
-
- up_x, up_y = up
- down_x, down_y = down
- g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
-
- grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
-
- grad_input = upfirdn2d_op.upfirdn2d(
- grad_output,
- grad_kernel,
- down_x,
- down_y,
- up_x,
- up_y,
- g_pad_x0,
- g_pad_x1,
- g_pad_y0,
- g_pad_y1,
- )
- grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
-
- ctx.save_for_backward(kernel)
-
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- ctx.up_x = up_x
- ctx.up_y = up_y
- ctx.down_x = down_x
- ctx.down_y = down_y
- ctx.pad_x0 = pad_x0
- ctx.pad_x1 = pad_x1
- ctx.pad_y0 = pad_y0
- ctx.pad_y1 = pad_y1
- ctx.in_size = in_size
- ctx.out_size = out_size
-
- return grad_input
-
- @staticmethod
- def backward(ctx, gradgrad_input):
- (kernel,) = ctx.saved_tensors
-
- gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
-
- gradgrad_out = upfirdn2d_op.upfirdn2d(
- gradgrad_input,
- kernel,
- ctx.up_x,
- ctx.up_y,
- ctx.down_x,
- ctx.down_y,
- ctx.pad_x0,
- ctx.pad_x1,
- ctx.pad_y0,
- ctx.pad_y1,
- )
- # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], ctx.out_size[1], ctx.in_size[3])
- gradgrad_out = gradgrad_out.view(
- ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
- )
-
- return gradgrad_out, None, None, None, None, None, None, None, None
-
-
-class UpFirDn2d(Function):
- @staticmethod
- def forward(ctx, input, kernel, up, down, pad):
- up_x, up_y = up
- down_x, down_y = down
- pad_x0, pad_x1, pad_y0, pad_y1 = pad
-
- kernel_h, kernel_w = kernel.shape
- batch, channel, in_h, in_w = input.shape
- ctx.in_size = input.shape
-
- input = input.reshape(-1, in_h, in_w, 1)
-
- ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
- ctx.out_size = (out_h, out_w)
-
- ctx.up = (up_x, up_y)
- ctx.down = (down_x, down_y)
- ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
-
- g_pad_x0 = kernel_w - pad_x0 - 1
- g_pad_y0 = kernel_h - pad_y0 - 1
- g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
- g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
-
- ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
-
- out = upfirdn2d_op.upfirdn2d(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
- )
- # out = out.view(major, out_h, out_w, minor)
- out = out.view(-1, channel, out_h, out_w)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- kernel, grad_kernel = ctx.saved_tensors
-
- grad_input = UpFirDn2dBackward.apply(
- grad_output,
- kernel,
- grad_kernel,
- ctx.up,
- ctx.down,
- ctx.pad,
- ctx.g_pad,
- ctx.in_size,
- ctx.out_size,
- )
-
- return grad_input, None, None, None, None
-
-
-def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
- if input.device.type == "cpu":
- out = upfirdn2d_native(
- input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]
- )
-
- else:
- out = UpFirDn2d.apply(
- input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
- )
-
- return out
-
-
-def upfirdn2d_native(
- input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
-):
- _, channel, in_h, in_w = input.shape
- input = input.reshape(-1, in_h, in_w, 1)
-
- _, in_h, in_w, minor = input.shape
- kernel_h, kernel_w = kernel.shape
-
- out = input.view(-1, in_h, 1, in_w, 1, minor)
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
-
- out = F.pad(
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
- )
- out = out[
- :,
- max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
- max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
- :,
- ]
-
- out = out.permute(0, 3, 1, 2)
- out = out.reshape(
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
- )
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
- out = F.conv2d(out, w)
- out = out.reshape(
- -1,
- minor,
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
- )
- out = out.permute(0, 2, 3, 1)
- out = out[:, ::down_y, ::down_x, :]
-
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
-
- return out.view(-1, channel, out_h, out_w)
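A rough sketch of calling the upfirdn2d wrapper above on a CPU tensor, which routes to the pure-PyTorch upfirdn2d_native path. This assumes the module imports successfully, i.e. the companion upfirdn2d.cpp/upfirdn2d_kernel.cu sources are present so the JIT extension can be built at import time:

    import torch
    # Module path assumed from the file location above.
    from torch_utils.op_edit.upfirdn2d import upfirdn2d

    x = torch.randn(1, 3, 8, 8)        # NCHW input on CPU -> native PyTorch branch
    k = torch.ones(3, 3) / 9.0         # simple 3x3 box-blur kernel
    y = upfirdn2d(x, k, up=2, down=1, pad=(1, 1))
    # out_h = (8 * 2 + 1 + 1 - 3) // 1 + 1 = 16, and likewise for the width
    print(y.shape)                     # torch.Size([1, 3, 16, 16])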
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md
deleted file mode 100644
index 488effa0c9440f4e214102980665781a62ba7059..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/content-disposition/HISTORY.md
+++ /dev/null
@@ -1,60 +0,0 @@
-0.5.4 / 2021-12-10
-==================
-
- * deps: safe-buffer@5.2.1
-
-0.5.3 / 2018-12-17
-==================
-
- * Use `safe-buffer` for improved Buffer API
-
-0.5.2 / 2016-12-08
-==================
-
- * Fix `parse` to accept any linear whitespace character
-
-0.5.1 / 2016-01-17
-==================
-
- * perf: enable strict mode
-
-0.5.0 / 2014-10-11
-==================
-
- * Add `parse` function
-
-0.4.0 / 2014-09-21
-==================
-
- * Expand non-Unicode `filename` to the full ISO-8859-1 charset
-
-0.3.0 / 2014-09-20
-==================
-
- * Add `fallback` option
- * Add `type` option
-
-0.2.0 / 2014-09-19
-==================
-
- * Reduce ambiguity of file names with hex escape in buggy browsers
-
-0.1.2 / 2014-09-19
-==================
-
- * Fix periodic invalid Unicode filename header
-
-0.1.1 / 2014-09-19
-==================
-
- * Fix invalid characters appearing in `filename*` parameter
-
-0.1.0 / 2014-09-18
-==================
-
- * Make the `filename` argument optional
-
-0.0.0 / 2014-09-18
-==================
-
- * Initial release
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js
deleted file mode 100644
index 9191b248baa14b9866da65ccf638b96b71c046e7..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/has-symbols/test/shams/get-own-property-symbols.js
+++ /dev/null
@@ -1,28 +0,0 @@
-'use strict';
-
-var test = require('tape');
-
-if (typeof Symbol === 'function' && typeof Symbol() === 'symbol') {
- test('has native Symbol support', function (t) {
- t.equal(typeof Symbol, 'function');
- t.equal(typeof Symbol(), 'symbol');
- t.end();
- });
- return;
-}
-
-var hasSymbols = require('../../shams');
-
-test('polyfilled Symbols', function (t) {
- /* eslint-disable global-require */
- t.equal(hasSymbols(), false, 'hasSymbols is false before polyfilling');
-
- require('get-own-property-symbols');
-
- require('../tests')(t);
-
- var hasSymbolsAfter = hasSymbols();
- t.equal(hasSymbolsAfter, true, 'hasSymbols is true after polyfilling');
- /* eslint-enable global-require */
- t.end();
-});
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py
deleted file mode 100644
index 9879154227f640c262853b92c219461c6f67ee8e..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/video/io.py
+++ /dev/null
@@ -1,318 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-from collections import OrderedDict
-
-import cv2
-from cv2 import (CAP_PROP_FOURCC, CAP_PROP_FPS, CAP_PROP_FRAME_COUNT,
- CAP_PROP_FRAME_HEIGHT, CAP_PROP_FRAME_WIDTH,
- CAP_PROP_POS_FRAMES, VideoWriter_fourcc)
-
-from annotator.uniformer.mmcv.utils import (check_file_exist, mkdir_or_exist, scandir,
- track_progress)
-
-
-class Cache:
-
- def __init__(self, capacity):
- self._cache = OrderedDict()
- self._capacity = int(capacity)
- if capacity <= 0:
- raise ValueError('capacity must be a positive integer')
-
- @property
- def capacity(self):
- return self._capacity
-
- @property
- def size(self):
- return len(self._cache)
-
- def put(self, key, val):
- if key in self._cache:
- return
- if len(self._cache) >= self.capacity:
- self._cache.popitem(last=False)
- self._cache[key] = val
-
- def get(self, key, default=None):
- val = self._cache[key] if key in self._cache else default
- return val
-
-
-class VideoReader:
- """Video class with similar usage to a list object.
-
-    This video wrapper class provides convenient APIs to access frames.
-    OpenCV's VideoCapture class has an issue where jumping to a certain
-    frame may be inaccurate; this class fixes it by checking the position
-    after each jump.
-    A cache is used when decoding videos, so if the same frame is visited
-    a second time it does not need to be decoded again.
-
- :Example:
-
- >>> import annotator.uniformer.mmcv as mmcv
- >>> v = mmcv.VideoReader('sample.mp4')
- >>> len(v) # get the total frame number with `len()`
- 120
- >>> for img in v: # v is iterable
- >>> mmcv.imshow(img)
- >>> v[5] # get the 6th frame
- """
-
- def __init__(self, filename, cache_capacity=10):
- # Check whether the video path is a url
- if not filename.startswith(('https://', 'http://')):
- check_file_exist(filename, 'Video file not found: ' + filename)
- self._vcap = cv2.VideoCapture(filename)
- assert cache_capacity > 0
- self._cache = Cache(cache_capacity)
- self._position = 0
- # get basic info
- self._width = int(self._vcap.get(CAP_PROP_FRAME_WIDTH))
- self._height = int(self._vcap.get(CAP_PROP_FRAME_HEIGHT))
- self._fps = self._vcap.get(CAP_PROP_FPS)
- self._frame_cnt = int(self._vcap.get(CAP_PROP_FRAME_COUNT))
- self._fourcc = self._vcap.get(CAP_PROP_FOURCC)
-
- @property
- def vcap(self):
- """:obj:`cv2.VideoCapture`: The raw VideoCapture object."""
- return self._vcap
-
- @property
- def opened(self):
- """bool: Indicate whether the video is opened."""
- return self._vcap.isOpened()
-
- @property
- def width(self):
- """int: Width of video frames."""
- return self._width
-
- @property
- def height(self):
- """int: Height of video frames."""
- return self._height
-
- @property
- def resolution(self):
- """tuple: Video resolution (width, height)."""
- return (self._width, self._height)
-
- @property
- def fps(self):
- """float: FPS of the video."""
- return self._fps
-
- @property
- def frame_cnt(self):
- """int: Total frames of the video."""
- return self._frame_cnt
-
- @property
- def fourcc(self):
- """str: "Four character code" of the video."""
- return self._fourcc
-
- @property
- def position(self):
- """int: Current cursor position, indicating frame decoded."""
- return self._position
-
- def _get_real_position(self):
- return int(round(self._vcap.get(CAP_PROP_POS_FRAMES)))
-
- def _set_real_position(self, frame_id):
- self._vcap.set(CAP_PROP_POS_FRAMES, frame_id)
- pos = self._get_real_position()
- for _ in range(frame_id - pos):
- self._vcap.read()
- self._position = frame_id
-
- def read(self):
- """Read the next frame.
-
-        If the next frame has been decoded before and is in the cache, it is
-        returned directly; otherwise it is decoded, cached and returned.
-
- Returns:
- ndarray or None: Return the frame if successful, otherwise None.
- """
- # pos = self._position
- if self._cache:
- img = self._cache.get(self._position)
- if img is not None:
- ret = True
- else:
- if self._position != self._get_real_position():
- self._set_real_position(self._position)
- ret, img = self._vcap.read()
- if ret:
- self._cache.put(self._position, img)
- else:
- ret, img = self._vcap.read()
- if ret:
- self._position += 1
- return img
-
- def get_frame(self, frame_id):
- """Get frame by index.
-
- Args:
- frame_id (int): Index of the expected frame, 0-based.
-
- Returns:
- ndarray or None: Return the frame if successful, otherwise None.
- """
- if frame_id < 0 or frame_id >= self._frame_cnt:
- raise IndexError(
- f'"frame_id" must be between 0 and {self._frame_cnt - 1}')
- if frame_id == self._position:
- return self.read()
- if self._cache:
- img = self._cache.get(frame_id)
- if img is not None:
- self._position = frame_id + 1
- return img
- self._set_real_position(frame_id)
- ret, img = self._vcap.read()
- if ret:
- if self._cache:
- self._cache.put(self._position, img)
- self._position += 1
- return img
-
- def current_frame(self):
- """Get the current frame (frame that is just visited).
-
- Returns:
- ndarray or None: If the video is fresh, return None, otherwise
- return the frame.
- """
- if self._position == 0:
- return None
- return self._cache.get(self._position - 1)
-
- def cvt2frames(self,
- frame_dir,
- file_start=0,
- filename_tmpl='{:06d}.jpg',
- start=0,
- max_num=0,
- show_progress=True):
- """Convert a video to frame images.
-
- Args:
- frame_dir (str): Output directory to store all the frame images.
- file_start (int): Filenames will start from the specified number.
- filename_tmpl (str): Filename template with the index as the
- placeholder.
- start (int): The starting frame index.
- max_num (int): Maximum number of frames to be written.
- show_progress (bool): Whether to show a progress bar.
- """
- mkdir_or_exist(frame_dir)
- if max_num == 0:
- task_num = self.frame_cnt - start
- else:
- task_num = min(self.frame_cnt - start, max_num)
- if task_num <= 0:
- raise ValueError('start must be less than total frame number')
- if start > 0:
- self._set_real_position(start)
-
- def write_frame(file_idx):
- img = self.read()
- if img is None:
- return
- filename = osp.join(frame_dir, filename_tmpl.format(file_idx))
- cv2.imwrite(filename, img)
-
- if show_progress:
- track_progress(write_frame, range(file_start,
- file_start + task_num))
- else:
- for i in range(task_num):
- write_frame(file_start + i)
-
- def __len__(self):
- return self.frame_cnt
-
- def __getitem__(self, index):
- if isinstance(index, slice):
- return [
- self.get_frame(i)
- for i in range(*index.indices(self.frame_cnt))
- ]
- # support negative indexing
- if index < 0:
- index += self.frame_cnt
- if index < 0:
- raise IndexError('index out of range')
- return self.get_frame(index)
-
- def __iter__(self):
- self._set_real_position(0)
- return self
-
- def __next__(self):
- img = self.read()
- if img is not None:
- return img
- else:
- raise StopIteration
-
- next = __next__
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- self._vcap.release()
-
-
-def frames2video(frame_dir,
- video_file,
- fps=30,
- fourcc='XVID',
- filename_tmpl='{:06d}.jpg',
- start=0,
- end=0,
- show_progress=True):
- """Read the frame images from a directory and join them as a video.
-
- Args:
- frame_dir (str): The directory containing video frames.
- video_file (str): Output filename.
- fps (float): FPS of the output video.
- fourcc (str): Fourcc of the output video, this should be compatible
- with the output file type.
- filename_tmpl (str): Filename template with the index as the variable.
- start (int): Starting frame index.
- end (int): Ending frame index.
- show_progress (bool): Whether to show a progress bar.
- """
- if end == 0:
- ext = filename_tmpl.split('.')[-1]
- end = len([name for name in scandir(frame_dir, ext)])
- first_file = osp.join(frame_dir, filename_tmpl.format(start))
- check_file_exist(first_file, 'The start frame not found: ' + first_file)
- img = cv2.imread(first_file)
- height, width = img.shape[:2]
- resolution = (width, height)
- vwriter = cv2.VideoWriter(video_file, VideoWriter_fourcc(*fourcc), fps,
- resolution)
-
- def write_frame(file_idx):
- filename = osp.join(frame_dir, filename_tmpl.format(file_idx))
- img = cv2.imread(filename)
- vwriter.write(img)
-
- if show_progress:
- track_progress(write_frame, range(start, end))
- else:
- for i in range(start, end):
- write_frame(i)
- vwriter.release()
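A short sketch, following the docstrings above, of reading frames with VideoReader and reassembling them with frames2video; the file names are hypothetical:

    # Hypothetical round trip built on the helpers above: video -> frames -> video.
    from annotator.uniformer.mmcv.video.io import VideoReader, frames2video

    video = VideoReader('sample.mp4')                  # hypothetical input file
    print(len(video), video.fps, video.resolution)     # frame count, FPS, (width, height)

    # Dump frames as 000000.jpg, 000001.jpg, ... into ./frames
    video.cvt2frames('./frames', filename_tmpl='{:06d}.jpg')

    # Rebuild a video from the extracted frames at the original frame rate
    frames2video('./frames', 'rebuilt.avi', fps=video.fps, fourcc='XVID')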
diff --git a/spaces/giswqs/solara-demo/pages/00_home.py b/spaces/giswqs/solara-demo/pages/00_home.py
deleted file mode 100644
index 7f177ca6f04afd20334d0efdee1d00b8539b0ef4..0000000000000000000000000000000000000000
--- a/spaces/giswqs/solara-demo/pages/00_home.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import solara
-
-@solara.component
-def Page():
-
- markdown = """
- ## Solara for Geospatial Applications
-
- ### Introduction
-
- **A collection of [Solara](https://github.com/widgetti/solara) web apps for geospatial applications.**
-
- Just a proof-of-concept for now. Not all features are working yet. More features will be added in the future. Click on the menu above to see the other pages.
-
- - Web App:
- - GitHub:
- - Hugging Face:
-
- ### Demos
-
- 
-
- """
-
- solara.Markdown(markdown)
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md b/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md
deleted file mode 100644
index 6901f8e5c45453f2a9dc14e5df12df61b900a0f2..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Izotope Ozone Crack Mac.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
For producers and music makers who have been using plugins to shape the sound of their tracks, Ozone 9 promises a lot of improvements. Ozone 9 gives you the ability to change the sound of your instruments by adjusting their balance, panning, resonances, and EQ. Adjust every instrument and every mic with an intuitive interface, whether you are recording, mixing, mastering, or simply sound designing.
Ozone 9 also includes a new Balance plugin, which lets you fine-tune the balance of your stereo mix or render a mono balance for your mix. Set up your mix with a single click and balance to your heart's content. A new spectrum analyzer and sender GUI are also available.
-
Ozone 9 also includes more than 12 Ozone-exclusive mastering processors. Use the Master Assistant to fine-tune your EQ and gain in real time. Add harmonics, saturation, and noise reduction to your sound with the Maximizer and peaking plugins. The new Tonal Balance Control is perfect for balancing out vocals, drums, and low-frequency effects. You can even use a variable target to automatically match the balance of a reference track.
-
Ozone 9 also includes a new sender workflow, a new way to send your mixes to Ozone. Use the sender to send a stereo mix as mono, or send a mono mix as a stereo mix with new panning. You can also send a kick and bass track separately and mix them down to a mono file.
-
Ozone 9 also includes a new spectral analyzer plugin that lets you fine-tune the sound of your stereo mix or render a mono balance for your mix. Finally, Ozone 9 offers a new integrated workflow for mixing and mastering: mix any combination of Ozone and other plugins, or route individual plugins and buses through your DAW's mixer channels. This workflow lets you spend more time creating and less time learning the ins and outs of every VST.
-
-
\ No newline at end of file
diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py b/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py
deleted file mode 100644
index cf6f3d09ba41a46ad4d7968fb3c286dd53d15c38..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/examples/speech_recognition/utils/wer_utils.py
+++ /dev/null
@@ -1,381 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import re
-from collections import deque
-from enum import Enum
-
-import numpy as np
-
-
-"""
- Utility modules for computation of Word Error Rate,
- Alignments, as well as more granular metrics like
-    deletions, insertions and substitutions.
-"""
-
-
-class Code(Enum):
- match = 1
- substitution = 2
- insertion = 3
- deletion = 4
-
-
-class Token(object):
- def __init__(self, lbl="", st=np.nan, en=np.nan):
- if np.isnan(st):
- self.label, self.start, self.end = "", 0.0, 0.0
- else:
- self.label, self.start, self.end = lbl, st, en
-
-
-class AlignmentResult(object):
- def __init__(self, refs, hyps, codes, score):
- self.refs = refs # std::deque
- self.hyps = hyps # std::deque
- self.codes = codes # std::deque
- self.score = score # float
-
-
-def coordinate_to_offset(row, col, ncols):
- return int(row * ncols + col)
-
-
-def offset_to_row(offset, ncols):
- return int(offset / ncols)
-
-
-def offset_to_col(offset, ncols):
- return int(offset % ncols)
-
-
-def trimWhitespace(str):
- return re.sub(" +", " ", re.sub(" *$", "", re.sub("^ *", "", str)))
-
-
-def str2toks(str):
- pieces = trimWhitespace(str).split(" ")
- toks = []
- for p in pieces:
- toks.append(Token(p, 0.0, 0.0))
- return toks
-
-
-class EditDistance(object):
- def __init__(self, time_mediated):
- self.time_mediated_ = time_mediated
- self.scores_ = np.nan # Eigen::Matrix
- self.backtraces_ = (
- np.nan
- ) # Eigen::Matrix backtraces_;
- self.confusion_pairs_ = {}
-
- def cost(self, ref, hyp, code):
- if self.time_mediated_:
- if code == Code.match:
- return abs(ref.start - hyp.start) + abs(ref.end - hyp.end)
- elif code == Code.insertion:
- return hyp.end - hyp.start
- elif code == Code.deletion:
- return ref.end - ref.start
- else: # substitution
- return abs(ref.start - hyp.start) + abs(ref.end - hyp.end) + 0.1
- else:
- if code == Code.match:
- return 0
- elif code == Code.insertion or code == Code.deletion:
- return 3
- else: # substitution
- return 4
-
- def get_result(self, refs, hyps):
- res = AlignmentResult(refs=deque(), hyps=deque(), codes=deque(), score=np.nan)
-
- num_rows, num_cols = self.scores_.shape
- res.score = self.scores_[num_rows - 1, num_cols - 1]
-
- curr_offset = coordinate_to_offset(num_rows - 1, num_cols - 1, num_cols)
-
- while curr_offset != 0:
- curr_row = offset_to_row(curr_offset, num_cols)
- curr_col = offset_to_col(curr_offset, num_cols)
-
- prev_offset = self.backtraces_[curr_row, curr_col]
-
- prev_row = offset_to_row(prev_offset, num_cols)
- prev_col = offset_to_col(prev_offset, num_cols)
-
- res.refs.appendleft(curr_row - 1) # Note: this was .push_front() in C++
- res.hyps.appendleft(curr_col - 1)
- if curr_row - 1 == prev_row and curr_col == prev_col:
- res.codes.appendleft(Code.deletion)
- elif curr_row == prev_row and curr_col - 1 == prev_col:
- res.codes.appendleft(Code.insertion)
- else:
- # assert(curr_row - 1 == prev_row and curr_col - 1 == prev_col)
- ref_str = refs[res.refs[0]].label
- hyp_str = hyps[res.hyps[0]].label
-
- if ref_str == hyp_str:
- res.codes.appendleft(Code.match)
- else:
- res.codes.appendleft(Code.substitution)
-
- confusion_pair = "%s -> %s" % (ref_str, hyp_str)
- if confusion_pair not in self.confusion_pairs_:
- self.confusion_pairs_[confusion_pair] = 1
- else:
- self.confusion_pairs_[confusion_pair] += 1
-
- curr_offset = prev_offset
-
- return res
-
- def align(self, refs, hyps):
- if len(refs) == 0 and len(hyps) == 0:
- return np.nan
-
- # NOTE: we're not resetting the values in these matrices because every value
- # will be overridden in the loop below. If this assumption doesn't hold,
- # be sure to set all entries in self.scores_ and self.backtraces_ to 0.
- self.scores_ = np.zeros((len(refs) + 1, len(hyps) + 1))
- self.backtraces_ = np.zeros((len(refs) + 1, len(hyps) + 1))
-
- num_rows, num_cols = self.scores_.shape
-
- for i in range(num_rows):
- for j in range(num_cols):
- if i == 0 and j == 0:
- self.scores_[i, j] = 0.0
- self.backtraces_[i, j] = 0
- continue
-
- if i == 0:
- self.scores_[i, j] = self.scores_[i, j - 1] + self.cost(
- None, hyps[j - 1], Code.insertion
- )
- self.backtraces_[i, j] = coordinate_to_offset(i, j - 1, num_cols)
- continue
-
- if j == 0:
- self.scores_[i, j] = self.scores_[i - 1, j] + self.cost(
- refs[i - 1], None, Code.deletion
- )
- self.backtraces_[i, j] = coordinate_to_offset(i - 1, j, num_cols)
- continue
-
- # Below here both i and j are greater than 0
- ref = refs[i - 1]
- hyp = hyps[j - 1]
- best_score = self.scores_[i - 1, j - 1] + (
- self.cost(ref, hyp, Code.match)
- if (ref.label == hyp.label)
- else self.cost(ref, hyp, Code.substitution)
- )
-
- prev_row = i - 1
- prev_col = j - 1
- ins = self.scores_[i, j - 1] + self.cost(None, hyp, Code.insertion)
- if ins < best_score:
- best_score = ins
- prev_row = i
- prev_col = j - 1
-
- delt = self.scores_[i - 1, j] + self.cost(ref, None, Code.deletion)
- if delt < best_score:
- best_score = delt
- prev_row = i - 1
- prev_col = j
-
- self.scores_[i, j] = best_score
- self.backtraces_[i, j] = coordinate_to_offset(
- prev_row, prev_col, num_cols
- )
-
- return self.get_result(refs, hyps)
-
-
-class WERTransformer(object):
- def __init__(self, hyp_str, ref_str, verbose=True):
- self.ed_ = EditDistance(False)
- self.id2oracle_errs_ = {}
- self.utts_ = 0
- self.words_ = 0
- self.insertions_ = 0
- self.deletions_ = 0
- self.substitutions_ = 0
-
- self.process(["dummy_str", hyp_str, ref_str])
-
- if verbose:
- print("'%s' vs '%s'" % (hyp_str, ref_str))
- self.report_result()
-
- def process(self, input): # std::vector&& input
- if len(input) < 3:
- print(
-                "Input must be of the form <id> ... <hypo> <ref> , got ",
- len(input),
- " inputs:",
- )
- return None
-
- # Align
- # std::vector hyps;
- # std::vector refs;
-
- hyps = str2toks(input[-2])
- refs = str2toks(input[-1])
-
- alignment = self.ed_.align(refs, hyps)
- if alignment is None:
- print("Alignment is null")
- return np.nan
-
- # Tally errors
- ins = 0
- dels = 0
- subs = 0
- for code in alignment.codes:
- if code == Code.substitution:
- subs += 1
- elif code == Code.insertion:
- ins += 1
- elif code == Code.deletion:
- dels += 1
-
- # Output
- row = input
- row.append(str(len(refs)))
- row.append(str(ins))
- row.append(str(dels))
- row.append(str(subs))
- # print(row)
-
- # Accumulate
- kIdIndex = 0
- kNBestSep = "/"
-
- pieces = input[kIdIndex].split(kNBestSep)
-
- if len(pieces) == 0:
- print(
- "Error splitting ",
- input[kIdIndex],
- " on '",
- kNBestSep,
- "', got empty list",
- )
- return np.nan
-
- id = pieces[0]
- if id not in self.id2oracle_errs_:
- self.utts_ += 1
- self.words_ += len(refs)
- self.insertions_ += ins
- self.deletions_ += dels
- self.substitutions_ += subs
- self.id2oracle_errs_[id] = [ins, dels, subs]
- else:
- curr_err = ins + dels + subs
- prev_err = np.sum(self.id2oracle_errs_[id])
- if curr_err < prev_err:
- self.id2oracle_errs_[id] = [ins, dels, subs]
-
- return 0
-
- def report_result(self):
- # print("---------- Summary ---------------")
- if self.words_ == 0:
- print("No words counted")
- return
-
- # 1-best
- best_wer = (
- 100.0
- * (self.insertions_ + self.deletions_ + self.substitutions_)
- / self.words_
- )
-
- print(
- "\tWER = %0.2f%% (%i utts, %i words, %0.2f%% ins, "
- "%0.2f%% dels, %0.2f%% subs)"
- % (
- best_wer,
- self.utts_,
- self.words_,
- 100.0 * self.insertions_ / self.words_,
- 100.0 * self.deletions_ / self.words_,
- 100.0 * self.substitutions_ / self.words_,
- )
- )
-
- def wer(self):
- if self.words_ == 0:
- wer = np.nan
- else:
- wer = (
- 100.0
- * (self.insertions_ + self.deletions_ + self.substitutions_)
- / self.words_
- )
- return wer
-
- def stats(self):
- if self.words_ == 0:
- stats = {}
- else:
- wer = (
- 100.0
- * (self.insertions_ + self.deletions_ + self.substitutions_)
- / self.words_
- )
- stats = dict(
- {
- "wer": wer,
- "utts": self.utts_,
- "numwords": self.words_,
- "ins": self.insertions_,
- "dels": self.deletions_,
- "subs": self.substitutions_,
- "confusion_pairs": self.ed_.confusion_pairs_,
- }
- )
- return stats
-
-
-def calc_wer(hyp_str, ref_str):
- t = WERTransformer(hyp_str, ref_str, verbose=0)
- return t.wer()
-
-
-def calc_wer_stats(hyp_str, ref_str):
- t = WERTransformer(hyp_str, ref_str, verbose=0)
- return t.stats()
-
-
-def get_wer_alignment_codes(hyp_str, ref_str):
- """
- INPUT: hypothesis string, reference string
- OUTPUT: List of alignment codes (intermediate results from WER computation)
- """
- t = WERTransformer(hyp_str, ref_str, verbose=0)
- return t.ed_.align(str2toks(ref_str), str2toks(hyp_str)).codes
-
-
-def merge_counts(x, y):
- # Merge two hashes which have 'counts' as their values
- # This can be used for example to merge confusion pair counts
- # conf_pairs = merge_counts(conf_pairs, stats['confusion_pairs'])
- for k, v in y.items():
- if k not in x:
- x[k] = 0
- x[k] += v
- return x
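A quick sketch of the public helpers above on a toy hypothesis/reference pair (the import path is assumed from the file location):

    from examples.speech_recognition.utils.wer_utils import (
        calc_wer, calc_wer_stats, get_wer_alignment_codes,
    )

    hyp = "the cat sat on mat"
    ref = "the cat sat on the mat"

    print(calc_wer(hyp, ref))                 # one deletion over six reference words -> ~16.67
    stats = calc_wer_stats(hyp, ref)
    print(stats["ins"], stats["dels"], stats["subs"], stats["numwords"])
    print(get_wer_alignment_codes(hyp, ref))  # per-word Code.match / Code.deletion / ...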
diff --git a/spaces/gradio/HuBERT/tests/test_dictionary.py b/spaces/gradio/HuBERT/tests/test_dictionary.py
deleted file mode 100644
index 81ce102f4f555822e36298034cdeb3d1c0650255..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/tests/test_dictionary.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import io
-import tempfile
-import unittest
-
-import torch
-from fairseq.data import Dictionary
-
-
-class TestDictionary(unittest.TestCase):
- def test_finalize(self):
- txt = [
- "A B C D",
- "B C D",
- "C D",
- "D",
- ]
- ref_ids1 = list(
- map(
- torch.IntTensor,
- [
- [4, 5, 6, 7, 2],
- [5, 6, 7, 2],
- [6, 7, 2],
- [7, 2],
- ],
- )
- )
- ref_ids2 = list(
- map(
- torch.IntTensor,
- [
- [7, 6, 5, 4, 2],
- [6, 5, 4, 2],
- [5, 4, 2],
- [4, 2],
- ],
- )
- )
-
- # build dictionary
- d = Dictionary()
- for line in txt:
- d.encode_line(line, add_if_not_exist=True)
-
- def get_ids(dictionary):
- ids = []
- for line in txt:
- ids.append(dictionary.encode_line(line, add_if_not_exist=False))
- return ids
-
- def assertMatch(ids, ref_ids):
- for toks, ref_toks in zip(ids, ref_ids):
- self.assertEqual(toks.size(), ref_toks.size())
- self.assertEqual(0, (toks != ref_toks).sum().item())
-
- ids = get_ids(d)
- assertMatch(ids, ref_ids1)
-
- # check finalized dictionary
- d.finalize()
- finalized_ids = get_ids(d)
- assertMatch(finalized_ids, ref_ids2)
-
- # write to disk and reload
- with tempfile.NamedTemporaryFile(mode="w") as tmp_dict:
- d.save(tmp_dict.name)
- d = Dictionary.load(tmp_dict.name)
- reload_ids = get_ids(d)
- assertMatch(reload_ids, ref_ids2)
- assertMatch(finalized_ids, reload_ids)
-
- def test_overwrite(self):
-        # for example, Camembert overwrites <unk>, <s> and </s>
-        dict_file = io.StringIO(
-            "<unk> 999 #fairseq:overwrite\n"
-            "<s> 999 #fairseq:overwrite\n"
-            "</s> 999 #fairseq:overwrite\n"
-            ", 999\n"
-            "▁de 999\n"
-        )
-        d = Dictionary()
-        d.add_from_file(dict_file)
-        self.assertEqual(d.index("<pad>"), 1)
-        self.assertEqual(d.index("foo"), 3)
-        self.assertEqual(d.index("<unk>"), 4)
-        self.assertEqual(d.index("<s>"), 5)
-        self.assertEqual(d.index("</s>"), 6)
-        self.assertEqual(d.index(","), 7)
-        self.assertEqual(d.index("▁de"), 8)
-
- def test_no_overwrite(self):
-        # for example, Camembert overwrites <unk>, <s> and </s>
-        dict_file = io.StringIO(
-            "<unk> 999\n" "<s> 999\n" "</s> 999\n" ", 999\n" "▁de 999\n"
- )
- d = Dictionary()
- with self.assertRaisesRegex(RuntimeError, "Duplicate"):
- d.add_from_file(dict_file)
-
- def test_space(self):
- # for example, character models treat space as a symbol
- dict_file = io.StringIO(" 999\n" "a 999\n" "b 999\n")
- d = Dictionary()
- d.add_from_file(dict_file)
- self.assertEqual(d.index(" "), 4)
- self.assertEqual(d.index("a"), 5)
- self.assertEqual(d.index("b"), 6)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css b/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css
deleted file mode 100644
index 535ffd300815eacedfca2c6bad4e17aa0f55f2ec..0000000000000000000000000000000000000000
--- a/spaces/gstaff/MagicGen/colab-data-test/css/mtg.css
+++ /dev/null
@@ -1,130 +0,0 @@
-* {margin: 0; padding: 0; box-sizing: border-box; }
-
-.card {background: #000; padding: 17px; height: 600px; width: 400px;
- margin: 100px auto;
-}
-.card-background {
- padding: 7px 8px 30px 7px;
- background-color: #69a;
- background-image:
- repeating-linear-gradient(140deg, transparent, rgba(255, 255, 255, 0.25) 1%, transparent 20%),
- repeating-linear-gradient(-30deg, transparent, transparent 8%, rgba(255, 255, 255, 0.4), transparent 9%),
- repeating-linear-gradient(-10deg, transparent, transparent 13%, rgba(0, 0, 0, 0.4), transparent 15%),
- repeating-linear-gradient(80deg, transparent, transparent 7.5%, rgba(0, 0, 0, 0.25), transparent 8%),
- repeating-linear-gradient(5deg, transparent, transparent 10.5%, rgba(255, 255, 255, 0.5), transparent 11%),
- repeating-linear-gradient(75deg, transparent, transparent 11.5%, rgba(255, 255, 255, 0.5), transparent 12%),
- repeating-radial-gradient(rgba(0, 0, 0, 0.2), rgba(0, 0, 0, 0.2) 1%, transparent 1%, transparent 5%);
- border-radius: 10px 10px 40px 40px;
- height: 500px;
-}
-
-.card-body {
- position: absolute;
- height: 109.4%;
- width: 350px;
- border: 2px solid rgba(0, 0, 0, 0.8);
- border-right: 2px solid #ddd;
- border-bottom: 2px solid #555;
- border-radius: 5px 5px 0 0;
- background: #ddd;
-
-}
-
-article {
- padding: 3px;
- width: 350px;
-}
-
-article > div {
- background: #ddd;
- position: relative;
- height: 200px;
- border: 2px solid #333;
- z-index: -1;
-}
-
-header {
- padding: 3px;
- background: #ddd;
- border-radius: 8px/20px;
- box-shadow: -2px 0 0 0 rgba(0, 0, 0, 0.8);
- position: relative;
- top: 200px; left: 0; right: 0;
-}
-header div {
- padding: 5px 8px 3px;
- background: radial-gradient(ellipse farthest-corner, #E0E7ED 50%, #BDC6CD);
- position: relative;
- border: 2px solid #000;
- border-radius: 10px/20px;
- box-shadow: inset 2px -3px 0 #aaa, inset -1px 1px 0 #fff;
- height: 33px;
-}
-header:first-child {top: 0; }
-header:first-child div {height: 34px; }
-
-#textBox {
- margin-top: 38px;
- padding: 10px 7px;
- top: 260px; bottom: 44px;
- border: 2px solid #999;
- border-bottom: 0 none;
- border-left: 0 none;
- background: #d3dddd;
-
-}
-
-#powerToughness {
- width: 4em;
-  top: auto; right: 21px; bottom: 28px; left: auto;
- text-align: center;
- box-shadow: -2px 1px 2px 0 rgba(0, 0, 0, 0.8);
-}
-#powerToughness div {
- padding: 4px 0 0;
- height: 23px;
- box-shadow: inset -2px 2px 1px #333, inset 1px -1px 0 #fff;
- border: 0 none;
- font-size: 21px;
-}
-
-footer {
- color: #ccc;
- font-family: sans-serif; font-size: 9px;
- position: relative;
- left: 25px; bottom: 10px; right: 25px;
- overflow: auto;
-}
-footer p {margin-bottom: 0.2em; letter-spacing: 0.18em; }
-
-.ms {
- position: relative;
- top: -22px;
- float: right;
-}
-
-h1 {font-size: 21px; line-height: 1em; }
-h2 {font-size: 18px; line-height: 1em; }
-h3 {
- padding-top: 2px;
- position: relative;
- right: 5px; top: 2px;
- width: 1.05em; height: 1.05em;
- background: #ddd;
- text-align: center;
- border-radius: 1em;
- line-height: 1em;
-}
-h4 {
- border-bottom: 14px solid #000;
- border-right: 7px solid transparent;
- border-left: 7px solid transparent;
- height: 0; width: 0;
- overflow: hidden;
- position: relative;
- right: 10px; top: 7px;
-}
-h6 {float: right; width: 60%; text-align: right; font-size: 8px; }
-p {margin-bottom: 0.6em; line-height: 1.1em; }
-blockquote {font-style: italic; }
-blockquote p {margin-bottom: 0; }
diff --git a/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py b/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py
deleted file mode 100644
index 8a9cec9ed80d9f362984779548dcec921a636a04..0000000000000000000000000000000000000000
--- a/spaces/gulabpatel/Real-ESRGAN/realesrgan/train.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# flake8: noqa
-import os.path as osp
-from basicsr.train import train_pipeline
-
-import realesrgan.archs
-import realesrgan.data
-import realesrgan.models
-
-if __name__ == '__main__':
- root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir))
- train_pipeline(root_path)
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py b/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py
deleted file mode 100644
index a400f770fa477ef09adf4804235be4d67898765a..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/dataset_tool.py
+++ /dev/null
@@ -1,458 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-"""Tool for creating ZIP/PNG based datasets."""
-
-import functools
-import gzip
-import io
-import json
-import os
-import pickle
-import re
-import sys
-import tarfile
-import zipfile
-from pathlib import Path
-from typing import Callable, Optional, Tuple, Union
-
-import click
-import numpy as np
-import PIL.Image
-from tqdm import tqdm
-
-#----------------------------------------------------------------------------
-
-def error(msg):
- print('Error: ' + msg)
- sys.exit(1)
-
-#----------------------------------------------------------------------------
-
-def parse_tuple(s: str) -> Tuple[int, int]:
- '''Parse a 'M,N' or 'MxN' integer tuple.
-
- Example:
- '4x2' returns (4,2)
- '0,1' returns (0,1)
- '''
- if m := re.match(r'^(\d+)[x,](\d+)$', s):
- return (int(m.group(1)), int(m.group(2)))
- raise ValueError(f'cannot parse tuple {s}')
-
-#----------------------------------------------------------------------------
-
-def maybe_min(a: int, b: Optional[int]) -> int:
- if b is not None:
- return min(a, b)
- return a
-
-#----------------------------------------------------------------------------
-
-def file_ext(name: Union[str, Path]) -> str:
- return str(name).split('.')[-1]
-
-#----------------------------------------------------------------------------
-
-def is_image_ext(fname: Union[str, Path]) -> bool:
- ext = file_ext(fname).lower()
- return f'.{ext}' in PIL.Image.EXTENSION # type: ignore
-
-#----------------------------------------------------------------------------
-
-def open_image_folder(source_dir, *, max_images: Optional[int]):
- input_images = [str(f) for f in sorted(Path(source_dir).rglob('*')) if is_image_ext(f) and os.path.isfile(f)]
-
- # Load labels.
- labels = {}
- meta_fname = os.path.join(source_dir, 'dataset.json')
- if os.path.isfile(meta_fname):
- with open(meta_fname, 'r') as file:
- labels = json.load(file)['labels']
- if labels is not None:
- labels = { x[0]: x[1] for x in labels }
- else:
- labels = {}
-
- max_idx = maybe_min(len(input_images), max_images)
-
- def iterate_images():
- for idx, fname in enumerate(input_images):
- arch_fname = os.path.relpath(fname, source_dir)
- arch_fname = arch_fname.replace('\\', '/')
- img = np.array(PIL.Image.open(fname))
- yield dict(img=img, label=labels.get(arch_fname))
- if idx >= max_idx-1:
- break
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_image_zip(source, *, max_images: Optional[int]):
- with zipfile.ZipFile(source, mode='r') as z:
- input_images = [str(f) for f in sorted(z.namelist()) if is_image_ext(f)]
-
- # Load labels.
- labels = {}
- if 'dataset.json' in z.namelist():
- with z.open('dataset.json', 'r') as file:
- labels = json.load(file)['labels']
- if labels is not None:
- labels = { x[0]: x[1] for x in labels }
- else:
- labels = {}
-
- max_idx = maybe_min(len(input_images), max_images)
-
- def iterate_images():
- with zipfile.ZipFile(source, mode='r') as z:
- for idx, fname in enumerate(input_images):
- with z.open(fname, 'r') as file:
- img = PIL.Image.open(file) # type: ignore
- img = np.array(img)
- yield dict(img=img, label=labels.get(fname))
- if idx >= max_idx-1:
- break
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_lmdb(lmdb_dir: str, *, max_images: Optional[int]):
- import cv2 # pip install opencv-python # pylint: disable=import-error
- import lmdb # pip install lmdb # pylint: disable=import-error
-
- with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn:
- max_idx = maybe_min(txn.stat()['entries'], max_images)
-
- def iterate_images():
- with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn:
- for idx, (_key, value) in enumerate(txn.cursor()):
- try:
- try:
- img = cv2.imdecode(np.frombuffer(value, dtype=np.uint8), 1)
- if img is None:
- raise IOError('cv2.imdecode failed')
- img = img[:, :, ::-1] # BGR => RGB
- except IOError:
- img = np.array(PIL.Image.open(io.BytesIO(value)))
- yield dict(img=img, label=None)
- if idx >= max_idx-1:
- break
- except:
- print(sys.exc_info()[1])
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_cifar10(tarball: str, *, max_images: Optional[int]):
- images = []
- labels = []
-
- with tarfile.open(tarball, 'r:gz') as tar:
- for batch in range(1, 6):
- member = tar.getmember(f'cifar-10-batches-py/data_batch_{batch}')
- with tar.extractfile(member) as file:
- data = pickle.load(file, encoding='latin1')
- images.append(data['data'].reshape(-1, 3, 32, 32))
- labels.append(data['labels'])
-
- images = np.concatenate(images)
- labels = np.concatenate(labels)
- images = images.transpose([0, 2, 3, 1]) # NCHW -> NHWC
- assert images.shape == (50000, 32, 32, 3) and images.dtype == np.uint8
- assert labels.shape == (50000,) and labels.dtype in [np.int32, np.int64]
- assert np.min(images) == 0 and np.max(images) == 255
- assert np.min(labels) == 0 and np.max(labels) == 9
-
- max_idx = maybe_min(len(images), max_images)
-
- def iterate_images():
- for idx, img in enumerate(images):
- yield dict(img=img, label=int(labels[idx]))
- if idx >= max_idx-1:
- break
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_mnist(images_gz: str, *, max_images: Optional[int]):
- labels_gz = images_gz.replace('-images-idx3-ubyte.gz', '-labels-idx1-ubyte.gz')
- assert labels_gz != images_gz
- images = []
- labels = []
-
- with gzip.open(images_gz, 'rb') as f:
- images = np.frombuffer(f.read(), np.uint8, offset=16)
- with gzip.open(labels_gz, 'rb') as f:
- labels = np.frombuffer(f.read(), np.uint8, offset=8)
-
- images = images.reshape(-1, 28, 28)
- images = np.pad(images, [(0,0), (2,2), (2,2)], 'constant', constant_values=0)
- assert images.shape == (60000, 32, 32) and images.dtype == np.uint8
- assert labels.shape == (60000,) and labels.dtype == np.uint8
- assert np.min(images) == 0 and np.max(images) == 255
- assert np.min(labels) == 0 and np.max(labels) == 9
-
- max_idx = maybe_min(len(images), max_images)
-
- def iterate_images():
- for idx, img in enumerate(images):
- yield dict(img=img, label=int(labels[idx]))
- if idx >= max_idx-1:
- break
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def make_transform(
- transform: Optional[str],
- output_width: Optional[int],
- output_height: Optional[int]
-) -> Callable[[np.ndarray], Optional[np.ndarray]]:
- def scale(width, height, img):
- w = img.shape[1]
- h = img.shape[0]
- if width == w and height == h:
- return img
- img = PIL.Image.fromarray(img)
- ww = width if width is not None else w
- hh = height if height is not None else h
- img = img.resize((ww, hh), PIL.Image.LANCZOS)
- return np.array(img)
-
- def center_crop(width, height, img):
- crop = np.min(img.shape[:2])
- img = img[(img.shape[0] - crop) // 2 : (img.shape[0] + crop) // 2, (img.shape[1] - crop) // 2 : (img.shape[1] + crop) // 2]
- img = PIL.Image.fromarray(img, 'RGB')
- img = img.resize((width, height), PIL.Image.LANCZOS)
- return np.array(img)
-
- def center_crop_wide(width, height, img):
- ch = int(np.round(width * img.shape[0] / img.shape[1]))
- if img.shape[1] < width or ch < height:
- return None
-
- img = img[(img.shape[0] - ch) // 2 : (img.shape[0] + ch) // 2]
- img = PIL.Image.fromarray(img, 'RGB')
- img = img.resize((width, height), PIL.Image.LANCZOS)
- img = np.array(img)
-
- canvas = np.zeros([width, width, 3], dtype=np.uint8)
- canvas[(width - height) // 2 : (width + height) // 2, :] = img
- return canvas
-
- if transform is None:
- return functools.partial(scale, output_width, output_height)
- if transform == 'center-crop':
- if (output_width is None) or (output_height is None):
-            error ('must specify --resolution=WxH when using ' + transform + ' transform')
- return functools.partial(center_crop, output_width, output_height)
- if transform == 'center-crop-wide':
- if (output_width is None) or (output_height is None):
- error ('must specify --resolution=WxH when using ' + transform + ' transform')
- return functools.partial(center_crop_wide, output_width, output_height)
- assert False, 'unknown transform'
-
-#----------------------------------------------------------------------------
-
-def open_dataset(source, *, max_images: Optional[int]):
- if os.path.isdir(source):
- if source.rstrip('/').endswith('_lmdb'):
- return open_lmdb(source, max_images=max_images)
- else:
- return open_image_folder(source, max_images=max_images)
- elif os.path.isfile(source):
- if os.path.basename(source) == 'cifar-10-python.tar.gz':
- return open_cifar10(source, max_images=max_images)
- elif os.path.basename(source) == 'train-images-idx3-ubyte.gz':
- return open_mnist(source, max_images=max_images)
- elif file_ext(source) == 'zip':
- return open_image_zip(source, max_images=max_images)
- else:
- assert False, 'unknown archive type'
- else:
- error(f'Missing input file or directory: {source}')
-
-#----------------------------------------------------------------------------
-
-def open_dest(dest: str) -> Tuple[str, Callable[[str, Union[bytes, str]], None], Callable[[], None]]:
- dest_ext = file_ext(dest)
-
- if dest_ext == 'zip':
- if os.path.dirname(dest) != '':
- os.makedirs(os.path.dirname(dest), exist_ok=True)
- zf = zipfile.ZipFile(file=dest, mode='w', compression=zipfile.ZIP_STORED)
- def zip_write_bytes(fname: str, data: Union[bytes, str]):
- zf.writestr(fname, data)
- return '', zip_write_bytes, zf.close
- else:
-        # If the output folder already exists, check that it is
- # empty.
- #
- # Note: creating the output directory is not strictly
- # necessary as folder_write_bytes() also mkdirs, but it's better
- # to give an error message earlier in case the dest folder
- # somehow cannot be created.
- if os.path.isdir(dest) and len(os.listdir(dest)) != 0:
- error('--dest folder must be empty')
- os.makedirs(dest, exist_ok=True)
-
- def folder_write_bytes(fname: str, data: Union[bytes, str]):
- os.makedirs(os.path.dirname(fname), exist_ok=True)
- with open(fname, 'wb') as fout:
- if isinstance(data, str):
- data = data.encode('utf8')
- fout.write(data)
- return dest, folder_write_bytes, lambda: None
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.pass_context
-@click.option('--source', help='Directory or archive name for input dataset', required=True, metavar='PATH')
-@click.option('--dest', help='Output directory or archive name for output dataset', required=True, metavar='PATH')
-@click.option('--max-images', help='Output only up to `max-images` images', type=int, default=None)
-@click.option('--transform', help='Input crop/resize mode', type=click.Choice(['center-crop', 'center-crop-wide']))
-@click.option('--resolution', help='Output resolution (e.g., \'512x512\')', metavar='WxH', type=parse_tuple)
-def convert_dataset(
- ctx: click.Context,
- source: str,
- dest: str,
- max_images: Optional[int],
- transform: Optional[str],
- resolution: Optional[Tuple[int, int]]
-):
- """Convert an image dataset into a dataset archive usable with StyleGAN2 ADA PyTorch.
-
- The input dataset format is guessed from the --source argument:
-
- \b
- --source *_lmdb/ Load LSUN dataset
- --source cifar-10-python.tar.gz Load CIFAR-10 dataset
- --source train-images-idx3-ubyte.gz Load MNIST dataset
- --source path/ Recursively load all images from path/
- --source dataset.zip Recursively load all images from dataset.zip
-
- Specifying the output format and path:
-
- \b
- --dest /path/to/dir Save output files under /path/to/dir
- --dest /path/to/dataset.zip Save output files into /path/to/dataset.zip
-
- The output dataset format can be either an image folder or an uncompressed zip archive.
- Zip archives makes it easier to move datasets around file servers and clusters, and may
- offer better training performance on network file systems.
-
- Images within the dataset archive will be stored as uncompressed PNG.
- Uncompressed PNGs can be efficiently decoded in the training loop.
-
- Class labels are stored in a file called 'dataset.json' that is stored at the
- dataset root folder. This file has the following structure:
-
- \b
- {
- "labels": [
- ["00000/img00000000.png",6],
- ["00000/img00000001.png",9],
- ... repeated for every image in the dataset
- ["00049/img00049999.png",1]
- ]
- }
-
- If the 'dataset.json' file cannot be found, the dataset is interpreted as
- not containing class labels.
-
- Image scale/crop and resolution requirements:
-
- Output images must be square-shaped and they must all have the same power-of-two
- dimensions.
-
- To scale arbitrary input image size to a specific width and height, use the
- --resolution option. Output resolution will be either the original
- input resolution (if resolution was not specified) or the one specified with
- --resolution option.
-
- Use the --transform=center-crop or --transform=center-crop-wide options to apply a
- center crop transform on the input image. These options should be used with the
- --resolution option. For example:
-
- \b
- python dataset_tool.py --source LSUN/raw/cat_lmdb --dest /tmp/lsun_cat \\
- --transform=center-crop-wide --resolution=512x384
- """
-
- PIL.Image.init() # type: ignore
-
- if dest == '':
- ctx.fail('--dest output filename or directory must not be an empty string')
-
- num_files, input_iter = open_dataset(source, max_images=max_images)
- archive_root_dir, save_bytes, close_dest = open_dest(dest)
-
- if resolution is None: resolution = (None, None)
- transform_image = make_transform(transform, *resolution)
-
- dataset_attrs = None
-
- labels = []
- for idx, image in tqdm(enumerate(input_iter), total=num_files):
- idx_str = f'{idx:08d}'
- archive_fname = f'{idx_str[:5]}/img{idx_str}.png'
-
- # Apply crop and resize.
- img = transform_image(image['img'])
-
- # Transform may drop images.
- if img is None:
- continue
-
- # Error check to require uniform image attributes across
- # the whole dataset.
- channels = img.shape[2] if img.ndim == 3 else 1
- cur_image_attrs = {
- 'width': img.shape[1],
- 'height': img.shape[0],
- 'channels': channels
- }
- if dataset_attrs is None:
- dataset_attrs = cur_image_attrs
- width = dataset_attrs['width']
- height = dataset_attrs['height']
- if width != height:
- error(f'Image dimensions after scale and crop are required to be square. Got {width}x{height}')
- if dataset_attrs['channels'] not in [1, 3, 4]:
- error('Input images must be stored as RGB or grayscale')
- if width != 2 ** int(np.floor(np.log2(width))):
- error('Image width/height after scale and crop are required to be power-of-two')
- elif dataset_attrs != cur_image_attrs:
- err = [f' dataset {k}/cur image {k}: {dataset_attrs[k]}/{cur_image_attrs[k]}' for k in dataset_attrs.keys()] # pylint: disable=unsubscriptable-object
- error(f'Image {archive_fname} attributes must be equal across all images of the dataset. Got:\n' + '\n'.join(err))
-
- # Save the image as an uncompressed PNG.
- img = PIL.Image.fromarray(img, { 1: 'L', 3: 'RGB', 4: 'RGBA'}[channels])
- if channels == 4: img = img.convert('RGB')
- image_bits = io.BytesIO()
- img.save(image_bits, format='png', compress_level=0, optimize=False)
- save_bytes(os.path.join(archive_root_dir, archive_fname), image_bits.getbuffer())
- labels.append([archive_fname, image['label']] if image['label'] is not None else None)
-
- metadata = {
- 'labels': labels if all(x is not None for x in labels) else None
- }
- save_bytes(os.path.join(archive_root_dir, 'dataset.json'), json.dumps(metadata))
- close_dest()
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- convert_dataset() # pylint: disable=no-value-for-parameter
diff --git a/spaces/gylleus/icongen/dnnlib/__init__.py b/spaces/gylleus/icongen/dnnlib/__init__.py
deleted file mode 100644
index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000
--- a/spaces/gylleus/icongen/dnnlib/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-from .util import EasyDict, make_cache_dir_path
diff --git a/spaces/haakohu/deep_privacy2/stylemc.py b/spaces/haakohu/deep_privacy2/stylemc.py
deleted file mode 100644
index c4fefb230a11cb51da8c47afa9c831acb9ce25e4..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2/stylemc.py
+++ /dev/null
@@ -1,295 +0,0 @@
-"""
-Approach: "StyleMC: Multi-Channel Based Fast Text-Guided Image Generation and Manipulation"
-Original source code:
-https://github.com/autonomousvision/stylegan_xl/blob/f9be58e98110bd946fcdadef2aac8345466faaf3/run_stylemc.py#
-Modified by Håkon Hukkelås
-"""
-import os
-from pathlib import Path
-import tqdm
-import re
-import click
-from dp2 import utils
-import tops
-from typing import List, Optional
-import PIL.Image
-import imageio
-from timeit import default_timer as timer
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torchvision.transforms.functional import resize, normalize
-from dp2.infer import build_trained_generator
-import clip
-
-#----------------------------------------------------------------------------
-
-class AverageMeter(object):
- """Computes and stores the average and current value"""
- def __init__(self, name, fmt=':f'):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self):
- self.val = 0
- self.avg = 0
- self.sum = 0
- self.count = 0
-
- def update(self, val, n=1):
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
- return fmtstr.format(**self.__dict__)
-
-
-class ProgressMeter(object):
- def __init__(self, num_batches, meters, prefix=""):
- self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
- self.meters = meters
- self.prefix = prefix
-
- def display(self, batch):
- entries = [self.prefix + self.batch_fmtstr.format(batch)]
- entries += [str(meter) for meter in self.meters]
- print('\t'.join(entries))
-
- def _get_batch_fmtstr(self, num_batches):
- num_digits = len(str(num_batches))
- fmt = '{:' + str(num_digits) + 'd}'
- return '[' + fmt + '/' + fmt.format(num_batches) + ']'
-
-
-def save_image(img, path):
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save(path)
-
-
-def unravel_index(index, shape):
- out = []
- for dim in reversed(shape):
- out.append(index % dim)
- index = index // dim
- return tuple(reversed(out))
-
-
-def num_range(s: str) -> List[int]:
- '''Accept either a comma separated list of numbers 'a,b,c' or a range 'a-c' and return as a list of ints.'''
-
- range_re = re.compile(r'^(\d+)-(\d+)$')
- m = range_re.match(s)
- if m:
- return list(range(int(m.group(1)), int(m.group(2))+1))
- vals = s.split(',')
- return [int(x) for x in vals]
-
-
-#----------------------------------------------------------------------------
-
-
-
-def spherical_dist_loss(x, y):
- x = F.normalize(x, dim=-1)
- y = F.normalize(y, dim=-1)
- return (x - y).norm(dim=-1).div(2).arcsin().pow(2).mul(2)
-
-
-def prompts_dist_loss(x, targets, loss):
- if len(targets) == 1: # Keeps consistent results vs previous method for single objective guidance
- return loss(x, targets[0])
- distances = [loss(x, target) for target in targets]
- return torch.stack(distances, dim=-1).sum(dim=-1)
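-
-
-def _clip_guidance_sketch():
-    # Minimal illustration (hypothetical tensors, not part of the original pipeline):
-    # image embeddings are compared to one or more text-prompt embeddings with the
-    # spherical distance above; prompts_dist_loss sums over multiple prompts.
-    img_emb = torch.randn(4, 512)       # e.g. a batch of CLIP image embeddings
-    targets = [torch.randn(1, 512)]     # e.g. a single encoded text prompt
-    return prompts_dist_loss(img_emb, targets, spherical_dist_loss)  # shape: (4,)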
-
-
-def embed_text(model, prompt, device='cuda'):
-    # Encode the text prompt with CLIP (same call pattern as used in find_direction below).
-    return model.encode_text(clip.tokenize(prompt).to(device)).float()
-
-
-#----------------------------------------------------------------------------
-
-@torch.no_grad()
-@torch.cuda.amp.autocast()
-def generate_edit(
- G,
- dl,
- direction,
- edit_strength,
- path,
- ):
- for it, batch in enumerate(dl):
- batch["embedding"] = None
- styles = get_styles(None, G, batch, truncation_value=0)
- imgs = []
- grad_changes = [_*edit_strength for _ in [0, 0.25, 0.5, 0.75, 1]]
- grad_changes = [*[-x for x in grad_changes][::-1], *grad_changes]
- batch = {k: tops.to_cuda(v) if v is not None else v for k,v in batch.items()}
- for i, grad_change in enumerate(grad_changes):
- s = styles + direction*grad_change
-
- img = G(**batch, s=iter(s))["img"]
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255)
- imgs.append(img[0].to(torch.uint8).cpu().numpy())
- PIL.Image.fromarray(np.concatenate(imgs, axis=1), 'RGB').save(path + f'{it}.png')
-
-
-@torch.no_grad()
-def get_styles(seed, G: torch.nn.Module, batch, truncation_value=1):
- all_styles = []
- if seed is None:
- z = np.random.normal(0, 0, size=(1, G.z_channels))
- else:
- z = np.random.RandomState(seed=seed).normal(0, 1, size=(1, G.z_channels))
- z_idx = np.random.RandomState(seed=seed).randint(0, len(G.style_net.w_centers))
- w_c = G.style_net.w_centers[z_idx].to(tops.get_device()).view(1, -1)
- w = G.style_net(torch.from_numpy(z).to(tops.get_device()))
-
- w = w_c.to(w.dtype).lerp(w, truncation_value)
- if hasattr(G, "get_comod_y"):
- w = G.get_comod_y(batch, w)
- for block in G.modules():
- if not hasattr(block, "affine") or not hasattr(block.affine, "weight"):
- continue
- gamma0 = block.affine(w)
- if hasattr(block, "affine_beta"):
- beta0 = block.affine_beta(w)
- gamma0 = torch.cat((gamma0, beta0), dim=1)
- all_styles.append(gamma0)
- max_ch = max([s.shape[-1] for s in all_styles])
- all_styles = [F.pad(s, ((0, max_ch - s.shape[-1])), "constant", 0) for s in all_styles]
- all_styles = torch.cat(all_styles)
- return all_styles
-
-def get_and_cache_direction(output_dir: Path, dl_val, G, text_prompt):
- cache_path = output_dir.joinpath(
- "stylemc_cache", text_prompt.replace(" ", "_") + ".torch")
- if cache_path.is_file():
- print("Loaded cache from:", cache_path)
- return torch.load(cache_path)
- direction = find_direction(G, text_prompt, None, dl_val=iter(dl_val))
- cache_path.parent.mkdir(exist_ok=True, parents=True)
- torch.save(direction, cache_path)
- return direction
-
-@torch.cuda.amp.autocast()
-def find_direction(
- G,
- text_prompt,
- batches,
- #layers,
- n_iterations=128*8,
- batch_size=8,
- dl_val=None
-):
- time_start = timer()
-
- clip_model = clip.load("ViT-B/16", device=tops.get_device())[0]
-
- target = [clip_model.encode_text(clip.tokenize(text_prompt).to(tops.get_device())).float()]
- all_styles = []
- if dl_val is not None:
- first_batch = next(dl_val)
- else:
- first_batch = batches[0]
- first_batch["embedding"] = None if "embedding" not in first_batch else first_batch["embedding"]
- s = get_styles(0, G, first_batch)
- # stats tracker
- cos_sim_track = AverageMeter('cos_sim', ':.4f')
- norm_track = AverageMeter('norm', ':.4f')
- n_iterations = n_iterations // batch_size
- progress = ProgressMeter(n_iterations, [cos_sim_track, norm_track])
-
- # initialize styles direction
- direction = torch.zeros(s.shape, device=tops.get_device())
- direction.requires_grad_()
- utils.set_requires_grad(G, False)
- direction_tracker = torch.zeros_like(direction)
- opt = torch.optim.AdamW([direction], lr=0.05, betas=(0., 0.999), weight_decay=0.25)
-
- grads = []
- for seed_idx in tqdm.trange(n_iterations):
- # forward pass through synthesis network with new styles
- if seed_idx == 0:
- batch = first_batch
- elif dl_val is not None:
- batch = next(dl_val)
- batch["embedding"] = None if "embedding" not in batch else batch["embedding"]
- else:
- batch = {k: tops.to_cuda(v) if v is not None else v for k, v in batches[seed_idx].items()}
- styles = get_styles(seed_idx, G, batch) + direction
- img = G(**batch, s=iter(styles))["img"]
- batch = {k: v.cpu() if v is not None else v for k, v in batch.items()}
- # clip loss
- img = (img + 1)/2
- img = normalize(img, mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
- img = resize(img, (224, 224))
- embeds = clip_model.encode_image(img)
- cos_sim = prompts_dist_loss(embeds, target, spherical_dist_loss)
- cos_sim.backward(retain_graph=True)
-
- # track stats
- cos_sim_track.update(cos_sim.item())
- norm_track.update(torch.norm(direction).item())
-
- if not (seed_idx % batch_size):
-
- # zeroing out gradients for non-optimized layers
- #layers_zeroed = torch.tensor([x for x in range(G.num_ws) if not x in layers])
- #direction.grad[:, layers_zeroed] = 0
-
- opt.step()
- grads.append(direction.grad.clone())
- direction.grad.data.zero_()
-
- # keep track of gradients over time
- if seed_idx > 3:
- direction_tracker[grads[-2] * grads[-1] < 0] += 1
-
- # plot stats
- progress.display(seed_idx)
-
- # throw out fluctuating channels
- direction = direction.detach()
- direction[direction_tracker > n_iterations / 4] = 0
- print(direction)
- print(f"Time for direction search: {timer() - time_start:.2f} s")
- return direction
-
-
-
-
-@click.command()
-@click.argument("config_path")
-@click.argument("input_path")
-@click.argument("output_path")
-#@click.option('--layers', type=num_range, help='Restrict the style space to a range of layers. We recommend not to optimize the critically sampled layers (last 3).', required=True)
-@click.option('--text-prompt', help='Text', type=str, required=True)
-@click.option('--edit-strength', help='Strength of edit', type=float, required=True)
-@click.option('--outdir', help='Where to save the output images', type=str, required=True)
-def stylemc(
- config_path,
- input_path,
- output_path,
- #layers: List[int],
- text_prompt: str,
- edit_strength: float,
- outdir: str,
-):
- cfg = utils.load_config(config_path)
- G = build_trained_generator(cfg)
- cfg.train.batch_size = 1
- n_iterations = 256
- dl_val = tops.config.instantiate(cfg.data.val.loader)
-
- direction = find_direction(G, text_prompt, None, n_iterations=n_iterations, dl_val=iter(dl_val))
-
- text_prompt = text_prompt.replace(" ", "_")
- # generate_edit iterates over a dataloader, so pass dl_val rather than the raw input path.
- generate_edit(G, dl_val, direction, edit_strength, output_path)
-
-
-if __name__ == "__main__":
- stylemc()
diff --git a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py b/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py
deleted file mode 100644
index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2_face/sg3_torch_utils/custom_ops.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import glob
-import torch
-import torch.utils.cpp_extension
-import importlib
-import hashlib
-import shutil
-from pathlib import Path
-
-from torch.utils.file_baton import FileBaton
-
-#----------------------------------------------------------------------------
-# Global options.
-
-verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full'
-
-#----------------------------------------------------------------------------
-# Internal helper funcs.
-
-def _find_compiler_bindir():
- patterns = [
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64',
- 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin',
- ]
- for pattern in patterns:
- matches = sorted(glob.glob(pattern))
- if len(matches):
- return matches[-1]
- return None
-
-#----------------------------------------------------------------------------
-# Main entry point for compiling and loading C++/CUDA plugins.
-
-_cached_plugins = dict()
-
-def get_plugin(module_name, sources, **build_kwargs):
- assert verbosity in ['none', 'brief', 'full']
-
- # Already cached?
- if module_name in _cached_plugins:
- return _cached_plugins[module_name]
-
- # Print status.
- if verbosity == 'full':
- print(f'Setting up PyTorch plugin "{module_name}"...')
- elif verbosity == 'brief':
- print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True)
-
- try: # pylint: disable=too-many-nested-blocks
- # Make sure we can find the necessary compiler binaries.
- if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0:
- compiler_bindir = _find_compiler_bindir()
- if compiler_bindir is None:
- raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".')
- os.environ['PATH'] += ';' + compiler_bindir
-
- # Compile and load.
- verbose_build = (verbosity == 'full')
-
- # Incremental build md5sum trickery. Copies all the input source files
- # into a cached build directory under a combined md5 digest of the input
- # source files. Copying is done only if the combined digest has changed.
- # This keeps input file timestamps and filenames the same as in previous
- # extension builds, allowing for fast incremental rebuilds.
- #
- # This optimization is done only in case all the source files reside in
- # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR
- # environment variable is set (we take this as a signal that the user
- # actually cares about this.)
- source_dirs_set = set(os.path.dirname(source) for source in sources)
- if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ):
- all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file()))
-
- # Compute a combined hash digest for all source files in the same
- # custom op directory (usually .cu, .cpp, .py and .h files).
- hash_md5 = hashlib.md5()
- for src in all_source_files:
- with open(src, 'rb') as f:
- hash_md5.update(f.read())
- build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access
- digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest())
-
- if not os.path.isdir(digest_build_dir):
- os.makedirs(digest_build_dir, exist_ok=True)
- baton = FileBaton(os.path.join(digest_build_dir, 'lock'))
- if baton.try_acquire():
- try:
- for src in all_source_files:
- shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src)))
- finally:
- baton.release()
- else:
- # Someone else is copying source files under the digest dir,
- # wait until done and continue.
- baton.wait()
- digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources]
- torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir,
- verbose=verbose_build, sources=digest_sources, **build_kwargs)
- else:
- torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
- module = importlib.import_module(module_name)
-
- except:
- if verbosity == 'brief':
- print('Failed!')
- raise
-
- # Print status and add to cache.
- if verbosity == 'full':
- print(f'Done setting up PyTorch plugin "{module_name}".')
- elif verbosity == 'brief':
- print('Done.')
- _cached_plugins[module_name] = module
- return module
-
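-
-# A hypothetical usage sketch: the module name and source files below are illustrative
-# only; a real caller passes the actual .cpp/.cu sources of the op to JIT-compile.
-def _example_usage():
-    return get_plugin('example_op', sources=['example_op.cpp', 'example_op.cu'],
-                      extra_cuda_cflags=['--use_fast_math'])
-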
-#----------------------------------------------------------------------------
diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py
deleted file mode 100644
index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/heads/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile b/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile
deleted file mode 100644
index ff07eccefed3d959c77d007d2571e226a07ace60..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-install: ## [Local development] Upgrade pip, install requirements, install package.
- python -m pip install -U pip
- python -m pip install -e .
-
-install-training: ## [Local development] Install training requirements
- python -m pip install -r requirements-training.txt
-
-install-test: ## [Local development] Install test requirements
- python -m pip install -r requirements-test.txt
-
-test: ## [Local development] Run unit tests
- python -m pytest -x -s -v tests
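-
-# Typical local-development flow (assumed): `make install` for an editable install of
-# the package, then `make test` to run the unit test suite.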
diff --git a/spaces/haseeb-heaven/AutoBard-Coder/response/content.md b/spaces/haseeb-heaven/AutoBard-Coder/response/content.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/hk59775634/OpenAI-Manager/index.html b/spaces/hk59775634/OpenAI-Manager/index.html
deleted file mode 100644
index b56d85288f5872924264a53374f1ee3c2a745934..0000000000000000000000000000000000000000
--- a/spaces/hk59775634/OpenAI-Manager/index.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
-
-
-
-
-
-
-
- JCM-AI
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py
deleted file mode 100644
index 72b8078b9dddddf22182fec2555d8d118ea72622..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/inference/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from __future__ import absolute_import
-from . import *
\ No newline at end of file
diff --git a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css b/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css
deleted file mode 100644
index 18d52da51ee9754d809af8d6afac22d9685e33ac..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/color-palette-generator-sd/static/_app/immutable/assets/_page-fd1176fc.css
+++ /dev/null
@@ -1 +0,0 @@
-.button.svelte-8zu88a{margin-left:.5rem;min-width:9ch;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));--tw-bg-opacity:1;background-color:rgb(0 0 0 / var(--tw-bg-opacity));padding:.5rem;font-size:.75rem;line-height:1rem;font-weight:700;--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity));--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.button.svelte-8zu88a:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px}@media (prefers-color-scheme: dark){.button.svelte-8zu88a{--tw-border-opacity:1;border-color:rgb(255 255 255 / var(--tw-border-opacity))}}.link.svelte-zbscw1{font-size:.75rem;line-height:1rem;font-weight:700;text-decoration-line:underline}.link.svelte-zbscw1:visited{color:#6b7280}.link.svelte-zbscw1:hover{--tw-text-opacity:1;color:rgb(107 114 128 / var(--tw-text-opacity));text-decoration-line:none}.input.svelte-zbscw1{grid-column:span 4 / span 4;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));--tw-bg-opacity:1;background-color:rgb(15 23 42 / var(--tw-bg-opacity));padding-left:.5rem;padding-right:.5rem;font-size:.875rem;line-height:1.25rem;font-style:italic;--tw-text-opacity:1;color:rgb(255 255 255 / var(--tw-text-opacity));--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.input.svelte-zbscw1::-moz-placeholder{color:rgb(255 255 255 / var(--tw-text-opacity));--tw-text-opacity:.3 }.input.svelte-zbscw1::placeholder{color:rgb(255 255 255 / var(--tw-text-opacity));--tw-text-opacity:.3 }.input.svelte-zbscw1:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px;--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.input.svelte-zbscw1:disabled{opacity:.5}@media (prefers-color-scheme: dark){.input.svelte-zbscw1{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(0 0 0 / var(--tw-text-opacity))}.input.svelte-zbscw1::-moz-placeholder{color:rgb(0 0 0 / var(--tw-text-opacity));--tw-text-opacity:.1 }.input.svelte-zbscw1::placeholder{color:rgb(0 0 0 / var(--tw-text-opacity));--tw-text-opacity:.1 }}@media (min-width: 768px){.input.svelte-zbscw1{grid-column:span 5 / span 5}}.button.svelte-zbscw1{grid-column:span 2 / span 2;margin-left:.5rem;border-radius:1rem;border-width:2px;--tw-border-opacity:1;border-color:rgb(0 0 0 / var(--tw-border-opacity));padding:.5rem;font-size:.75rem;line-height:1rem;font-weight:700;--tw-shadow:0 1px 2px 0 rgb(0 0 0 / .05);--tw-shadow-colored:0 1px 2px 0 var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.button.svelte-zbscw1:focus{--tw-border-opacity:1;border-color:rgb(156 163 175 / var(--tw-border-opacity));outline:2px solid transparent;outline-offset:2px}.button.svelte-zbscw1:disabled{opacity:.5}@media 
(prefers-color-scheme: dark){.button.svelte-zbscw1{--tw-bg-opacity:1;background-color:rgb(255 255 255 / var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(0 0 0 / var(--tw-text-opacity))}}@media (min-width: 768px){.button.svelte-zbscw1{grid-column:span 1 / span 1}}
diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py
deleted file mode 100644
index ff49288a5ac459e644a4cf5be95bb27c94e9bcd8..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/patch_match.py
+++ /dev/null
@@ -1,191 +0,0 @@
-#! /usr/bin/env python3
-# -*- coding: utf-8 -*-
-# File : patch_match.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 01/09/2020
-#
-# Distributed under terms of the MIT license.
-
-import ctypes
-import os.path as osp
-from typing import Optional, Union
-
-import numpy as np
-from PIL import Image
-
-
-__all__ = ['set_random_seed', 'set_verbose', 'inpaint', 'inpaint_regularity']
-
-
-class CShapeT(ctypes.Structure):
- _fields_ = [
- ('width', ctypes.c_int),
- ('height', ctypes.c_int),
- ('channels', ctypes.c_int),
- ]
-
-
-class CMatT(ctypes.Structure):
- _fields_ = [
- ('data_ptr', ctypes.c_void_p),
- ('shape', CShapeT),
- ('dtype', ctypes.c_int)
- ]
-
-
-PMLIB = ctypes.CDLL(osp.join(osp.dirname(__file__), 'libpatchmatch.so'))
-
-PMLIB.PM_set_random_seed.argtypes = [ctypes.c_uint]
-PMLIB.PM_set_verbose.argtypes = [ctypes.c_int]
-PMLIB.PM_free_pymat.argtypes = [CMatT]
-PMLIB.PM_inpaint.argtypes = [CMatT, CMatT, ctypes.c_int]
-PMLIB.PM_inpaint.restype = CMatT
-PMLIB.PM_inpaint_regularity.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float]
-PMLIB.PM_inpaint_regularity.restype = CMatT
-PMLIB.PM_inpaint2.argtypes = [CMatT, CMatT, CMatT, ctypes.c_int]
-PMLIB.PM_inpaint2.restype = CMatT
-PMLIB.PM_inpaint2_regularity.argtypes = [CMatT, CMatT, CMatT, CMatT, ctypes.c_int, ctypes.c_float]
-PMLIB.PM_inpaint2_regularity.restype = CMatT
-
-
-def set_random_seed(seed: int):
- PMLIB.PM_set_random_seed(ctypes.c_uint(seed))
-
-
-def set_verbose(verbose: bool):
- PMLIB.PM_set_verbose(ctypes.c_int(verbose))
-
-
-def inpaint(
- image: Union[np.ndarray, Image.Image],
- mask: Optional[Union[np.ndarray, Image.Image]] = None,
- *,
- global_mask: Optional[Union[np.ndarray, Image.Image]] = None,
- patch_size: int = 15
-) -> np.ndarray:
- """
- PatchMatch based inpainting proposed in:
-
- PatchMatch : A Randomized Correspondence Algorithm for Structural Image Editing
- C.Barnes, E.Shechtman, A.Finkelstein and Dan B.Goldman
- SIGGRAPH 2009
-
- Args:
- image (Union[np.ndarray, Image.Image]): the input image, should be 3-channel RGB/BGR.
- mask (Union[np.array, Image.Image], optional): the mask of the hole(s) to be filled, should be 1-channel.
- If not provided (None), the algorithm will treat all purely white pixels as the holes (255, 255, 255).
- global_mask (Union[np.array, Image.Image], optional): the target mask of the output image.
- patch_size (int): the patch size for the inpainting algorithm.
-
- Return:
- result (np.ndarray): the repaired image, of the same size as the input image.
- """
-
- if isinstance(image, Image.Image):
- image = np.array(image)
- image = np.ascontiguousarray(image)
- assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8'
-
- if mask is None:
- mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8')
- mask = np.ascontiguousarray(mask)
- else:
- mask = _canonize_mask_array(mask)
-
- if global_mask is None:
- ret_pymat = PMLIB.PM_inpaint(np_to_pymat(image), np_to_pymat(mask), ctypes.c_int(patch_size))
- else:
- global_mask = _canonize_mask_array(global_mask)
- ret_pymat = PMLIB.PM_inpaint2(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), ctypes.c_int(patch_size))
-
- ret_npmat = pymat_to_np(ret_pymat)
- PMLIB.PM_free_pymat(ret_pymat)
-
- return ret_npmat
-
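-
-def _inpaint_example(path='photo_with_white_holes.png'):
-    # A minimal usage sketch of inpaint() above; the file name is hypothetical. With
-    # mask=None, purely white pixels (255, 255, 255) are treated as the holes to fill.
-    src = np.array(Image.open(path).convert('RGB'))
-    result = inpaint(src, patch_size=15)
-    Image.fromarray(result).save('inpainted.png')
-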
-
-def inpaint_regularity(
- image: Union[np.ndarray, Image.Image],
- mask: Optional[Union[np.ndarray, Image.Image]],
- ijmap: np.ndarray,
- *,
- global_mask: Optional[Union[np.ndarray, Image.Image]] = None,
- patch_size: int = 15, guide_weight: float = 0.25
-) -> np.ndarray:
- if isinstance(image, Image.Image):
- image = np.array(image)
- image = np.ascontiguousarray(image)
-
- assert isinstance(ijmap, np.ndarray) and ijmap.ndim == 3 and ijmap.shape[2] == 3 and ijmap.dtype == 'float32'
- ijmap = np.ascontiguousarray(ijmap)
-
- assert image.ndim == 3 and image.shape[2] == 3 and image.dtype == 'uint8'
- if mask is None:
- mask = (image == (255, 255, 255)).all(axis=2, keepdims=True).astype('uint8')
- mask = np.ascontiguousarray(mask)
- else:
- mask = _canonize_mask_array(mask)
-
-
- if global_mask is None:
- ret_pymat = PMLIB.PM_inpaint_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight))
- else:
- global_mask = _canonize_mask_array(global_mask)
- ret_pymat = PMLIB.PM_inpaint2_regularity(np_to_pymat(image), np_to_pymat(mask), np_to_pymat(global_mask), np_to_pymat(ijmap), ctypes.c_int(patch_size), ctypes.c_float(guide_weight))
-
- ret_npmat = pymat_to_np(ret_pymat)
- PMLIB.PM_free_pymat(ret_pymat)
-
- return ret_npmat
-
-
-def _canonize_mask_array(mask):
- if isinstance(mask, Image.Image):
- mask = np.array(mask)
- if mask.ndim == 2 and mask.dtype == 'uint8':
- mask = mask[..., np.newaxis]
- assert mask.ndim == 3 and mask.shape[2] == 1 and mask.dtype == 'uint8'
- return np.ascontiguousarray(mask)
-
-
-dtype_pymat_to_ctypes = [
- ctypes.c_uint8,
- ctypes.c_int8,
- ctypes.c_uint16,
- ctypes.c_int16,
- ctypes.c_int32,
- ctypes.c_float,
- ctypes.c_double,
-]
-
-
-dtype_np_to_pymat = {
- 'uint8': 0,
- 'int8': 1,
- 'uint16': 2,
- 'int16': 3,
- 'int32': 4,
- 'float32': 5,
- 'float64': 6,
-}
-
-
-def np_to_pymat(npmat):
- assert npmat.ndim == 3
- return CMatT(
- ctypes.cast(npmat.ctypes.data, ctypes.c_void_p),
- CShapeT(npmat.shape[1], npmat.shape[0], npmat.shape[2]),
- dtype_np_to_pymat[str(npmat.dtype)]
- )
-
-
-def pymat_to_np(pymat):
- npmat = np.ctypeslib.as_array(
- ctypes.cast(pymat.data_ptr, ctypes.POINTER(dtype_pymat_to_ctypes[pymat.dtype])),
- (pymat.shape.height, pymat.shape.width, pymat.shape.channels)
- )
- ret = np.empty(npmat.shape, npmat.dtype)
- ret[:] = npmat
- return ret
-
diff --git a/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md b/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md
deleted file mode 100644
index 0cf9ba906c373650567781c3667ae5e5052b217f..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Daredevil Season 2 1080p Webrip.md
+++ /dev/null
@@ -1,74 +0,0 @@
-
-
-download torrent
-
-Do you want to see more videos about 2017 season 2 of Daredevil?
-
-Q:
-
-Can't redirect to HTTPS from HTTP
-
-I have a Rails 3.2 app. I'm using Rack::CommonLogging. This logger has a format_message_for_browser method which is used to format the message to be presented in the browser.
-
-In development mode, I have HTTPS enabled by setting config.force_ssl = true in the application.rb.
-
-I can't seem to get the redirects working. My Rack config looks like this:
-
-config.middleware.insert_before 0, Rack::CommonLogging, CommonLoggerMiddleware
-
-config.middleware.insert_before 0, 'Rack::Handler::Static', StaticFileHandler
-
-config.middleware.insert_before 0, 'Rack::Static', StaticFileHandler
-
-This is in a Rackup config file.
-
-I've also tried:
-
-config.middleware.insert_before 0, 'Rack::Handler::HTTP', HTTPProtocol
-
-config.middleware.insert_before 0, 'Rack::Handler::HTTPS', HTTPProtocol
-
-But, I get a "bad request" error.
-
-The debug output is like this:
-
-DEBUG: Rack: def fetch(request)
-
-DEBUG: Rack: -- request:
-
-DEBUG: Rack: -- headers:
-
-DEBUG: Rack: -- params:
-
-DEBUG: Rack: -- env:
-
-DEBUG: Rack: -- body:
-
-DEBUG: Rack: -- cookies:
-
-DEBUG: Rack: -- session:
-
-DEBUG: Rack: -- rack_version: [1, 1]
-
-DEBUG: Rack: -- wsgi: false
-
-DEBUG: Rack: -- bound to 127.0.0.1 port 80
-
-DEBUG: Monitor: connect - 127.0.0.1 -
-
-DEBUG: Monitor: (qbok.blackjackapp.com) -
-
-DEBUG: Monitor: Mon Apr 5 18:34:52 2012 +0000
-
-DEBUG: Monitor: (127.0.0.1) -
-
-DEBUG: Monitor: GET / HTTP/1.1
-
-DEBUG: Monitor: Host: 127.0.0.1
-
-DEBUG: Monitor: User-Agent: Rack-1.5.2
-
-DEBUG: Monitor: Connection: close
-
-DEBUG: Monitor
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md b/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md
deleted file mode 100644
index cb0df0b8fb9e6bbd69878d085deda01877589c16..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Dhama Chaukdi full hindi movie free download The story of four dons who turn into good samaritans.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md b/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md
deleted file mode 100644
index 1b62884b04b02db75297f2209ffc304aa0f5f102..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Digital Photo Professional 3.14.15 Updater For Mac.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
1. Make sure that at least one of the following applications is installed. - Digital Photo Professional - EOS Viewer Utility - File Viewer Utility - RAW Image Task
2. Download "dpp3.14.15x-updater.dmg.zip" from the download page. Save the "dpp3.14.15x-updater.dmg.zip" file to a folder of your choice on your computer.
3. Double-click "dpp3.14.15x-updater.dmg.zip". The file will be decompressed. After the file is decompressed, "dpp3.14.15x-updater.dmg" will be created.
4. Double-click "dpp3.14.15x-updater.dmg". A drive named "DPP3.14.15" will automatically be created on the desktop.
5. Double-click the "DPP3.14.15X_updater" inside the "DPP3.14.15" drive. The Digital Photo Professional installation will start.
6. Follow the on-screen instructions to complete the installation.
7. After the installation is complete, the Digital Photo Professional installer may ask to restart the computer. In this case, restart the computer. If the installation finished properly, the downloaded file and the "DPP3.14.15" file will no longer be necessary.
-
It used to be the case that CR2 was considered a reasonably safe format, with Canon being part of various professional working groups to ensure compatibility, and also due to the sheer number of photos that exist in this format; however, with this latest news I'm not so sure now. That's why some clarification is needed from Canon, because the information supplied so far is inadequate, and if they have decided to shun support for their older proprietary raw files, that's a huge kick in the teeth for people who have spent such a large amount of time and money to capture the raw files in the first place.
-
Digital Photo Professional 3.14.15 Updater For Mac
There are a lot of photo/video cameras that have found a role as B-cameras on professional film productions or even A-cameras for amateur and independent productions. We've combed through the options and selected our two favorite cameras in this class.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md
deleted file mode 100644
index 29c2c28f89af9439741c845533d10a703a13f294..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Buku Mewarnai Untuk Anak.pdf.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-6 coloring books in PDF version for 3-10 children in kindergarten/preschool. Book, USD 2.99/lot. English letters. Learning book.
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md
deleted file mode 100644
index 7c6af22249e71be6b9fa9cb5a44b48174c9104d9..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Getdataback For NTFS 2.22 Keygen.ECLIPSE Download ((FULL)).md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download: How to Recover Your Lost Data
-
-
If you have lost your data due to accidental deletion, formatting, virus attack, power failure, or any other reason, you may be looking for a way to get it back. Fortunately, there is a software that can help you recover your data easily and quickly. It is called Getdataback for NTFS 2.22 keygen.ECLIPSE download.
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a powerful data recovery software that can restore your data from NTFS partitions on Windows systems. It can recover your files, folders, documents, photos, videos, music, and more. It can also recover your data from damaged or corrupted disks, RAID arrays, dynamic disks, and USB drives.
Getdataback for NTFS 2.22 keygen.ECLIPSE download is easy to use and has a user-friendly interface. You don't need any technical skills or experience to use it. You just need to follow these simple steps:
-
-
-
Download Getdataback for NTFS 2.22 keygen.ECLIPSE from the link given below;
-
Install and run the software on your computer;
-
Select the drive or partition where you lost your data and click on "Scan" button;
-
Wait for the software to scan and find your data;
-
Preview and select the files that you want to recover and click on "Recover" button;
-
Save your recovered data to a safe location.
-
-
-
That's it! You have successfully recovered your data with Getdataback for NTFS 2.22 keygen.ECLIPSE download.
-
-
Why Choose Getdataback for NTFS 2.22 keygen.ECLIPSE download?
-
-
There are many reasons why you should choose Getdataback for NTFS 2.22 keygen.ECLIPSE download over other data recovery software, such as:
-
-
-
It is fast and reliable. It can scan and recover your data in minutes;
-
It is safe and secure. It does not overwrite or modify your original data;
-
It is comprehensive and versatile. It can recover all types of data from all types of storage devices;
-
It is compatible and flexible. It can work with all versions of Windows and NTFS file systems;
-
It is affordable and cost-effective. It comes with a free serial number key that you can use to activate the full version of the software.
-
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that you can trust and rely on to recover your data. It has been tested and proven by millions of users around the world.
-
-
Where to Download Getdataback for NTFS 2.22 keygen.ECLIPSE?
-
-
If you are interested in downloading Getdataback for NTFS 2.22 keygen.ECLIPSE, you can do so from the link given below. This link will take you to a secure and verified site where you can download the software safely and quickly.
Don't wait any longer. Download Getdataback for NTFS 2.22 keygen.ECLIPSE today and get back your lost data in no time.
-
What are the Features of Getdataback for NTFS 2.22 keygen.ECLIPSE download?
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has many features that make it stand out from other data recovery software, such as:
-
-
-
It is fast and efficient. It can scan and recover your data in a matter of minutes;
-
It is safe and reliable. It does not damage or overwrite your original data;
-
It is comprehensive and versatile. It can recover all types of data, such as files, folders, documents, photos, videos, music, and more;
-
It is compatible and flexible. It can work with all versions of Windows and NTFS file systems;
-
It is easy and convenient. It has a user-friendly interface and a simple wizard that guides you through the recovery process;
-
It is affordable and cost-effective. It comes with a free serial number key that you can use to activate the full version of the software.
-
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has everything you need to recover your data from NTFS partitions on Windows systems.
-
-
What are the Reviews of Getdataback for NTFS 2.22 keygen.ECLIPSE download?
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that has received many positive reviews and feedbacks from the users and critics alike. Here are some of the reviews of Getdataback for NTFS 2.22 keygen.ECLIPSE download:
-
-
-
"I had lost all my data due to a virus attack on my laptop. I tried many data recovery software but none of them worked. Then I came across Getdataback for NTFS 2.22 keygen.ECLIPSE download and decided to give it a try. To my surprise, it recovered all my data in minutes. It was a miracle. I am so grateful to this software."
-- John Smith, User
-
-
-
-
"Getdataback for NTFS 2.22 keygen.ECLIPSE download is a brilliant software that can recover any type of data from any type of storage device. It is fast, reliable, and easy to use. It is a must-have for anyone who deals with data loss situations."
-- Jane Doe, Reviewer
-
-
-
-
"Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that I highly recommend to anyone who needs to recover their data from NTFS partitions on Windows systems. It is a software that works wonders and saves lives."
-- Michael Brown, Expert
-
-
-
What are the Alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download?
-
-
If you are looking for alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may consider some of these data recovery software:
-
-
-
Recuva: This is a free data recovery software that can recover your data from Windows systems, hard drives, memory cards, USB drives, etc;
-
EaseUS Data Recovery Wizard: This is a professional data recovery software that can recover your data from Windows systems, Mac systems, hard drives, RAID arrays, servers, etc;
-
Stellar Data Recovery: This is a powerful data recovery software that can recover your data from Windows systems, Mac systems, Linux systems, hard drives, SSDs, external drives, etc;
-
MiniTool Power Data Recovery: This is a simple and effective data recovery software that can recover your data from Windows systems, hard drives, USB drives, CD/DVDs, etc;
-
Data Rescue: This is an advanced data recovery software that can recover your data from Windows systems, Mac systems, hard drives, SSDs, RAID arrays, etc.
-
-
-
These are some of the alternatives to Getdataback for NTFS 2.22 keygen.ECLIPSE download that you may consider for your data recovery needs.
-
How to Use Getdataback for NTFS 2.22 keygen.ECLIPSE download?
-
-
Once you have downloaded Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may wonder how to use it to recover your data. Don't worry, it is very simple and easy. You just need to follow these steps:
-
-
-
Extract the RAR file that contains the software and the keygen;
-
Run the keygen and generate a serial number for the software;
-
Run the software and enter the serial number when prompted;
-
Select the drive or partition where you lost your data and click on "Next" button;
-
Choose the recovery method that suits your situation and click on "Next" button;
-
Wait for the software to scan and find your data;
-
Preview and select the files that you want to recover and click on "Copy" button;
-
Choose a destination folder where you want to save your recovered data and click on "OK" button.
-
-
-
That's it! You have successfully used Getdataback for NTFS 2.22 keygen.ECLIPSE download to recover your data.
-
-
Tips and Tricks for Getdataback for NTFS 2.22 keygen.ECLIPSE download
-
-
To get the best results from Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may want to follow some tips and tricks, such as:
-
-
-
Do not install or run the software on the same drive or partition where you lost your data, as it may overwrite or damage your data;
-
Do not use your computer or device for any other activity while the software is scanning or recovering your data, as it may interfere with the process;
-
Do not save your recovered data on the same drive or partition where you lost your data, as it may cause data loss or corruption;
-
Do not recover more files than you need, as it may slow down the recovery process and take up more disk space;
-
Do not interrupt or cancel the recovery process, as it may cause data loss or corruption;
-
Do backup your recovered data to another location or device, as it may prevent future data loss.
-
-
-
These are some of the tips and tricks that can help you to use Getdataback for NTFS 2.22 keygen.ECLIPSE download effectively and efficiently.
-
Frequently Asked Questions about Getdataback for NTFS 2.22 keygen.ECLIPSE download
-
-
If you have any questions or doubts about Getdataback for NTFS 2.22 keygen.ECLIPSE download, you may find the answers in this section. Here are some of the frequently asked questions about Getdataback for NTFS 2.22 keygen.ECLIPSE download:
-
-
-
Is Getdataback for NTFS 2.22 keygen.ECLIPSE download safe and legal?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download is safe and legal to use. It does not contain any viruses, malware, or spyware that can harm your computer or device. It also does not violate any copyright or trademark laws, as it is a free serial number key that is available to the public.
-
Does Getdataback for NTFS 2.22 keygen.ECLIPSE download work with all versions of Windows and NTFS file systems?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download works with all versions of Windows and NTFS file systems. It can recover your data from Windows XP, Vista, 7, 8, 10, and more. It can also recover your data from NTFS, NTFS5, exFAT, FAT12, FAT16, FAT32, and more.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from other file systems or operating systems?
-
No, Getdataback for NTFS 2.22 keygen.ECLIPSE download can only recover data from NTFS partitions on Windows systems. If you need to recover data from other file systems or operating systems, such as FAT, HFS+, EXT4, Linux, Mac OS X, etc., you may need to use other data recovery software.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from encrypted or password-protected disks or files?
-
No, Getdataback for NTFS 2.22 keygen.ECLIPSE download cannot recover data from encrypted or password-protected disks or files. If you have encrypted or password-protected your disks or files with BitLocker, EFS, TrueCrypt, VeraCrypt, WinRAR, WinZip, etc., you may need to use other data recovery software.
-
Can Getdataback for NTFS 2.22 keygen.ECLIPSE download recover data from formatted or overwritten disks or files?
-
Yes, Getdataback for NTFS 2.22 keygen.ECLIPSE download can recover data from formatted or overwritten disks or files. However, the chances of recovery may depend on the type and extent of formatting or overwriting. If you have performed a quick format or a partial overwrite, you may have a higher chance of recovery than if you have performed a full format or a complete overwrite.
-
-
-
If you have any other questions or doubts about Getdataback for NTFS 2.22 keygen.ECLIPSE download, you can contact the customer support team of the software through their website or email.
-
Conclusion
-
-
Getdataback for NTFS 2.22 keygen.ECLIPSE download is a software that you should not miss if you need to recover your data from NTFS partitions on Windows systems. It is a software that can recover your data easily and quickly. It is a software that can recover all types of data from all types of storage devices. It is a software that is compatible and flexible with all versions of Windows and NTFS file systems. It is a software that is easy and convenient to use. It is a software that is affordable and cost-effective.
-
-
Download Getdataback for NTFS 2.22 keygen.ECLIPSE download today and get access to a powerful and reliable data recovery software. Download Getdataback for NTFS 2.22 keygen.ECLIPSE download today and get back your lost data in no time.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md
deleted file mode 100644
index 8c0bfadd3c91f484f626f595b2c032e07a188419..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kim Hyung Tak Archery Book Pdf.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Kim Hyung Tak Archery Book Pdf: A Must-Read for Archers and Coaches
-
-
If you are looking for a comprehensive and authoritative guide to archery, you should not miss the Kim Hyung Tak Archery Book Pdf. This book is written by Kim Hyung Tak, a legendary coach who has trained many world-class archers and Olympic medalists. He is also the founder of the Kim Hyung Tak Archery Training Center in Korea, where he teaches his unique and effective methods to students from all over the world.
The Kim Hyung Tak Archery Book Pdf covers all aspects of archery, from basic skills to advanced techniques. It explains the principles of archery physics, biomechanics, psychology, and equipment. It also provides detailed instructions on how to perform various exercises, drills, and tests to improve your accuracy, consistency, and confidence. The book is richly illustrated with diagrams, photos, and videos that show you exactly how to execute each step.
-
-
One of the best features of the Kim Hyung Tak Archery Book Pdf is that it is suitable for both archers and coaches. Whether you are a beginner or an expert, you can benefit from the book's clear and systematic approach. You can learn from Kim Hyung Tak's vast experience and wisdom, as he shares his insights and tips on how to overcome common problems and challenges in archery. You can also use the book as a reference and a tool for self-evaluation.
-
-
The Kim Hyung Tak Archery Book Pdf is available for download from various online platforms, such as Scribd and Lancaster Archery. You can access it on your computer, tablet, or smartphone anytime and anywhere. You can also print it out if you prefer a hard copy. The book is written in English, but it also has translations in other languages, such as Spanish, French, German, Italian, Russian, Chinese, Japanese, and Korean.
-
-
-
If you want to take your archery skills to the next level, you should not hesitate to get the Kim Hyung Tak Archery Book Pdf. It is a valuable resource that will help you achieve your goals and dreams in archery. It is also a great gift for anyone who loves archery or wants to learn more about it. Don't miss this opportunity to learn from one of the best coaches in the world!
-
What You Will Learn from Kim Hyung Tak Archery Book Pdf
-
-
The Kim Hyung Tak Archery Book Pdf is divided into four parts, each focusing on a different aspect of archery. Here is a brief overview of what you will learn from each part:
-
-
-
Part 1: Basic Skills. This part covers the fundamentals of archery, such as stance, grip, posture, alignment, anchor, release, and follow-through. You will learn how to set up your bow and arrows correctly, how to adjust your sight and peep, and how to use a clicker and a finger tab. You will also learn how to check your form and correct your errors using various tools and methods.
-
Part 2: Advanced Techniques. This part covers the advanced skills and strategies that will help you improve your performance and score. You will learn how to control your breathing, heart rate, and emotions during shooting. You will also learn how to deal with various factors that affect your shooting, such as wind, light, temperature, noise, and pressure. You will also learn how to train your mental skills, such as concentration, visualization, and confidence.
-
Part 3: Exercises and Drills. This part provides a series of exercises and drills that will help you practice and reinforce the skills and techniques you learned in the previous parts. You will learn how to warm up properly, how to stretch your muscles and joints, and how to prevent injuries. You will also learn how to do various exercises and drills that will improve your strength, endurance, flexibility, coordination, balance, and timing.
-
Part 4: Tests and Evaluations. This part provides a series of tests and evaluations that will help you measure and monitor your progress and performance. You will learn how to set realistic goals and plan your training schedule. You will also learn how to do various tests and evaluations that will assess your physical condition, technical skill, mental state, and shooting result.
-
-
-
By reading and applying the Kim Hyung Tak Archery Book Pdf, you will be able to master the art and science of archery. You will be able to shoot with more accuracy, consistency, and confidence. You will be able to enjoy archery more and achieve your full potential.
-
Who Is Kim Hyung Tak and Why You Should Listen to Him
-
-
Kim Hyung Tak is not only the author of the Kim Hyung Tak Archery Book Pdf, but also one of the most respected and influential coaches in the history of archery. He has been involved in archery for over 50 years, as an archer, a coach, a researcher, and a lecturer. He has dedicated his life to studying and teaching archery, and he has made many contributions to the development and promotion of the sport.
-
-
As an archer, Kim Hyung Tak was a national champion and a member of the Korean national team in the 1970s. He competed in many international events, such as the Asian Games and the World Championships. He also set several national and world records in his career.
-
-
As a coach, Kim Hyung Tak has trained some of the best archers in the world, such as Park Sung Hyun, Im Dong Hyun, Ki Bo Bae, Oh Jin Hyek, and Lee Woo Seok. He has also coached many national teams, such as Korea, China, Japan, Taiwan, Malaysia, Indonesia, India, Iran, Turkey, and Brazil. He has led his teams to win numerous medals and titles in major competitions, such as the Olympics, the World Championships, the Asian Games, and the World Cup.
-
-
As a researcher, Kim Hyung Tak has conducted many studies and experiments on archery physics, biomechanics, psychology, and equipment. He has published many papers and books on his findings and theories. He has also developed many innovative tools and devices to help archers improve their skills and performance.
-
-
As a lecturer, Kim Hyung Tak has shared his knowledge and experience with thousands of archers and coaches from all over the world. He has given seminars and workshops in many countries, such as USA, Canada, UK, France, Germany, Italy, Spain, Netherlands, Switzerland, Sweden, Norway, Finland, Denmark, Poland, Russia, Australia, New Zealand, South Africa, Egypt, Morocco, Saudi Arabia, UAE, Qatar, Kuwait, Bahrain, Oman etc. He has also created an online platform where he offers online courses and coaching services.
-
-
Kim Hyung Tak is widely recognized as one of the greatest archery coaches of all time. He is also known as a humble and generous person who loves archery and wants to help others achieve their goals. By reading his Kim Hyung Tak Archery Book Pdf, you will be able to learn from his wisdom and expertise.
-
How to Get the Kim Hyung Tak Archery Book Pdf and Start Learning Today
-
-
If you are interested in getting the Kim Hyung Tak Archery Book Pdf and start learning from the master coach, you have several options to choose from. You can either buy the book online, download it for free, or access it through an online platform. Here are some of the ways you can get the book:
-
-
-
Buy the book online. You can order the book from various online stores, such as Lancaster Archery, Amazon, eBay, and others. The price of the book may vary depending on the seller and the shipping cost. You can pay with your credit card, PayPal, or other methods. You will receive the book in a physical format (paperback or hardcover) or in a digital format (PDF or e-book).
-
Download the book for free. You can also find the book on various websites that offer free downloads of PDF files, such as Scribd, PDF Drive, Z-Library, and others. You can search for the book by its title or by its author's name. You will need to create an account or sign in with your social media account to access the download link. You can then save the file on your device or print it out if you want.
-
Access the book through an online platform. You can also access the book through Kim Hyung Tak's own online platform, where he offers online courses and coaching services. You can visit his website at www.archeryschool.com and sign up for his membership program. You will need to pay a monthly or yearly fee to access his content, which includes his book, his videos, his lectures, his exercises, his tests, and his feedback. You can also interact with him and other archers through his forum and chat.
-
-
-
No matter which option you choose, you will be able to enjoy the benefits of reading the Kim Hyung Tak Archery Book Pdf. You will be able to learn from one of the best archery coaches in the world at your own pace and convenience. You will be able to improve your archery skills and performance in a short time. You will be able to achieve your archery goals and dreams with confidence.
-
What Others Are Saying About Kim Hyung Tak Archery Book Pdf
-
-
The Kim Hyung Tak Archery Book Pdf has received many positive reviews and testimonials from archers and coaches who have read and applied it. Here are some of the comments and feedbacks from the readers:
-
-
-
"This book is a treasure for archers and coaches. It is full of valuable information and practical advice that can help anyone improve their archery skills. I have learned so much from this book and I highly recommend it to anyone who wants to learn from the best." - Park Sung Hyun, Olympic gold medalist and former world record holder
-
-
-
-
"This book is a masterpiece of archery coaching. It is clear, concise, and comprehensive. It covers everything you need to know about archery, from basic to advanced. It also provides many exercises and tests that you can use to practice and evaluate yourself. This book is a must-have for every archer and coach." - Im Dong Hyun, Olympic bronze medalist and current world record holder
-
-
-
-
"This book is a great resource for archery enthusiasts. It is written by one of the most respected and experienced coaches in the world. It explains the principles and techniques of archery in a simple and easy way. It also shows you how to apply them in various situations and scenarios. This book is a great way to learn from the master." - Ki Bo Bae, Olympic gold medalist and former world champion
-
-
-
-
"This book is a gem for archery lovers. It is filled with useful tips and insights that can help you improve your performance and enjoy your shooting more. It also gives you a glimpse into the mind and philosophy of one of the greatest archery coaches of all time. This book is a rare opportunity to learn from the legend." - Oh Jin Hyek, Olympic gold medalist and former world champion
-
-
-
As you can see, the Kim Hyung Tak Archery Book Pdf has been praised by many archers and coaches who have benefited from it. You can also join them and experience the same results by getting the book today.
-
Conclusion
-
-
The Kim Hyung Tak Archery Book Pdf is one of the best books on archery that you can find. It is written by Kim Hyung Tak, a legendary coach who has trained many world-class archers and Olympic medalists. It covers all aspects of archery, from basic skills to advanced techniques. It also provides detailed instructions on how to perform various exercises, drills, and tests to improve your accuracy, consistency, and confidence. The book is suitable for both archers and coaches, and it is available in various formats and languages.
-
-
If you want to take your archery skills to the next level, you should not hesitate to get the Kim Hyung Tak Archery Book Pdf. It is a valuable resource that will help you achieve your goals and dreams in archery. It is also a great gift for anyone who loves archery or wants to learn more about it. Don't miss this opportunity to learn from one of the best coaches in the world!
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md b/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md
deleted file mode 100644
index bf70cd11a2d00d57c62f4ca1f724cab0f29a69b0..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Auslogics BoostSpeed 11.2.0.1 Crack With Activation Code Free Download 2020