diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8bf Download BEST Full.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8bf Download BEST Full.md
deleted file mode 100644
index 02e58d194e6bd96f6054e347ec35c1d1b63d33df..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/8bf Download BEST Full.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
8bf Download Full: How to Enhance Your Image Editing with Free Plugins
-
If you are an avid user of Photoshop or other image editing software, you may have heard of 8bf files. These are files that contain Photoshop filter plug-ins, which are extensions that add extra functionality, such as new image filters, to Photoshop and compatible programs. These plug-ins can help you customize your Photoshop experience and create stunning images with ease.
-
In this article, we will show you how to download and install 8bf plugins from reliable sources, and how to use them in your image editing projects. We will also introduce you to some of the best 8bf plugins that you can get for free, and how they can enhance your creative and professional image editing. Whether you are a beginner or an expert, you will find something useful and interesting in this article.
The first step to using 8bf plugins is to download them from the internet. There are many websites that offer free or paid plugins for Photoshop and other image editing software, but not all of them are trustworthy or compatible. You need to be careful when choosing where to download your plugins from, and make sure they are safe and suitable for your software version.
-
One of the best places to get free Photoshop plug-ins is Adobe's own website. You can sort the hundreds of free resources by rating, popularity, or date added, and find what you need easily. These plug-ins are installed differently than the others on this list. You must have a free Adobe account and the Creative Cloud program installed to use them.
-
Another good source of free Photoshop filters and plug-ins is Lifewire, which has compiled a list of the five best sites for free Photoshop filters and plug-ins. You can find links to these sites on their page, along with directions on how to install them.
-
Once you have downloaded your desired plugin, you need to install it on your computer. The installation process may vary depending on the file format and the software you are using, but here are some general steps that you can follow:
-
-
If the plugin is downloaded in .8bf format, you need to copy and paste the file into Photoshop's filter folder (a scripted version of this step is sketched after this list). On Windows, it's usually here: C:\Program Files\Adobe\Adobe Photoshop (version)\Plug-ins\Filters\. However, if putting the filter in that folder doesn't work, try this one: C:\Program Files\Common Files\Adobe\Plug-Ins\CC.
-
If the plugin is downloaded in .zxp format, you need to use a program called Adobe Extension Manager or ZXPInstaller to install it. You can download these programs for free from their official websites. Once you have them, you can drag and drop the .zxp file into the program and follow the instructions.
-
If the plugin is downloaded in .exe format, you need to run the file and follow the installation wizard. Make sure you choose the correct destination folder for your Photoshop version.
-
-
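If you would rather script the .8bf copy step from the first item above than do it by hand, here is a minimal Python sketch. The filter filename and the two Photoshop folder paths are illustrative assumptions only; substitute whatever matches your own download folder and Photoshop version.

```python
import shutil
from pathlib import Path

# Hypothetical filter name and install locations -- adjust all three paths to
# match your own download folder and Photoshop version.
plugin_file = Path.home() / "Downloads" / "example_filter.8bf"
filters_dir = Path(r"C:\Program Files\Adobe\Adobe Photoshop 2021\Plug-ins\Filters")
fallback_dir = Path(r"C:\Program Files\Common Files\Adobe\Plug-Ins\CC")

# Copy into the main filter folder if it exists, otherwise fall back to the
# shared Adobe plug-in folder. Writing to Program Files normally requires
# running the script as administrator.
target = filters_dir if filters_dir.exists() else fallback_dir
shutil.copy2(plugin_file, target / plugin_file.name)
print(f"Copied {plugin_file.name} to {target}")
```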
After you have installed your plugin, you need to access it from your image editing software. In Photoshop, you can usually find your plugins under the window menu, under extensions or filters. You can also use the search bar at the top of Photoshop to find your plugin by name. Once you have opened your plugin, you can use it as instructed by the developer.
-
Best 8bf Plugins for Creative and Professional Image Editing
-
Now that you know how to download and install 8bf plugins, you may be wondering which ones are worth trying. There are thousands of plugins available online, but not all of them are equally useful or high-quality. To help you narrow down your choices, we have selected some of the best 8bf plugins that you can get for free, and how they can enhance your creative and professional image editing.
-
-
Adobe's Free Photoshop Plug-ins
-
If you want to get the most out of your Photoshop experience, you should definitely check out Adobe's own collection of free plug-ins. These plug-ins are designed by Adobe experts and offer a huge variety of features and effects that can improve your workflow and creativity. Some of the most popular and useful plug-ins are:
-
-
Kuler Panel: This plug-in allows you to create, explore, and share color themes that you can use in your projects. You can browse thousands of color combinations created by other users, or create your own using various color rules and modes. You can also sync your themes with other Adobe products, such as Illustrator or InDesign.
-
Lens Profile Creator: This plug-in helps you correct lens distortions, such as barrel or pincushion distortion, vignetting, or chromatic aberration. You can create custom profiles for your lenses based on calibration images, or use predefined profiles for common lenses. You can also share your profiles with other users or download profiles created by others.
-
Perspective Warp: This plug-in allows you to adjust the perspective of your images without distorting them. You can create multiple planes in your image and manipulate them independently, or merge them into a single plane. You can also change the viewpoint of your image, such as changing from a bird's eye view to a worm's eye view.
-
Pixlr-o-matic: This plug-in lets you add retro effects to your images with just a few clicks. You can choose from hundreds of filters, overlays, and borders to create vintage-looking photos. You can also mix and match different effects to create your own unique style.
-
Social Kit Pro: This plug-in helps you create professional-looking social media graphics, such as cover photos, profile pictures, or ads. You can choose from various templates that match the dimensions and guidelines of different social platforms, such as Facebook, Twitter, or YouTube. You can also customize your graphics with text, shapes, images, or logos.
-
-
Mehdi's Free Photoshop Filters
-
If you are looking for some simple but powerful filters that can transform your images in amazing ways, you should try Mehdi's free Photoshop filters. These filters are created by Mehdi Rabah, a French developer who has been making Photoshop plugins since 2002. His website offers dozens of filters with detailed explanations and examples of what they do. Some of his most popular and useful filters are:
-
-
Kaleidoscope 2.1: This filter allows you to create kaleidoscopic patterns from any image. You can adjust the number of segments, the angle, the zoom, and the offset of the pattern. You can also apply different blending modes and colors to create stunning effects.
-
Weaver 2.0: This filter allows you to create realistic woven textures from any image. You can adjust the size, the shape, the color, and the opacity of the threads. You can also apply different effects, such as embossing or shadowing, to make the texture more 3D.
-
Seamless Border 2.0: This filter allows you to create seamless borders from any image. You can adjust the width, the height, the angle, and the offset of the border. You can also apply different effects, such as mirroring, flipping, or rotating, to make the border more interesting.
-
Sorting Tiles 1.1: This filter allows you to create mosaic-like effects from any image. You can adjust the size, the shape, the color, and the order of the tiles. You can also apply different effects, such as blurring, sharpening, or inverting, to make the tiles more varied.
-
Wavy Lab 1.1: This filter allows you to create wavy patterns from any image. You can adjust the frequency, the amplitude, the phase, and the direction of the waves. You can also apply different effects, such as colorization, gradient, or transparency, to make the waves more colorful.
-
-
The Plugin Site's Free Photoshop Filters
-
If you want to get a lot of filters for a single download, you should check out The Plugin Site's free Photoshop filters. These filters are created by Harald Heim, a German developer who has been making Photoshop plugins since 1997. His website offers a single download that contains 70 image effects that can be applied to any image. Some of his most popular and useful filters are:
-
-
Color MegaMix 1.1: This filter allows you to modify the colors of your image in various ways. You can choose from 20 color modes and adjust the intensity and contrast of each mode. You can also mix different modes together to create new color effects.
-
Contrast Mask 1.0: This filter allows you to enhance the contrast and details of your image without losing quality. You can adjust the strength and radius of the contrast mask and apply it to different tonal ranges of your image.
-
Edge Detector 1.0: This filter allows you to detect and highlight the edges of your image in various ways. You can choose from 10 edge modes and adjust the threshold and smoothness of each mode. You can also invert or colorize the edges to create different effects.
-
Old Movie 1.0: This filter allows you to simulate the look of old movies on your image. You can adjust the amount and size of scratches, dust, hair, jitter, flicker, and noise on your image. You can also change the color and brightness of your image to make it look more aged.
-
Posterizer 1.0: This filter allows you to reduce the number of colors in your image and create poster-like effects. You can adjust the number of colors and levels for each color channel and apply dithering or smoothing to your image.
-
-
Lokas Software's Free 3D Shadow Filter
-
If you want to add realistic shadows to your images, you should try Lokas Software's free 3D Shadow filter. This filter was created by Lokas Software, a Russian company that has specialized in graphics software development since 1997. Their website offers a free filter that can create various types of shadows from any image or text layer. Some of the features of this filter are:
-
-
Shadow Type: You can choose from four types of shadows: drop shadow, perspective shadow, inner shadow, or reflection shadow.
-
Shadow Position: You can adjust the angle, distance, scale, and perspective of your shadow.
-
Shadow Color: You can choose any color for your shadow or use a gradient or a texture.
-
Shadow Quality: You can adjust the opacity, blur, noise, and softness of your shadow.
-
Shadow Effects: You can apply various effects to your shadow, such as glow, bevel, emboss, or contour.
-
-
Flaticon
-
If you need icons for your projects, you should check out Flaticon. Flaticon is a website that offers a large collection of free icons in various formats: PNG, SVG, EPS, PSD, or Base64. You can browse thousands of icons by category or keyword, or use their online editor to customize them. Some of the benefits of using Flaticon are:
-
-
Variety: You can find icons for any topic or theme you need: business, education, health, technology, etc. You can also find icons in different styles: flat, outline, 3D, hand-drawn, etc.
-
Quality: You can download icons in high resolution and vector format, which means they can be scaled and edited without losing quality. You can also use their online editor to change the color, size, orientation, or shape of the icons.
-
Compatibility: You can use Flaticon's icons in any software or platform that supports images. You can also use their Photoshop plugin to access and insert icons directly from Photoshop's window menu.
-
License: You can use Flaticon's icons for free for personal and commercial projects, as long as you credit the author and Flaticon. You can also get a premium subscription to access more icons and features without attribution.
-
-
Ink
-
If you are a designer who works with developers, you should try Ink. Ink is a Photoshop plugin that helps you create comprehensive design specifications for your projects. You can use Ink to generate useful information about your layers, such as dimensions, typography, colors, effects, etc. You can also export your design specifications as a PNG file or an HTML document. Some of the advantages of using Ink are:
-
-
Accuracy: You can ensure that your design is implemented exactly as you intended by providing precise and detailed information about your layers. You can also avoid misunderstandings and errors by communicating clearly with your developers.
-
Efficiency: You can save time and effort by generating design specifications automatically with Ink. You don't have to manually measure, label, or document your layers. You can also update your specifications easily if you make any changes to your design.
-
Convenience: You can access Ink from Photoshop's window menu or by using a keyboard shortcut. You can also customize Ink's settings to suit your preferences and needs. You can choose which information to include or exclude, how to format it, and how to export it.
-
Compatibility: You can use Ink with any version of Photoshop from CS6 to CC 2021. You can also use Ink with any language or operating system that supports Photoshop.
-
-
Conclusion
-
In conclusion, 8bf plugins are files that contain Photoshop filter plug-ins, which are extensions that add extra functionality to Photoshop and compatible programs. These plug-ins can help you customize your Photoshop experience and create stunning images with ease.
-
To use 8bf plugins, you need to download them from reliable sources, install them on your computer, and access them from your image editing software. In this article, we have shown you how to do that, and introduced you to some of the best 8bf plugins that you can get for free.
-
We hope you have found this article useful and informative. If you want to learn more about 8bf plugins and how to use them in your projects, you can check out the following resources:
Here are some frequently asked questions about 8bf plugins and their answers:
-
What is the difference between filters and plugins?
-
Filters are a type of plugin that apply specific effects or transformations to an image or a layer. Plugins are a broader term that includes filters as well as other extensions that add extra functionality to Photoshop or compatible programs.
-
How can I uninstall or disable a plugin that I don't need?
-
To uninstall a plugin, you need to delete the file from the folder where you installed it. To disable a plugin temporarily, you can rename the file extension from .8bf or .zxp to something else, such as .bak. To enable it again, you need to rename it back to its original extension.
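As a rough sketch of the rename trick described above, with a hypothetical filter name and folder path, disabling and re-enabling a filter can be scripted like this:

```python
from pathlib import Path

# Placeholder folder and filter name, for illustration only.
filters_dir = Path(r"C:\Program Files\Adobe\Adobe Photoshop 2021\Plug-ins\Filters")
plugin = filters_dir / "example_filter.8bf"

# Renaming the extension to .bak hides the filter from Photoshop without deleting it.
plugin.rename(plugin.with_suffix(".bak"))

# To re-enable it later, rename it back:
# (filters_dir / "example_filter.bak").rename(filters_dir / "example_filter.8bf")
```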
-
Are there any risks or drawbacks of using 8bf plugins?
-
Using 8bf plugins is generally safe and beneficial, as long as you download them from reputable sources and install them correctly. However, there are some potential risks or drawbacks that you should be aware of, such as:
-
-
Compatibility issues: Some plugins may not work well with your software version or operating system. They may cause errors, crashes, or performance issues. To avoid this, you should always check the compatibility and requirements of the plugins before installing them.
-
Security issues: Some plugins may contain malware or viruses that can harm your computer or compromise your data. To avoid this, you should always scan the files with an antivirus program before installing them. You should also only download plugins from trusted sources and avoid clicking on suspicious links or pop-ups.
-
Quality issues: Some plugins may not be well-designed or well-maintained. They may have bugs, glitches, or limitations that can affect your image quality or user experience. To avoid this, you should always read the reviews and ratings of the plugins before installing them. You should also update your plugins regularly and report any problems to the developers.
-
-
How can I update or troubleshoot my plugins?
-
To update your plugins, you need to check the websites of the developers for any new versions or updates. You can also use programs like Adobe Extension Manager or ZXPInstaller to manage your plugins and check for updates. To troubleshoot your plugins, you need to identify the source of the problem and try some common solutions, such as:
-
-
Restarting your software or computer: This can help resolve any temporary issues or conflicts that may cause your plugins to malfunction.
-
Reinstalling your plugins: This can help fix any corrupted or missing files that may prevent your plugins from working properly.
-
Disabling other plugins: This can help determine if there is any incompatibility or interference between your plugins that may cause errors or crashes.
-
Contacting the developers: This can help get support and guidance from the creators of the plugins. You can find their contact information on their websites or in the plugin documentation.
-
-
Where can I find more resources and tutorials on using 8bf plugins?
-
If you want to learn more about using 8bf plugins in your projects, you can find many resources and tutorials online. Some of the best ones are:
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Can 39t Install 32 Bit Windows 10.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Can 39t Install 32 Bit Windows 10.md
deleted file mode 100644
index b507b71796cbaed7a791b26a3bd3dbb1a3b5e603..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Can 39t Install 32 Bit Windows 10.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-
What to Do If You Can't Install 32 Bit Windows 10
-
Windows 10 is the latest and most advanced operating system from Microsoft. It comes in two versions: 32 bit and 64 bit. The 32 bit version is designed for older computers with less than 4 GB of RAM (roughly the most a 32 bit system can address), while the 64 bit version is designed for newer computers with 4 GB of RAM or more. The 64 bit version also has some advantages over the 32 bit version, such as better security, performance, and compatibility.
However, some users may prefer to install the 32 bit version of Windows 10 on their computers for various reasons. For example, they may have some legacy software or hardware that only works with the 32 bit version, or they may want to save some disk space or memory. In some cases, users may also encounter problems when trying to install the 64 bit version of Windows 10, such as compatibility issues, error messages, or slow installation.
-
If you are one of those users who want to install the 32 bit version of Windows 10 on your computer, but you can't do it for some reason, don't worry. There are some possible solutions that can help you fix this problem and enjoy the benefits of Windows 10. Here are some of them:
-
-
Check your system requirements. Before you try to install the 32 bit version of Windows 10, make sure that your computer meets the minimum system requirements for it. According to Microsoft, you need at least a 1 GHz processor, 1 GB of RAM, 16 GB of free disk space, a DirectX 9 compatible graphics card, and a DVD drive or a USB port. If your computer does not meet these requirements, you may not be able to install the 32 bit version of Windows 10. A quick way to check the RAM and disk space figures is sketched after this list.
-
Check your BIOS settings. Another possible reason why you can't install the 32 bit version of Windows 10 is that your BIOS settings are preventing it. BIOS stands for Basic Input/Output System, and it is the firmware that controls the basic functions of your computer, such as booting up, detecting hardware, and handing control to the operating system. Sometimes, the BIOS settings may be configured to only allow the installation of the 64 bit version of Windows 10. To fix this, you need to access your BIOS settings and change them accordingly. The exact steps may vary depending on your computer model and manufacturer, but generally, you need to restart your computer and press a certain key (such as F2, F10, or Del) to enter the BIOS setup menu. Then, look for an option that says something like "OS Type", "Boot Mode", or "UEFI/Legacy". Change this option to "Legacy" or "Other OS" if it is set to "UEFI" or "Windows". Save your changes and exit the BIOS setup menu.
-
Use a bootable USB drive or DVD. Another possible solution is to use a bootable USB drive or DVD that contains the installation files for the 32 bit version of Windows 10. You can create one using another computer that has Windows 10 installed on it. To do this, you need a USB drive or a DVD with at least 8 GB of storage space, and a tool called Media Creation Tool from Microsoft. You can download this tool from here. Once you have downloaded and run the tool, follow the instructions on the screen to create a bootable USB drive or DVD with the 32 bit version of Windows 10. Then, insert the USB drive or DVD into your computer and restart it. You should see a message that prompts you to press any key to boot from the USB drive or DVD. Press any key and follow the instructions on the screen to install the 32 bit version of Windows 10.
-
Contact Microsoft support. If none of the above solutions work for you, you may need to contact Microsoft support for further assistance. You can do this by visiting this page and choosing the option that best suits your problem. You can also call them at +1-800-642-7676 (US) or +44-800-026-03-30 (UK). They will guide you through the steps to troubleshoot and resolve your issue.
-
-
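For the first point in the list above, here is a minimal Python sketch that reads the RAM and free disk space figures mentioned in the requirements. It assumes the third-party psutil package is installed and that the system drive is C:, and it does not check the processor speed or the graphics card.

```python
import platform
import shutil

import psutil  # third-party package, assumed installed with: pip install psutil

ram_gb = psutil.virtual_memory().total / 1024**3
free_gb = shutil.disk_usage("C:\\").free / 1024**3  # assumes C: is the system drive

print(f"Architecture : {platform.machine()}")
print(f"RAM          : {ram_gb:.1f} GB (Windows 10 32 bit needs at least 1 GB)")
print(f"Free space   : {free_gb:.1f} GB (at least 16 GB required)")
```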
Installing the 32 bit version of Windows 10 on your computer can be tricky sometimes
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 5 Free Download Full Version with Key 64 Bit What You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 5 Free Download Full Version with Key 64 Bit What You Need to Know.md
deleted file mode 100644
index c778d0fed6f4074b771f6cb855ccc98a85bdc992..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Edius 5 Free Download Full Version with Key 64 Bit What You Need to Know.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-
Edius 5 Free Download Full Version with Key 64 Bit: How to Edit Videos Like a Pro
-
Edius 5 is a professional video editing software that can handle various formats and resolutions. It is widely used by broadcasters, filmmakers, and enthusiasts who want to create high-quality videos with ease and speed. However, Edius 5 is not a free software, and it requires a valid license key to activate and use it. If you are looking for a way to get Edius 5 for free, you may have come across some websites that offer Edius 5 free download full version with key 64 bit. A key is a tool that can generate and inject product keys into your software to bypass the activation process. In this article, we will explain what Edius 5 free download full version with key 64 bit is, how it works, and how to download and use it safely.
-
What is Edius 5 Free Download Full Version with Key 64 Bit?
-
Edius 5 free download full version with key 64 bit is a package that contains the installation files of Edius 5 and a key tool that can create and apply product keys for Edius 5. The product key is a code that identifies your software license and allows you to activate and use it. Normally, you need to purchase a product key from Grass Valley or an authorized reseller, but with a key, you can generate your own product key for free.
-
The key tool that comes with Edius 5 free download full version with key 64 bit is called X-Force 2016. It is a popular and reliable tool that can activate various Grass Valley products, such as Edius, ProCoder, Storm, etc. X-Force 2016 works by contacting a custom KMS server instead of the official Grass Valley Activation Server. KMS stands for Key Management Service, which is a feature that allows large organizations to activate multiple devices with a single product key. X-Force 2016 mimics this feature and creates new product keys that are verified by the custom KMS server. This way, your Edius 5 will think it is activated by a legitimate source.
-
How to Download and Use Edius 5 Free Download Full Version with Key 64 Bit?
-
Before you download and use Edius 5 free download full version with key 64 bit, you should know that it is not an official or legal product. It may violate Grass Valley's terms of service and cause some security risks. Therefore, you should use it at your own discretion and responsibility.
-
-
That being said, here are the steps to download and use Edius 5 free download full version with key 64 bit:
-
-
Download Edius 5 free download full version with key 64 bit from a reliable source. You can find many websites that offer this package, but some of them may contain malware or viruses. We recommend you to download it from this website, which is a free download manager that can help you find and download various software. You will get a ZIP file with an executable file named EDIUS_5.exe.
-
Extract the ZIP file using a password provided. The password is "www.downloadly.ir" (without quotes).
-
Run EDIUS_5.exe as administrator. You may see a Windows Protected Your PC message, but you can ignore it and choose Run Anyway.
-
Follow the on-screen instructions to complete the installation. You will need to enter a serial number and a product key during the installation. You can use any of these serial numbers:
666-69696969
667-98989898
400-45454545
And this product key:
001H1
-
After the installation is finished, do not run Edius 5 yet. You need to apply the keygen first.
-
Go to the folder where you extracted the ZIP file and find the folder named "xf-adsk2016_x64". Inside this folder, you will see another executable file named xf-adsk2016_x ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fonelab For Ios WORK Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fonelab For Ios WORK Crack.md
deleted file mode 100644
index 2b3353ede2c93164d0163fedc6117ae66914434c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fonelab For Ios WORK Crack.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-
How to Recover Lost Data from iOS Devices with FoneLab for iOS Crack
-
If you have ever lost or deleted important data from your iPhone, iPad, or iPod touch, you know how frustrating it can be. Whether it's because of accidental deletion, water damage, system crash, forgotten passcode, or any other reason, losing your precious data can be a nightmare.
Fortunately, there is a way to recover your lost data without spending a fortune on professional services or risking further damage to your device. FoneLab for iOS Crack is a powerful and reliable data recovery software that can help you restore your contacts, photos, messages, videos, music, notes, and more from any iOS device or iTunes/iCloud backup.
-
FoneLab for iOS Crack is easy to use and works with all iOS devices and iOS versions. You can download it for free from HaxPC.net and follow the simple steps below to recover your data in minutes.
-
Step 1: Download and install FoneLab for iOS Crack
-
Go to https://haxpc.net/fonelab-crack/ and download the FoneLab for iOS Crack file. Extract the file and run the setup to install the software on your computer. Launch the program and choose the "Recover from iOS Device" mode.
-
Step 2: Connect your iOS device to the computer
-
Use a USB cable to connect your iPhone, iPad, or iPod touch to the computer. The software will automatically detect your device and show its information on the interface. If your device is locked or disabled, you can use FoneLab iOS Unlocker Crack to remove the passcode or Apple ID first.
-
-
Step 3: Scan your device for lost data
-
Click the "Start Scan" button to let the software scan your device for lost or deleted data. The scanning process may take some time depending on the amount of data on your device. You can preview the scanned data by category on the left panel.
-
Step 4: Recover your data
-
Select the data you want to recover and click the "Recover" button. You can choose to recover the data to your computer or directly to your device. The software will start recovering your data and save it in the specified location. You can check the recovered data on your computer or device.
-
Congratulations! You have successfully recovered your lost data from your iOS device with FoneLab for iOS Crack. You can also use this software to recover data from iTunes or iCloud backup if you have one. FoneLab for iOS Crack is a lifesaver for anyone who wants to recover their precious data from their iOS devices without hassle.
-
-
Why Choose FoneLab for iOS Crack?
-
There are many data recovery software available on the market, but FoneLab for iOS Crack stands out for its features and benefits. Here are some of the reasons why you should choose FoneLab for iOS Crack to recover your lost data from your iOS devices:
-
-
It supports all iOS devices and iOS versions, including the latest iPhone 12 and iOS 14.
-
It can recover various types of data, such as contacts, photos, messages, videos, music, notes, WhatsApp, iMessage, call history, etc.
-
It can recover data from your device directly or from iTunes/iCloud backup.
-
It can recover data in different scenarios, such as accidental deletion, water damage, system crash, forgotten passcode, device lost/stolen, etc.
-
It can recover data without damaging your device or overwriting your data.
-
It can recover data quickly and easily with a few clicks.
-
It can preview the data before recovering it and selectively recover the data you want.
-
It can recover data to your computer or directly to your device.
-
-
With FoneLab for iOS Crack, you can rest assured that your data is safe and secure. You can download it for free from HaxPC.net and enjoy its full features without any limitations. FoneLab for iOS Crack is the best choice for anyone who wants to recover their lost data from their iOS devices with ease and efficiency.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk AutoCad 2019.1.1 (x86x64) Crack Keygen.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk AutoCad 2019.1.1 (x86x64) Crack Keygen.md
deleted file mode 100644
index a26f178d324c47648647ac6b8bc064443269cc3b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Autodesk AutoCad 2019.1.1 (x86x64) Crack Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Download and Install Sage 50 2014 (2015 - 2016 Academic Year) ... Learn Accounting in 1 HOUR First ... 1fdad05405
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/D16 Group PunchBOX V1.0.1 WiN OSX.md b/spaces/1gistliPinn/ChatGPT4/Examples/D16 Group PunchBOX V1.0.1 WiN OSX.md
deleted file mode 100644
index 426d28f72302d4e59652fd6a4bb927b3c5166363..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/D16 Group PunchBOX V1.0.1 WiN OSX.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Music Samples Germany: The PunchBox program can address both the pre-production clientele and the hands-on crowd. It contains many extras that allow musicians to bring in their own tonal colors, which is why it can be recommended to any musician.
-
Mix Magazine Germany: PunchBox is a must for every musician who produces electronic music. Besides the massive preset selection, we were able to create convincing-sounding bass drums with character for our own productions in no time. Alongside the very good presets, the included samples also play in the top league. Anyone looking for the right bass drum for their next trap, EDM, dubstep, or techno track will find a fitting solution with PunchBox in very little time. At a price of 79 euros, you don't have to think about it for long.
If you're obsessed (as we are at Sweetwater) with crafting the perfect bass drum sound, you'll love D16 Group's PunchBox plug-in. PunchBox combines sampling and synthesis in a virtual kick drum instrument that will revitalize your music. The samples are meticulously crafted using only the finest instruments and vintage analog gear. The kick synthesizers are based on D16's acclaimed emulations of classic Roland drum machines, customized and upgraded for deployment in PunchBox. The PunchBox audio engine consists of four sound generators, each of them dedicated to a key component of your kick sound.
-
PunchBox is the first of a suite of instruments that D16 Group have created, and it's easy to see why. The sounds are fun, easy to use, and easy to create. You can use the preset library to instantly get what you want, and you can always tweak the presets to exactly what you want. You get a lot of bang for your buck with this instrument. The fact that it can be used with a MIDI controller is a bonus, but the fact that it has so many features and is so easy to use makes it even more attractive.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Film India Kabhi Khushi Kabhie Gham Online Subtitrat.md b/spaces/1gistliPinn/ChatGPT4/Examples/Film India Kabhi Khushi Kabhie Gham Online Subtitrat.md
deleted file mode 100644
index 236a55faffbad1e53c823fee4e342fe3ee961537..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Film India Kabhi Khushi Kabhie Gham Online Subtitrat.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
film india kabhi khushi kabhie gham online subtitrat
-
-... Cr3ative Zone. The Crazy Ones Sezonul 1 Episodul 1, serial online subtitrat in Romana | Cr3ative Zone ... Robin Williams: Seven of his most memorable movie roles. Robin Williams ... HinduismIndiaFilme De Dragoste. Black Girl Digs Bollywood (BGDB): "Yeh Ladki Hai Allah" from "Kabhi Khushi Kabhie Gham... " (2001). 4d29de3e1b
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Asphalt Nitro 2 Mod Apk The Ultimate Guide to 60 FPS and Infinite Money.md b/spaces/1phancelerku/anime-remove-background/Asphalt Nitro 2 Mod Apk The Ultimate Guide to 60 FPS and Infinite Money.md
deleted file mode 100644
index 68e648539910a5956e3cf841c673102f56904863..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Asphalt Nitro 2 Mod Apk The Ultimate Guide to 60 FPS and Infinite Money.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
Asphalt Nitro 2 Mod APK 60 FPS: A Review
-
If you are a fan of racing games, you might have heard of Asphalt Nitro 2, a mobile game developed and published by Gameloft as part of the Asphalt series. But did you know that there is a modded version of the game that allows you to play it at 60 frames per second (FPS) and enjoy unlimited money and other features? In this article, we will review Asphalt Nitro 2 Mod APK 60 FPS, a modified version of the game that enhances your gaming experience. We will also tell you how to download and install it, and how to play it with some tips and tricks.
-
What is Asphalt Nitro 2?
-
A racing game for low-end devices
-
Asphalt Nitro 2 is an arcade racing game that was announced in 2021 and is currently available in beta for Android users. It is basically Asphalt but for low-end devices, as it offers so much excitement in a compact (50 MB) package. It is designed to run smoothly on a wide range of mobile devices, including phones with weaker hardware specs.
Asphalt Nitro 2 features top-notch graphics, 20 licensed supercars, four arcade game modes, and 230 races in gorgeous locations around New Zealand and Japan. You can drive famous supercar brands such as Lamborghini, Bugatti, Ferrari, and more, and perform crazy stunts while in the driver's seat. The game also features Asphalt 9's revolutionary TouchDrive technology, which streamlines car steering and allows you to play with just one hand on the screen. However, you can also turn off this mode in the settings if you prefer manual control.
-
What is Asphalt Nitro 2 Mod APK 60 FPS?
-
A modified version of the game
-
Asphalt Nitro 2 Mod APK 60 FPS is a modified version of the game that enhances your gaming experience by unlocking some features that are not available in the original version. For example, you can play the game at 60 FPS, which makes the graphics smoother and more realistic. You can also enjoy unlimited money, which means you can buy any car or upgrade you want without worrying about the cost. Moreover, you can access all the cars and tracks without having to complete any missions or challenges.
-
Benefits of the mod
-
The benefits of using Asphalt Nitro 2 Mod APK 60 FPS are obvious. You can have more fun playing the game with better graphics, more money, and more options. You can also save your time and effort by skipping the tedious tasks that are required to unlock the content in the original version. You can simply download and install the mod and start playing right away.
-
How to download and install Asphalt Nitro 2 Mod APK 60 FPS?
-
Steps to download and install
-
If you want to try Asphalt Nitro 2 Mod APK 60 FPS, you will need to follow these steps:
-
-
Go to this link and download the mod APK file.
-
Go to your device's settings and enable installation from unknown sources.
-
Locate the downloaded file in your file manager and tap on it to install it.
-
Wait for the installation to finish and launch the game.
-
Enjoy playing Asphalt Nitro 2 Mod APK 60 FPS.
-
-
Precautions and tips
-
Before you download and install the mod, you should take some precautions and tips into account:
-
-
Make sure you have enough storage space on your device to install the mod.
-
Make sure you have a stable internet connection to download the mod and play the game online.
-
Make sure you have a backup of your original game data in case something goes wrong with the mod.
-
Make sure you do not use the mod for any illegal or unethical purposes, such as cheating or hacking.
-
Make sure you do not update the game from the Play Store, as it may overwrite the mod and cause errors.
-
-
How to play Asphalt Nitro 2 Mod APK 60 FPS?
-
Game modes and tracks
-
Asphalt Nitro 2 Mod APK 60 FPS offers four game modes: Career, Quick Race, Multiplayer, and Events. In Career mode, you can complete various missions and challenges to earn money and reputation. In Quick Race mode, you can choose any track and car and race against AI opponents. In Multiplayer mode, you can race against other players online and compete for rankings and rewards. In Events mode, you can participate in limited-time events and win exclusive prizes.
-
The game also features 10 tracks in two locations: New Zealand and Japan. Each track has its own characteristics, such as curves, jumps, shortcuts, and obstacles. You can explore different routes and discover hidden secrets on each track. You can also customize the weather and time of day for each track.
-
Tips and tricks for beginners
-
If you are new to Asphalt Nitro 2 Mod APK 60 FPS, here are some tips and tricks that can help you improve your skills and performance:
-
-
-
Use nitro wisely. Nitro is a boost that can help you speed up and overtake your rivals. However, it is not unlimited and it can run out quickly. You can refill your nitro by performing stunts, such as drifting, jumping, knocking down opponents, or hitting nitro bottles on the track. You can also use different types of nitro, such as perfect nitro, shockwave nitro, or double nitro, depending on your situation.
-
Upgrade your cars. Upgrading your cars can improve their stats, such as speed, acceleration, handling, and nitro. You can upgrade your cars by spending money or using cards that you can obtain from races or events. You can also customize your cars by changing their color, decals, rims, or license plates.
-
Choose the right car for each track. Different cars have different strengths and weaknesses, such as top speed, acceleration, handling, or nitro efficiency. You should choose the car that suits your style and the track's conditions. For example, if the track has many curves and turns, you should choose a car with good handling and nitro efficiency. If the track has long straight roads, you should choose a car with high top speed and acceleration.
-
-
Conclusion
-
Asphalt Nitro 2 Mod APK 60 FPS is a modified version of Asphalt Nitro 2 that enhances your gaming experience by unlocking some features that are not available in the original version. You can play the game at 60 FPS, enjoy unlimited money, and access all the cars and tracks without having to complete any missions or challenges. You can also download and install the mod easily by following the steps we have provided in this article. However, you should also be careful and responsible when using the mod and follow the precautions and tips we have given you. We hope you have fun playing Asphalt Nitro 2 Mod APK 60 FPS.
-
FAQs
-
Here are some frequently asked questions about Asphalt Nitro 2 Mod APK 60 FPS:
-
-
Is Asphalt Nitro 2 Mod APK 60 FPS safe to use?
-Yes, Asphalt Nitro 2 Mod APK 60 FPS is safe to use as long as you download it from a trusted source and follow the precautions we have mentioned in this article. However, you should also be aware that using mods may violate the terms of service of the game and may result in bans or penalties from Gameloft.
-
Can I play Asphalt Nitro 2 Mod APK 60 FPS offline?
-No, Asphalt Nitro 2 Mod APK 60 FPS requires an internet connection to play online with other players or participate in events. However, you can play Career mode or Quick Race mode offline if you want.
-
Can I play Asphalt Nitro 2 Mod APK 60 FPS on iOS devices?
-No, Asphalt Nitro 2 Mod APK 60 FPS is only compatible with Android devices. However, you can play the original version of Asphalt Nitro 2 on iOS devices if you want.
-
What are the minimum requirements to play Asphalt Nitro 2 Mod APK 60 FPS?
-The minimum requirements to play Asphalt Nitro 2 Mod APK 60 FPS are the same as the original version of Asphalt Nitro 2. You will need an Android device with at least 1 GB of RAM, 50 MB of free storage space, and Android 4.4 or higher.
-
Where can I get more information about Asphalt Nitro 2 Mod APK 60 FPS?
You can get more information about Asphalt Nitro 2 Mod APK 60 FPS by visiting the official website of the mod or by joining the official Discord server of the mod. You can also watch some gameplay videos of the mod on YouTube or read some reviews of the mod on Reddit.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Doodle Army 2 Mini Militia Hile APK 4.3. 4 - The Ultimate Guide.md b/spaces/1phancelerku/anime-remove-background/Doodle Army 2 Mini Militia Hile APK 4.3. 4 - The Ultimate Guide.md
deleted file mode 100644
index 8fc501ab60eb0f73a49ff44fd4b004ad4d3f70f7..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Doodle Army 2 Mini Militia Hile APK 4.3. 4 - The Ultimate Guide.md
+++ /dev/null
@@ -1,125 +0,0 @@
-
-
What is Mini Militia Hile APK 4.3. 4?
-
If you are a fan of shooting games, you might have heard of Doodle Army 2: Mini Militia, a popular multiplayer game that lets you battle with up to 12 players online or offline. The game offers various modes, weapons, maps, and customization options to make your gaming experience more fun and exciting.
-
But what if you want to enjoy more features and advantages in the game? That's where Mini Militia Hile APK 4.3. 4 comes in handy. This is a modded version of the original game that gives you unlimited access to everything in the game, such as ammo, health, jetpack, pro pack, and more. With this modded version, you can dominate the battlefield and have more fun with your friends.
Why should you download Mini Militia Hile APK 4.3. 4?
-
There are many reasons why you should download Mini Militia Hile APK 4.3. 4 on your Android device. Here are some of them:
-
-
You can get unlimited ammo, health, jetpack, pro pack, and other resources in the game.
-
You can unlock all the weapons, skins, avatars, and maps in the game.
-
You can play online or offline with your friends or other players from around the world.
-
You can customize your character and your gameplay settings according to your preferences.
-
You can enjoy a smooth and lag-free gaming experience with no ads or bugs.
-
-
How to download and install Mini Militia Hile APK 4.3. 4?
-
Downloading and installing Mini Militia Hile APK 4.3. 4 is very easy and simple. Just follow these steps:
-
-
Go to this link and download the APK file of Mini Militia Hile APK 4.3. 4 on your Android device.
-
Before installing the APK file, make sure you enable the "Unknown Sources" option in your device settings.
-
After enabling the option, locate the downloaded APK file and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to complete.
-
Once the installation is done, launch the game and enjoy playing Mini Militia Hile APK 4.3. 4.
-
-
Here are some screenshots of the installation process:
-
-
-
-
How to play Mini Militia Hile APK 4.3. 4?
-
Playing Mini Militia Hile APK 4.3. 4 is very similar to playing the original game, except that you have more features and advantages in the modded version. Here are some tips and tricks for playing Mini Militia Hile APK 4.3. 4:
-
-
Choose your mode wisely: You can play in different modes such as survival, deathmatch, team deathmatch, capture the flag, etc. Choose the mode that suits your skills and preferences.
-
Use your weapons smartly: You can use various weapons such as sniper rifles, shotguns, rocket launchers, flamethrowers, etc. Use them wisely and strategically to defeat your enemies. You can also switch between weapons by tapping on the weapon icon.
-
Use your jetpack wisely: You can use your jetpack to fly and dodge enemy attacks. You can also use it to reach higher places and ambush your enemies. However, be careful not to run out of fuel or get hit by enemy fire.
-
Use your pro pack wisely: You can use your pro pack to access more features and advantages in the game, such as dual wield, extra avatar customization, etc. However, be careful not to abuse it or get banned by the game developers.
-
Use your skills wisely: You can use your skills to improve your performance and survival in the game, such as aiming, dodging, hiding, reloading, etc. Practice and master these skills to become a better player.
-
-
What are the pros and cons of Mini Militia Hile APK 4.3. 4?
-
Like any other modded version of a game, Mini Militia Hile APK 4.3. 4 has its own pros and cons. Here are some of them:
-
-
-
Pros
-
Cons
-
-
-
You can enjoy more features and advantages in the game.
-
You might face some compatibility issues with some devices or versions of the game.
-
-
-
You can have more fun and excitement with your friends or other players.
-
You might get banned by the game developers if they detect your modded version.
-
-
-
You can improve your skills and strategies in the game.
-
You might lose the challenge and thrill of the game if you use too many cheats or hacks.
-
-
-
Conclusion
-
Mini Militia Hile APK 4.3. 4 is a modded version of Doodle Army 2: Mini Militia, a popular multiplayer shooting game that lets you battle with up to 12 players online or offline. The modded version gives you unlimited access to everything in the game, such as ammo, health, jetpack, pro pack, and more. With this modded version, you can dominate the battlefield and have more fun with your friends.
If you want to download and install Mini Militia Hile APK 4.3. 4 on your Android device, you can follow the step-by-step guide with screenshots that we provided in this article. You can also follow the tips and tricks that we shared to play the game better and smarter. However, you should also be aware of the pros and cons of the modded version and use it responsibly and ethically.
-
We hope you enjoyed reading this article and learned something new about Mini Militia Hile APK 4.3. 4. If you have any questions or feedback, feel free to leave a comment below. Thank you for your time and attention!
-
FAQs
-
-
Q: Is Mini Militia Hile APK 4.3. 4 safe to download and use?
-
A: Yes, Mini Militia Hile APK 4.3. 4 is safe to download and use as long as you download it from a trusted source and scan it with an antivirus before installing it on your device.
-
Q: Is Mini Militia Hile APK 4.3. 4 compatible with all Android devices?
-
A: No, Mini Militia Hile APK 4.3. 4 may not be compatible with some Android devices or versions of the game. You should check the compatibility before downloading and installing it on your device.
-
Q: Can I play online with other players using Mini Militia Hile APK 4.3. 4?
-
A: Yes, you can play online with other players using Mini Militia Hile APK 4.3. 4 as long as they are also using the same modded version of the game.
-
Q: Can I update Mini Militia Hile APK 4.3. 4 to the latest version of the game?
-
A: No, you cannot update Mini Militia Hile APK 4.3. 4 to the latest version of the game as it may cause some errors or crashes in the game. You should wait for the modded version to be updated by its developers before updating it on your device.
-
Q: Can I uninstall Mini Militia Hile APK 4.3. 4 from my device?
-
A: Yes, you can uninstall Mini Militia Hile APK 4.3. 4 from your device by following the same steps that you used to install it. You can also delete the APK file from your device after uninstalling it.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Getting Over It with Bennett Foddy MOD APK 1.9.6 - All Unlocked.md b/spaces/1phancelerku/anime-remove-background/Download Getting Over It with Bennett Foddy MOD APK 1.9.6 - All Unlocked.md
deleted file mode 100644
index d31cbd81f158798127f30afc27c4617f2608b3a9..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Getting Over It with Bennett Foddy MOD APK 1.9.6 - All Unlocked.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Getting Over It with Bennett Foddy: A Guide to Downloading and Playing the Latest Mod APK
-
If you are looking for a game that will test your patience, skill, and perseverance, then you might want to try Getting Over It with Bennett Foddy. This is a game that will make you rage, laugh, cry, and celebrate as you climb up a mountain with nothing but a hammer and a pot. In this article, we will tell you everything you need to know about this game, including how to download and play the latest mod APK version that offers some extra features and advantages.
-
What is Getting Over It with Bennett Foddy?
-
A brief introduction to the game and its creator
-
Getting Over It with Bennett Foddy is a punishing climbing game that was released in 2017 by Bennett Foddy, an Australian game developer and professor of game design. The game is inspired by a 2002 B-Game classic called Sexy Hiking, which was created by Jazzuo. The game is also a homage to other games that are known for their difficulty and frustration, such as QWOP, Flappy Bird, and Dark Souls.
The game has a simple premise: you control a man named Diogenes who is stuck in a metal pot. You use your mouse to move a hammer that can hook onto objects and surfaces. Your goal is to climb up an enormous mountain that is filled with various obstacles, such as rocks, trees, furniture, pipes, barrels, and more. The game has no checkpoints or save points, so if you fall down, you have to start over from where you landed. The game also has no end, so you can keep climbing as long as you want.
-
The game is designed to be hard and frustrating, as it requires precise mouse movements and timing. The physics of the game are also unpredictable and sometimes unfair, as you can slip, bounce, or fly off in unexpected directions. The game also features a voice-over commentary by Bennett Foddy himself, who will make philosophical observations, sarcastic remarks, or motivational quotes depending on your progress. Some players may find his voice soothing and helpful, while others may find it annoying and mocking.
-
The rewards and achievements of the game
-
The game does not have any explicit rewards or achievements for completing it, but it does offer some hidden surprises and secrets for those who manage to reach the top of the mountain. There is also a sense of satisfaction and accomplishment that comes from overcoming the challenges and difficulties of the game. The game also allows you to share your success or failure with other players through online leaderboards or chat rooms.
-
-
Why download the latest mod APK?
-
The benefits of using a modded version of the game
-
A modded version of the game is a modified version that has some changes or additions that are not present in the original version. A modded version can offer some benefits for players who want to have a different or better experience with the game. For example, a modded version can:
-
-
Unlock all the features and content of the game without paying any money
-
Remove any ads or in-app purchases that may interrupt or distract you from the game
-
Add some extra features or options that can enhance the gameplay experience, such as custom skins, cheats, hacks, or mods
-
Fix some bugs or errors that may affect the performance or stability of the game
-
Update the game to the latest version that may have new content or improvements
-
-
The mod features that enhance the gameplay experience
-
The latest mod APK for Getting Over It with Bennett Foddy has some amazing features that can make the game more enjoyable and fun. Some of these features are:
-
-
Unlimited coins: You can get unlimited coins that you can use to buy different items or skins in the game
-
Unlimited lives: You can get unlimited lives that you can use to continue playing the game even if you fall down
-
No fall damage: You can avoid any damage or injury that may occur when you fall down from a high place
-
No gravity: You can defy the laws of physics and float in the air as you swing your hammer
-
No obstacles: You can remove any obstacles or barriers that may block your way or slow you down
-
Speed hack: You can increase or decrease the speed of your movement or hammer as you wish
-
Zoom hack: You can zoom in or out of the screen as you want to see more or less of the environment
-
Skip level: You can skip any level that you find too hard or boring and go to the next one
-
God mode: You can become invincible and immune to any harm or danger
-
-
The compatibility and security of the mod APK
-
The latest mod APK for Getting Over It with Bennett Foddy is compatible with most Android devices that have Android 4.1 or higher. The mod APK file size is about 120 MB, so you need to have enough storage space on your device. The mod APK is also safe and secure to use, as it does not contain any viruses, malware, or spyware. The mod APK does not require any root access or special permissions to install or run.
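If you plan to sideload the file from a computer, you can sanity-check these requirements over adb first. Below is a minimal sketch, assuming the Android platform-tools (adb) are installed and USB debugging is enabled on the phone; Android 4.1 corresponds to API level 16, and the df output is only printed for eyeballing because its column layout varies between Android versions.

```python
# Rough pre-install check over adb: a sketch, not part of the game or the mod itself.
import subprocess

MIN_SDK = 16  # Android 4.1 ("Jelly Bean") is API level 16

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

sdk_level = int(adb("shell", "getprop", "ro.build.version.sdk"))
print(f"Device API level: {sdk_level} (the mod APK needs at least {MIN_SDK})")

# df's column layout differs between Android versions, so just print the raw line
# and check that the available space comfortably exceeds the ~120 MB file size.
print(adb("shell", "df", "/data").splitlines()[-1])
```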
-
How to download and install the latest mod APK?
-
The steps to find and download the mod APK file
-
If you want to download and install the latest mod APK for Getting Over It with Bennett Foddy, you need to follow these simple steps:
-
-
Go to a reliable and trusted website that offers the mod APK file for Getting Over It with Bennett Foddy. You can search for it on Google or use this link:
-
Click on the download button and wait for the download process to complete. You may need to enable the unknown sources option in your device settings to allow the installation of third-party apps.
-
Locate the downloaded mod APK file on your device storage and tap on it to open it.
-
-
The steps to install and run the mod APK file
-
After you have downloaded the mod APK file, you need to install and run it on your device. Here are the steps to do so; a command-line alternative for sideloading the file from a computer is sketched after the list:
-
-
Follow the instructions on the screen and agree to the terms and conditions to install the mod APK file.
-
Wait for the installation process to finish and then launch the game from your app drawer or home screen.
-
Enjoy playing Getting Over It with Bennett Foddy with all the mod features enabled.
-
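If the on-device installer gives you trouble, the same file can be sideloaded from a computer with adb, as mentioned above. This is only a sketch, assuming adb is installed and the device is connected with USB debugging enabled; the file name is a placeholder for whatever your download is actually called.

```python
# Sideload an APK from a computer with adb (sketch only; the file name is hypothetical).
import subprocess

APK_PATH = "getting_over_it_mod.apk"  # placeholder for the downloaded file

# "-r" reinstalls over an existing copy of the app while keeping its data.
result = subprocess.run(["adb", "install", "-r", APK_PATH], capture_output=True, text=True)
print(result.stdout or result.stderr)  # adb prints "Success" when the install works
```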
How to enjoy the game and have fun with it
-
The final and most important aspect of playing Getting Over It with Bennett Foddy is to enjoy the game and have fun with it. The game is not meant to be torture or punishment, but a challenge and a reward. Here are some tips and tricks to help you with that:
-
-
Appreciate the game's art and design, which are inspired by real-life locations, objects, and artworks. You can also admire the game's graphics and sound effects, which are realistic and immersive.
-
Explore the game's world and discover its secrets and easter eggs. You can also try to find different paths or shortcuts that can lead you to new places or surprises.
-
Express yourself and your creativity through the game. You can customize your pot and hammer with different skins or items that you can buy or unlock. You can also use the game as a platform to create your own art or content, such as videos, memes, or fan art.
-
Connect with other players and the game's community. You can join online chat rooms or leaderboards to chat or compete with other players. You can also watch or follow other players' streams or videos to learn from them or support them.
-
Challenge yourself and set your own goals or rules. You can try to beat the game in the fastest time possible, or in the most difficult way possible. You can also try to play the game with different settings or modes, such as inverted controls, no mouse, or blindfolded.
-
-
Conclusion and FAQs
-
In conclusion, Getting Over It with Bennett Foddy is a game that will make you experience a range of emotions and sensations, from anger and frustration to joy and satisfaction. It is a game that will challenge your patience, skill, and perseverance, but also reward you with a unique and memorable experience. If you want to play this game with some extra features and advantages, you can download and install the latest mod APK version that we have explained in this article. We hope that this article has helped you understand more about this game and how to play it better. Here are some FAQs that you may have:
-
-
Q: How long does it take to beat the game?
A: It depends on your skill level and luck, but some players have reported beating the game in less than 10 minutes, while others have spent hours or days on it.
-
Q: Is there a way to save or pause the game?
A: No, there is no way to save or pause the game. The game is meant to be played in one sitting, without any interruptions or distractions.
-
Q: Is there a multiplayer mode in the game?
A: No, there is no multiplayer mode in the game. The game is meant to be played solo, without any help or interference from other players.
-
Q: Is there a sequel or a spin-off of the game?
A: No, there is no sequel or a spin-off of the game. The game is meant to be a standalone project, without any plans for future updates or expansions.
-
Q: Is there a way to contact Bennett Foddy or give him feedback?
A: Yes, you can contact Bennett Foddy through his website (https://www.foddy.net/) or his Twitter account (@bfod). You can also give him feedback through his email (fod@foddy.net) or his Steam page (https://store.steampowered.com/app/240720/Getting_Over_It_with_Bennett_Foddy/).
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Experience the Epic Story of Seven Deadly Sins Grand Cross - APK Download Available.md b/spaces/1phancelerku/anime-remove-background/Experience the Epic Story of Seven Deadly Sins Grand Cross - APK Download Available.md
deleted file mode 100644
index a4a7197b21be6d22717957536b0e03bcd6dafc72..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Experience the Epic Story of Seven Deadly Sins Grand Cross - APK Download Available.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
Seven Deadly Sins Grand Cross APK Download: A Cinematic Anime Game for Mobile
-
If you are a fan of anime and manga, you might have heard of The Seven Deadly Sins, a popular series that follows the adventures of a group of legendary knights in a fantasy world. If you want to experience the story and battles of The Seven Deadly Sins on your mobile device, you should check out Seven Deadly Sins Grand Cross, a cinematic anime game that will immerse you in the world of Britannia. In this article, we will explain what Seven Deadly Sins Grand Cross is, how to download its APK file, what its main features are, some tips and tricks for playing it, and what reviewers think of it.
-
What is Seven Deadly Sins Grand Cross?
-
Seven Deadly Sins Grand Cross is a mobile RPG based on the popular anime and manga series The Seven Deadly Sins. It is developed by Netmarble, a leading mobile game company, and is available on Android and iOS platforms. Here are some of the reasons why you should play Seven Deadly Sins Grand Cross:
A mobile RPG based on the popular anime and manga series
-
Seven Deadly Sins Grand Cross lets you play as Meliodas, the leader of the Seven Deadly Sins, and his companions as they embark on an epic quest to save the kingdom from the tyranny of the Holy Knights. You will meet familiar characters from the series, such as Elizabeth, Ban, King, Diane, Gowther, Merlin, Escanor, Hawk, and many more. You will also encounter enemies and allies from different races, such as humans, fairies, giants, demons, goddesses, vampires, etc. You will be able to relive the memorable scenes and events from the anime and manga, such as the Boar Hat Tavern, the Forest of White Dreams, the Capital of the Dead, etc.
-
A game that recreates the original story and battles with high-quality 3D graphics and voice acting
-
Seven Deadly Sins Grand Cross is not just a simple adaptation of the series. It is a game that recreates the original story and battles with high-quality 3D graphics and voice acting. The game uses a cinematic approach to present the story, with cutscenes that feature stunning animations and dialogues. The game also uses a card-based combat system that allows you to use different skills and ultimate moves based on your character's abilities. The game also includes original voice dialogues from the voice actors of the anime series, such as Yuki Kaji, Sora Amamiya, Misaki Kuno, Aoi Yuki, Tatsuhisa Suzuki, Jun Fukuyama, Yuhei Takagi, Maaya Sakamoto, and Tomokazu Sugita. You will feel like you are watching the anime as you play the game.
-
A game that offers various features and content for fans and newcomers alike
-
Seven Deadly Sins Grand Cross is not just a game for fans of the series. It is also a game that offers various features and content for newcomers and casual players. You can explore the vast world of Britannia and interact with different characters and locations. You can also customize your own tavern and collect various items and costumes. You can also join a knighthood and cooperate with other players in guild wars and events. You can also enjoy mini-games, such as cooking, fishing, card battles, etc. There is always something new and exciting to do in Seven Deadly Sins Grand Cross.
-
How to download Seven Deadly Sins Grand Cross APK?
-
If you want to play Seven Deadly Sins Grand Cross on your mobile device, you will need to download its APK file. APK stands for Android Package Kit, which is a file format that contains all the elements needed to install an app on an Android device. Here are some of the ways you can download Seven Deadly Sins Grand Cross APK:
-
The official sources for Android and iOS devices
-
The easiest and safest way to get Seven Deadly Sins Grand Cross is to use the official stores for Android and iOS devices. Simply open the Google Play Store or the App Store, search for the game, tap the install button, and wait for the download to finish (on iOS the game is installed directly from the App Store, so no APK file is involved). You will need about 4 GB of free space on your device to install the game, and a stable internet connection to play it online.
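For the Android route, it helps to know that an APK is simply a ZIP archive with a fixed internal layout, so you can peek inside a downloaded file with nothing more than the Python standard library. A small sketch, with the file name standing in for whichever APK you end up with:

```python
# List a few entries of an APK; APKs are ZIP archives, so the standard zipfile module can read them.
import zipfile

APK_PATH = "grand_cross.apk"  # placeholder file name

with zipfile.ZipFile(APK_PATH) as apk:
    for name in apk.namelist()[:10]:  # the first few entries are enough to get the idea
        print(name)
    # Every valid APK contains a binary manifest describing the package name and permissions.
    print("has AndroidManifest.xml:", "AndroidManifest.xml" in apk.namelist())
```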
-
The alternative sources for Android devices
-
If you cannot access the official sources for some reason, or if you want to download an older version of the game, you can use alternative sources for Android devices. These are websites that offer APK files of various apps and games for free. However, you should be careful when using these sources, as some of them may contain malware or viruses that can harm your device or steal your personal information. You should only use trusted and reputable websites that have positive reviews and ratings from other users. Some examples of these websites are APKPure.com, APKMirror.com, and APKCombo.com. To download Seven Deadly Sins Grand Cross APK from these websites, you will need to follow these steps:
- - Go to the website of your choice and search for Seven Deadly Sins Grand Cross.
- - Choose the version of the game that you want to download and tap on the download button.
- - Wait for the download to finish and locate the APK file on your device.
- - Before installing the APK file, you will need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
- - Tap on the APK file and follow the instructions to install the game.
- - Enjoy playing Seven Deadly Sins Grand Cross on your device.
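Because third-party mirrors sometimes repackage files, it is worth comparing the download against the checksum the site publishes (when it does publish one) before you install it. A minimal sketch; both the file name and the expected hash below are placeholders:

```python
# Verify a downloaded APK against a published SHA-256 checksum before installing it.
import hashlib

APK_PATH = "grand_cross.apk"  # placeholder for the downloaded file
EXPECTED_SHA256 = "replace-with-the-hash-published-by-the-mirror"  # placeholder

sha256 = hashlib.sha256()
with open(APK_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash the file in 1 MiB chunks
        sha256.update(chunk)

digest = sha256.hexdigest()
print("computed:", digest)
print("match:   ", digest.lower() == EXPECTED_SHA256.lower())
```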
The precautions and requirements for installing the APK file
-
Before installing Seven Deadly Sins Grand Cross APK on your device, you should take some precautions and meet some requirements to ensure a smooth and safe gaming experience. Here are some of them:
-
How to install seven deadly sins grand cross on android
-Seven deadly sins grand cross apk mod unlimited gems
-Best characters in seven deadly sins grand cross game
-Seven deadly sins grand cross pc version download free
-Seven deadly sins grand cross tips and tricks for beginners
-Seven deadly sins grand cross anime vs game comparison
-Seven deadly sins grand cross global release date and news
-Seven deadly sins grand cross tier list and guide
-Seven deadly sins grand cross gameplay and review
-Seven deadly sins grand cross hack and cheats online
-Seven deadly sins grand cross official website and support
-Seven deadly sins grand cross reddit community and discussion
-Seven deadly sins grand cross manga and novel adaptation
-Seven deadly sins grand cross update and patch notes
-Seven deadly sins grand cross events and rewards
-Seven deadly sins grand cross costumes and skins
-Seven deadly sins grand cross pvp and guild wars
-Seven deadly sins grand cross reroll and gacha system
-Seven deadly sins grand cross codes and coupons
-Seven deadly sins grand cross emulator and controller support
-Seven deadly sins grand cross story mode and quests
-Seven deadly sins grand cross netmarble account and login
-Seven deadly sins grand cross soundtrack and voice actors
-Seven deadly sins grand cross wallpapers and fan art
-Seven deadly sins grand cross ratings and reviews on app store
-Seven deadly sins grand cross system requirements and compatibility
-Seven deadly sins grand cross error and bug fixes
-Seven deadly sins grand cross data transfer and backup
-Seven deadly sins grand cross collaboration and crossover events
-Seven deadly sins grand cross merchandise and products
- - Make sure that your device meets the minimum system requirements for the game. According to the official website, you will need at least Android 4.4 or iOS 9.0, 2 GB of RAM, 4 GB of free space, and a compatible processor.
- - Make sure that your device has enough battery power or is plugged into a charger while installing the game.
- - Make sure that your device has a stable internet connection while downloading and installing the game.
- - Make sure that you have enough data or Wi-Fi bandwidth to download the game, as it is quite large in size.
- - Make sure that you have enough storage space on your device to install the game and its updates.
- - Make sure that you back up your data before installing the game, in case something goes wrong or you need to uninstall it later.
- - Make sure that you scan the APK file with an antivirus or security app before installing it, to check for any malware or viruses (a command-line verification sketch follows this list).
- - Make sure that you only install the game from trusted sources, as mentioned above.
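For the "scan the APK before installing it" step, one alternative to an antivirus app is to inspect the file's signing certificate with apksigner, a tool that ships with the Android SDK build-tools. The sketch below assumes apksigner is on your PATH and uses a placeholder file name; it only prints the certificate details so you can judge whether the signer looks legitimate before sideloading.

```python
# Print the signing certificate of an APK with apksigner (from the Android SDK build-tools).
import subprocess

APK_PATH = "grand_cross.apk"  # placeholder file name

result = subprocess.run(
    ["apksigner", "verify", "--print-certs", APK_PATH],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
# A tampered or unsigned APK fails verification; an unfamiliar signer is a good reason not to install it.
```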
What are the main features of Seven Deadly Sins Grand Cross?
-
Seven Deadly Sins Grand Cross is a game that offers a lot of features and content for players to enjoy. Here are some of the main features of the game:
-
Dynamic combat with skill rank up system and ultimate moves
-
Seven Deadly Sins Grand Cross uses a card-based combat system that allows you to use different skills and ultimate moves based on your character's abilities. You can choose from four cards per turn, each with a different effect and cost. You can also combine cards of the same type to rank them up and increase their power and range. You can also use ultimate moves that are unique to each character and can deal massive damage to your enemies. The combat system is dynamic and strategic, as you have to consider the enemy's attributes, the card order, the card fusion, the card effects, etc.
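As a rough mental model of the rank-up rule described above (a toy illustration, not the game's actual implementation), you can think of two adjacent cards with the same skill and rank merging into a single card of the next rank:

```python
# Toy model of card rank-up fusion: two adjacent cards with the same skill and rank
# merge into one card of the next rank. Illustration only, not game code.
from dataclasses import dataclass

@dataclass
class Card:
    skill: str
    rank: int = 1

def fuse_adjacent(hand: list[Card]) -> list[Card]:
    """Merge the first adjacent pair with matching skill and rank, if any."""
    for i in range(len(hand) - 1):
        a, b = hand[i], hand[i + 1]
        if a.skill == b.skill and a.rank == b.rank:
            return hand[:i] + [Card(a.skill, a.rank + 1)] + hand[i + 2:]
    return hand

hand = [Card("slash"), Card("slash"), Card("heal"), Card("counter")]
print(fuse_adjacent(hand))  # the two rank-1 "slash" cards become one rank-2 "slash" card
```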
-
Various PvE systems that reflect the original anime
-
Seven Deadly Sins Grand Cross offers various PvE systems that reflect the original anime and manga series. You can follow the main quest line that follows the story of The Seven Deadly Sins, or you can explore the side quests that feature different characters and events. You can also participate in special events that are based on the anime episodes, such as the Vaizel Fighting Festival, the Kingdom Infiltration Arc, etc. You can also challenge various bosses and enemies that appear in the series, such as the Demon Clan, the Ten Commandments, etc. You can also collect various rewards and items from completing these PvE systems.
-
Unique character appearances and costumes
-
Seven Deadly Sins Grand Cross features unique character appearances and costumes that are faithful to the original anime and manga series. You can collect and customize various characters from the series, each with their own skills, stats, and personalities. You can also unlock and equip different costumes for your characters, such as their original outfits, their casual outfits, their seasonal outfits, etc. You can also change their hairstyles, accessories, weapons, etc. You can also view your characters in 3D models and interact with them in various ways.
-
Thorough and authentic implementation of the original anime
-
Seven Deadly Sins Grand Cross is a game that is thorough and authentic in implementing the original anime and manga series. The game uses high-quality 3D graphics and voice acting to recreate the original story and battles of The Seven Deadly Sins. The game also includes original soundtracks and sound effects from the anime series, such as the opening and ending songs, the background music, the character voices, etc. The game also includes original scenes and dialogues from the anime series, such as the comedic moments, the emotional moments, the plot twists, etc. The game also includes original content and stories that are exclusive to the game, such as new characters, new events, new quests, etc.
-
Real-time PvP and guild content
-
Seven Deadly Sins Grand Cross is not only a game for solo players. It is also a game that offers real-time PvP and guild content for multiplayer players. You can compete with other players in various PvP modes, such as Death Match, Elite Demon Battle, Knighthood Boss Battle, etc. You can also join a knighthood and cooperate with other players in guild wars and events. You can also chat with other players in real-time and share your strategies and tips. You can also trade items and cards with other players in the market.
-
What are some tips and tricks for playing Seven Deadly Sins Grand Cross?
-
If you are new to Seven Deadly Sins Grand Cross or want to improve your gameplay skills, here are some tips and tricks for playing the game:
-
Prioritize the main quest line
-
The main quest line is the best way to progress through the game and unlock new features and content. The main quest line follows the story of The Seven Deadly Sins and rewards you with various items and resources, such as gold, gems, stamina potions, equipment, etc. The main quest line also unlocks new areas and locations for you to explore and complete side quests. The main quest line also increases your player level and rank, which allows you to access more content and modes.
-
Create card fusions without forcing them
-
Card fusion is a key element of the combat system in Seven Deadly Sins Grand Cross. Card fusion allows you to combine cards of the same type to rank them up and increase their power and range. However, you should not force card fusion by using cards that are not optimal for the situation. For example, you should not use a heal card to create a fusion if you do not need to heal. You should also not use a debuff card to create a fusion if the enemy is immune to debuffs. You should always consider the enemy's attributes, the card effects, and the card order before creating card fusions. You should also save some cards for the next turn, as they will be automatically ranked up.
-
Put the auto battle and x2 speed feature to good use
-
Seven Deadly Sins Grand Cross has an auto battle and x2 speed feature that can help you save time and effort when playing the game. The auto battle feature allows the game to choose and use cards for you based on a preset strategy. The x2 speed feature allows the game to run faster and skip some animations. You can use these features when you are farming resources, completing easy quests, or replaying stages that you have already cleared. However, you should not rely on these features too much, as they may not be optimal for some situations. For example, you should not use the auto battle feature when you are facing a boss or a difficult enemy, as the game may not use the best cards or strategy for you. You should also not use the x2 speed feature when you are watching cutscenes or enjoying the story, as you may miss some important details or emotions.
-
Manage your resources wisely
-
Seven Deadly Sins Grand Cross is a game that requires you to manage your resources wisely. You will need various resources to upgrade your characters, equipment, tavern, etc. Some of the main resources are gold, gems, stamina, anvils, hammers, awakening stones, etc. You can obtain these resources from various sources, such as quests, events, rewards, shops, etc. However, you should not spend these resources recklessly, as they may be limited or scarce. You should always prioritize the most important or urgent upgrades and save some resources for future needs. You should also avoid wasting resources on unnecessary or inefficient upgrades.
-
Join a knighthood and participate in events
-
Seven Deadly Sins Grand Cross is a game that encourages you to join a knighthood and participate in events. A knighthood is a guild that allows you to cooperate and communicate with other players. You can join an existing knighthood or create your own knighthood with your friends. By joining a knighthood, you can access various benefits and features, such as guild wars, guild bosses, guild shop, guild chat, etc. You can also earn guild coins and guild points that can be used to buy items or rank up your knighthood. By participating in events, you can access various content and rewards that are exclusive to the event period. You can participate in events such as festivals, collabs, special quests, etc. You can also earn event coins and event points that can be used to buy items or exchange for prizes.
-
What are some reviews of Seven Deadly Sins Grand Cross?
-
Seven Deadly Sins Grand Cross is a game that has received positive reviews from critics and players alike. Here are some of the reviews of the game:
-
A positive review from TheGamer.com
-
TheGamer.com gave Seven Deadly Sins Grand Cross a score of 4 out of 5 stars and praised its graphics, combat system, story mode, and voice acting. The reviewer wrote:
-
-
"Seven Deadly Sins: Grand Cross is one of the best looking anime games on the market right now...The combat system is simple yet satisfying...The story mode is well done and faithful to the source material...The voice acting is top notch..."
-
-
A positive review from IGN.com
-
IGN.com gave Seven Deadly Sins Grand Cross a score of 8 out of 10 and praised its gameplay variety , graphics, story, and characters. The reviewer wrote:
-
-
"Seven Deadly Sins: Grand Cross is a well-made and polished RPG that offers a lot of gameplay variety...The graphics are stunning and the animations are smooth...The story is engaging and faithful to the anime...The characters are diverse and likable..."
-
-
A positive review from KINCIR.com
-
KINCIR.com gave Seven Deadly Sins Grand Cross a score of 8.5 out of 10 and praised its gameplay mechanics, customization options, and sound quality. The reviewer wrote:
-
-
"Seven Deadly Sins: Grand Cross is a game that has a lot of gameplay mechanics that are fun and challenging...The customization options are abundant and satisfying...The sound quality is excellent and immersive..."
-
-
A positive review from Metacritic.com
-
Metacritic.com gave Seven Deadly Sins Grand Cross a score of 86 out of 100 based on the ratings of 12 critics and 32 users. The website also showed some of the positive user reviews, such as:
-
-
"This game is amazing. The graphics are beautiful, the gameplay is smooth, the story is captivating, and the characters are awesome. I love this game so much."
-
"This game is one of the best anime games I have ever played. It has everything I want in a game: great story, great combat, great customization, great voice acting, great music, etc. I highly recommend this game to anyone who likes anime or RPGs."
-
"This game is a masterpiece. It is a perfect adaptation of the anime and manga series. It is a game that respects the fans and the source material. It is a game that deserves more recognition and appreciation."
-
-
Conclusion
-
Seven Deadly Sins Grand Cross is a cinematic anime game for mobile that is based on the popular anime and manga series The Seven Deadly Sins. It is a game that recreates the original story and battles with high-quality 3D graphics and voice acting. It is a game that offers various features and content for fans and newcomers alike. It is a game that has received positive reviews from critics and players alike. If you want to play Seven Deadly Sins Grand Cross on your mobile device, you can download its APK file from the official sources or the alternative sources, as long as you take some precautions and meet some requirements. You can also use some tips and tricks to improve your gameplay skills and enjoy the game more. Seven Deadly Sins Grand Cross is a game that will immerse you in the world of Britannia and make you feel like you are part of The Seven Deadly Sins.
-
FAQs
-
Here are some of the frequently asked questions about Seven Deadly Sins Grand Cross:
-
Q: Is Seven Deadly Sins Grand Cross free to play?
-
A: Yes, Seven Deadly Sins Grand Cross is free to play. However, it also offers in-app purchases that can enhance your gaming experience.
-
Q: Is Seven Deadly Sins Grand Cross available in my country?
-
A: Seven Deadly Sins Grand Cross is available in most countries around the world. However, some regions may have different versions or servers of the game. You can check the official website or the official social media pages for more information.
-
Q: Is Seven Deadly Sins Grand Cross compatible with my device?
-
A: Seven Deadly Sins Grand Cross is compatible with most Android and iOS devices that meet the minimum system requirements. However, some devices may experience performance issues or bugs due to various factors. You can check the official website or contact the customer support for more information.
-
Q: How can I contact the customer support of Seven Deadly Sins Grand Cross?
-
A: You can contact the customer support of Seven Deadly Sins Grand Cross by using the in-game inquiry feature or by sending an email to cs@netmarble.com.
-
Q: How can I get more information about Seven Deadly Sins Grand Cross?
-
A: You can get more information about Seven Deadly Sins Grand Cross by visiting the official website, following the official social media pages, joining the official community forums, or watching the official YouTube channel.
-
-
\ No newline at end of file
diff --git a/spaces/A00001/bingothoo/src/components/turn-counter.tsx b/spaces/A00001/bingothoo/src/components/turn-counter.tsx
deleted file mode 100644
index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/src/components/turn-counter.tsx
+++ /dev/null
@@ -1,23 +0,0 @@
-import React from 'react'
-import { Throttling } from '@/lib/bots/bing/types'
-
-export interface TurnCounterProps {
- throttling?: Throttling
-}
-
-export function TurnCounter({ throttling }: TurnCounterProps) {
- if (!throttling) {
- return null
- }
-
-  // The original JSX markup was stripped in this copy; the markup below is a minimal
-  // reconstruction, and the two field names are assumed from the imported Throttling type.
-  return (
-    <div>
-      {throttling.numUserMessagesInConversation} / {throttling.maxNumUserMessagesInConversation}
-    </div>
-  )
-}
diff --git a/spaces/AFCMEgypt/WCB/app.py b/spaces/AFCMEgypt/WCB/app.py
deleted file mode 100644
index 1a398b975a00b294264ef5c3660bc5a7b16c4ea5..0000000000000000000000000000000000000000
--- a/spaces/AFCMEgypt/WCB/app.py
+++ /dev/null
@@ -1,122 +0,0 @@
-
-#Import Required Packages
-import numpy as np
-import gradio as gr
-#from google.colab.patches import cv2_imshow
-import cv2
-import matplotlib.pyplot as plt
-import skimage
-import imutils
-from imutils import contours
-import math
-def cube (v):
- return v**3
-def sqrtabs (v) :
- return math.sqrt(abs(v))
-def figplota(xvalues):
- fig = plt.figure()
- plt.plot(xvalues, figure=fig)
- return fig
-def quant(imageinput):
- #@title Please Input the Lateral Flow Assay Image
- # read image using openCV
- #path = "/content/l1.jpg"
- image = cv2.imread(imageinput)#imageinput
- target = "PKU"
- #print(image)
- #cv2_imshow(image)
- # Convert the image to grayscale
- BGR2RGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- gray = cv2.cvtColor(BGR2RGB, cv2.COLOR_RGB2GRAY)
- #print(gray)
- #cv2_imshow(gray)
- # Invert the image to negative scale
- negative = cv2.bitwise_not(gray)
- negativeimage = negative.copy() #save a copy to avoid disrupting the image contour
- #print(negativeimage)
- #cv2_imshow(negativeimage)
- # Suppress noisy artifacts with a Gaussian blur (reduces the effect of spurious bright spots)
- blur = cv2.GaussianBlur(negativeimage, (11, 11), 0)
- #print(blur)
- #cv2_imshow(blur)
- # Binarize Image
- threshold = float(cv2.meanStdDev(blur)[0]) + 0.6*float(cv2.meanStdDev(blur)[1])
- imgthreshold = cv2.threshold(blur, threshold, 255, cv2.THRESH_BINARY)[1]
- #print(imgthreshold)
- #cv2_imshow(image_thresh)
- # Reduce noise with erosion followed by dilation (morphological opening)
- imgeroding = cv2.erode(imgthreshold, None, iterations=1)
- zeronoise = cv2.dilate(imgeroding, None, iterations=1)
- #print(zeronoise)
- #cv2_imshow(zeronoise)
- # Connected component analysis (CCA) on the thresholded image
- import skimage.measure
- labels = skimage.measure.label(zeronoise, background=0)
- masking = np.zeros(zeronoise.shape, dtype="uint8")
- for label in np.unique(labels):
- if label == 0:
- continue
- MaskL = np.zeros(zeronoise.shape, dtype="uint8")
- MaskL[labels == label] = 255
- numPixels = cv2.countNonZero(MaskL)
- if numPixels > masking.shape[1]*3:
- masking = cv2.add(masking, MaskL)
- #cv2_imshow(mask)
- # Find the contours and sort, please change from bottom-to-top to top-to-bottom accordingly
- contourss = cv2.findContours(masking.copy(), cv2.RETR_EXTERNAL,
- cv2.CHAIN_APPROX_SIMPLE)
- contourss = imutils.grab_contours(contourss)
- contourss = contours.sort_contours(contourss, method="bottom-to-top")[0] #change here accordingly
- final= []
- if len(contourss) > 1:
- for (i, c) in enumerate(contourss):
- # draw the bright spot on the image for the control and sample band
- x, y, width, height = cv2.boundingRect(c)
- final.append(negativeimage[y:y+height, x:x+width])
- rect = cv2.minAreaRect(c)
- box = cv2.boxPoints(rect)
- # convert all coordinates floating point values to int
- box = np.int0(box)
- # draw a rectangle
- cv2.drawContours(image, [box], 0, (0, 0, 255), thickness=2)
-
- elif len(contourss) == 1:
- # draw the bright spot on the image for the control band
- for (i, c) in enumerate(contourss):
- x, y, width, height = cv2.boundingRect(c)
- final.append(negativeimage[y:y+height, x:x+width])
- rect = cv2.minAreaRect(c)
- box = cv2.boxPoints(rect)
- # convert all coordinates floating point values to int
- box = np.int0(box)
- # draw a rectangle
- cv2.drawContours(image, [box], 0, (0, 0, 255), thickness=2)
-
-
-
- # Return an error result early for unclear tests (otherwise final[0] below would raise an IndexError)
- else :
- print("No Bands Detected")
- return "No bands detected. Please retake the lateral flow assay image.", None, cv2.resize(image, (20,60), interpolation = cv2.INTER_AREA)
- #print(image)
- #cv2_imshow(image)
- # generate signal ratio of sample to control band, you can change according to sorting of bands
-
- ratio1 = cv2.meanStdDev(final[0])[0]
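- # The expression below appears to be an empirically fitted calibration curve that maps the
- # mean band intensity (ratio1) to an estimated concentration in mg/dl; note that it also
- # reuses `y`, the y-coordinate of the last detected band's bounding box.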
- ratio=((cube(math.cos(sqrtabs(ratio1 - -0.393284)) + 2.2783713) / pow(math.cos(y), 0.20675313)) - (math.exp(math.cos(math.cos((sqrtabs(math.tan(cube(ratio1)) - (ratio1 +math.tan(math.sin(ratio1)))) / 0.44953698) * 0.9778089))) + (-2.3363407 / ratio1)))
- thresho = 20
- sig=final[0][0]
- #signal=plt.plot(sig,figure=plt.figure())
- if ratio >= thresho:
- xx=str("The test band signal [" + str(ratio) + "mg/dl] shows a " + target +"-POSITIVE test." +" " + "Classic PKU, needs urgent medical treatment")
- elif ratio >= 2 and ratio <6:
- xx=str("The test band signal [" + str(ratio) + "mg/dl] shows a " + target +"-POSITIVE test." +" " + "Likely PKU phenotype.")
- elif ratio >= 6 and ratio <12:
- xx=str("The test band signal [" + str(ratio) + "mg/dl] shows a " + target +"-POSITIVE test." +" " + "PKU and dietary restriction is recommended")
- elif ratio >=12 and ratio <20:
- xx=str("The test band signal [" + str(ratio) + "mg/dl] shows a " + target +"-POSITIVE test." +" " + "PKU and need medical attention for risk of intellectuall impairment")
- else:
- xx=str("The test band signal[" + str(ratio) + "mg/dl] shows a " + target +"-NEGATIVE test.")
- return xx,figplota(sig),cv2.resize(image, (20,60), interpolation = cv2.INTER_AREA) #cv2.resize(signal, (20,40), interpolation = cv2.INTER_AREA)#,cv2.resize(signal, (20,40), interpolation = cv2.INTER_AREA)
-iface = gr.Interface(quant, gr.Image(type="filepath"), outputs=["text","plot","image"],debug=True)
-iface.launch()
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/hifigan/modules.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/hifigan/modules.py
deleted file mode 100644
index 670f7eb4a71ebabb5358c4108390490136f2a39c..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/vocoder/hifigan/modules.py
+++ /dev/null
@@ -1,332 +0,0 @@
-import os
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from pathlib import Path
-import yaml
-import numpy as np
-from argparse import Namespace
-LRELU_SLOPE = 0.1
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.h = h
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- xt = c2(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.h = h
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- xt = c(xt)
- x = xt + x
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Generator(torch.nn.Module):
- def __init__(self, h):
- super(Generator, self).__init__()
- self.h = h
- self.num_kernels = len(h.resblock_kernel_sizes)
- self.num_upsamples = len(h.upsample_rates)
- self.conv_pre = weight_norm(Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3))
- resblock = ResBlock1 if h.resblock == '1' else ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(h.upsample_initial_channel//(2**i), h.upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = h.upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
- self.resblocks.append(resblock(h, ch, k, d))
-
- self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
- self.ups.apply(init_weights)
- self.conv_post.apply(init_weights)
-
- def forward(self, x):
- x = self.conv_pre(x)
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
- remove_weight_norm(self.conv_pre)
- remove_weight_norm(self.conv_post)
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorP(2),
- DiscriminatorP(3),
- DiscriminatorP(5),
- DiscriminatorP(7),
- DiscriminatorP(11),
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiScaleDiscriminator(torch.nn.Module):
- def __init__(self):
- super(MultiScaleDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
- self.meanpools = nn.ModuleList([
- AvgPool1d(4, 2, padding=2),
- AvgPool1d(4, 2, padding=2)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- if i != 0:
- y = self.meanpools[i-1](y)
- y_hat = self.meanpools[i-1](y_hat)
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss*2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-class VocoderHifigan(object):
- def __init__(self, ckpt_vocoder,device='cuda'):
-
- with open(os.path.join(ckpt_vocoder,'args.yml'), 'r') as f:
- vocoder_args = Namespace(**yaml.load(f, Loader=yaml.UnsafeLoader))
-
- self.generator = Generator(vocoder_args)
- netG_path = os.path.join(ckpt_vocoder,'best_netG.pt')
- if os.path.exists(netG_path):
- vocoder_sd = torch.load(netG_path, map_location='cpu')
- self.generator.load_state_dict(vocoder_sd['generator'])
- self.generator.eval()
-
- self.device = device
- self.generator.to(self.device)
-
- def vocode(self, spec, global_step=None):
- with torch.no_grad():
- if isinstance(spec,np.ndarray):
- spec = torch.from_numpy(spec).unsqueeze(0)
- spec = spec.to(dtype=torch.float32,device=self.device)
- return self.generator(spec).squeeze().cpu().numpy()
-
-class VocoderHifigan_noload(object):
- def __init__(self, vocoder_args,device='cuda'):
- self.generator = Generator(vocoder_args)
- self.generator.eval()
-
- self.device = device
- self.generator.to(self.device)
-
- def vocode(self, spec, global_step=None):
- with torch.no_grad():
- if isinstance(spec,np.ndarray):
- spec = torch.from_numpy(spec).unsqueeze(0)
- spec = spec.to(dtype=torch.float32,device=self.device)
- return self.generator(spec).squeeze().cpu().numpy()
\ No newline at end of file
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_binarizer.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_binarizer.py
deleted file mode 100644
index 3cbc6336c45fbcd3693b3216c6f0eb62cafe055d..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_binarizer.py
+++ /dev/null
@@ -1,412 +0,0 @@
-import json
-import os
-import random
-import traceback
-from functools import partial
-
-import numpy as np
-from resemblyzer import VoiceEncoder
-from tqdm import tqdm
-
-from transformers import AutoTokenizer
-
-# import utils.commons.single_thread_env # NOQA
-from text_to_speech.utils.audio import librosa_wav2spec
-from text_to_speech.utils.audio.align import get_mel2ph, mel2token_to_dur
-from text_to_speech.utils.audio.cwt import get_lf0_cwt, get_cont_lf0
-from text_to_speech.utils.audio.pitch.utils import f0_to_coarse
-from text_to_speech.utils.audio.pitch_extractors import extract_pitch_simple
-from text_to_speech.utils.commons.hparams import hparams
-from text_to_speech.utils.commons.indexed_datasets import IndexedDatasetBuilder
-from text_to_speech.utils.commons.multiprocess_utils import multiprocess_run_tqdm
-from text_to_speech.utils.os_utils import remove_file, copy_file
-
-np.seterr(divide='ignore', invalid='ignore')
-
-
-class BinarizationError(Exception):
- pass
-
-sentence2graph_parser = None
-bert_tokenizer = None
-use_graph = False
-use_bpe = True
-
-
-class BaseBinarizer:
- def __init__(self, processed_data_dir=None):
- if processed_data_dir is None:
- processed_data_dir = hparams['processed_data_dir']
- self.processed_data_dir = processed_data_dir
- self.binarization_args = hparams['binarization_args']
- self.items = {}
- self.item_names = []
-
- global sentence2graph_parser
- global use_graph
- global use_bpe
- global bert_tokenizer
- if use_graph:
- from text_to_speech.modules.tts.syntaspeech.syntactic_graph_buider import Sentence2GraphParser
-
- if hparams['ds_name'] in ['libritts', 'librispeech']:
- # Unfortunately, we found that processing LibriTTS with multi-processing incurs a pytorch.multiprocessing error,
- # so we use a single thread with the CUDA graph builder.
- # It takes about 20 hours on a PC with a 24-core CPU and an RTX 2080 Ti to process the whole LibriTTS,
- # so run the binarization and take a break!
- if use_graph:
- sentence2graph_parser = Sentence2GraphParser("en", use_gpu=True)
- if use_bpe:
- model_name = 'bert-base-uncased'
- tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
- bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
- elif hparams['ds_name'] == 'ljspeech':
- # use multi-processing, thus gpu is disabled
- # it takes about 30 minutes for binarization
- if use_graph:
- sentence2graph_parser = Sentence2GraphParser("en", use_gpu=False)
- if use_bpe:
- model_name = 'bert-base-uncased'
- tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
- bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
- elif hparams['preprocess_args']['txt_processor'] == 'zh':
- # use multi-processing, thus gpu is disabled
- # it takes about 30 minutes for binarization
- if use_graph:
- sentence2graph_parser = Sentence2GraphParser("zh", use_gpu=False)
- if use_bpe:
- model_name = 'bert-base-chinese'
- tokenizer_kwargs = {'cache_dir': None, 'use_fast': True, 'revision': 'main', 'use_auth_token': None}
- bert_tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_kwargs)
- else:
- pass
-
- def load_meta_data(self):
- processed_data_dir = self.processed_data_dir
- items_list = json.load(open(f"{processed_data_dir}/metadata.json"))
- for r in tqdm(items_list, desc='Loading meta data.'):
- item_name = r['item_name']
- self.items[item_name] = r
- self.item_names.append(item_name)
- if self.binarization_args['shuffle']:
- random.seed(1234)
- random.shuffle(self.item_names)
-
- @property
- def train_item_names(self):
- range_ = self._convert_range(self.binarization_args['train_range'])
- return self.item_names[range_[0]:range_[1]]
-
- @property
- def valid_item_names(self):
- range_ = self._convert_range(self.binarization_args['valid_range'])
- return self.item_names[range_[0]:range_[1]]
-
- @property
- def test_item_names(self):
- range_ = self._convert_range(self.binarization_args['test_range'])
- return self.item_names[range_[0]:range_[1]]
-
- def _convert_range(self, range_):
- if range_[1] == -1:
- range_[1] = len(self.item_names)
- return range_
-
- def meta_data(self, prefix):
- if prefix == 'valid':
- item_names = self.valid_item_names
- elif prefix == 'test':
- item_names = self.test_item_names
- else:
- item_names = self.train_item_names
- for item_name in item_names:
- yield self.items[item_name]
-
- def process(self):
- self.load_meta_data()
- os.makedirs(hparams['binary_data_dir'], exist_ok=True)
- for fn in ['phone_set.json', 'word_set.json', 'spk_map.json']:
- remove_file(f"{hparams['binary_data_dir']}/{fn}")
- copy_file(f"{hparams['processed_data_dir']}/{fn}", f"{hparams['binary_data_dir']}/{fn}")
- if hparams['ds_name'] in ['ljspeech', 'biaobei', 'wenetspeech']:
- self.process_data('valid')
- self.process_data('test')
- self.process_data('train')
- elif hparams['ds_name'] in ['libritts', 'librispeech']:
- self.process_data_single_processing('valid')
- self.process_data_single_processing('test')
- self.process_data_single_processing('train')
- else:
- self.process_data('valid')
- self.process_data('test')
- self.process_data('train')
- # raise NotImplementedError
-
- def process_data(self, prefix):
- data_dir = hparams['binary_data_dir']
- builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
- meta_data = list(self.meta_data(prefix))
- process_item = partial(self.process_item, binarization_args=self.binarization_args)
- ph_lengths = []
- mel_lengths = []
- total_sec = 0
- items = []
- args = [{'item': item} for item in meta_data]
-
- for item_id, item in multiprocess_run_tqdm(process_item, args, desc='Processing data'):
- if item is not None:
- items.append(item)
- if self.binarization_args['with_spk_embed']:
- args = [{'wav': item['wav']} for item in items]
- for item_id, spk_embed in multiprocess_run_tqdm(
- self.get_spk_embed, args,
- init_ctx_func=lambda wid: {'voice_encoder': VoiceEncoder().cuda()}, num_workers=4,
- desc='Extracting spk embed'):
- items[item_id]['spk_embed'] = spk_embed
-
- for item in items:
- if not self.binarization_args['with_wav'] and 'wav' in item:
- del item['wav']
- builder.add_item(item)
- mel_lengths.append(item['len'])
- assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph'])
- if 'ph_len' in item:
- ph_lengths.append(item['ph_len'])
- total_sec += item['sec']
- builder.finalize()
- np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths)
- if len(ph_lengths) > 0:
- np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths)
- print(f"| {prefix} total duration: {total_sec:.3f}s")
-
- def process_data_single_processing(self, prefix):
- data_dir = hparams['binary_data_dir']
- builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
- meta_data = list(self.meta_data(prefix))
- ph_lengths = []
- mel_lengths = []
- total_sec = 0
-
- if self.binarization_args['with_spk_embed']:
- voice_encoder = VoiceEncoder().cuda()
- for raw_item in tqdm(meta_data):
- item = self.process_item(raw_item, self.binarization_args)
- if item is None:
- continue
- if item is not None:
- if use_graph:
- if item['dgl_graph'].num_nodes() != np.array(item['ph2word']).max():
- print(f"Skip Item: {item['item_name']} word nodes number incorrect!")
- continue
-
- if self.binarization_args['with_spk_embed']:
- spk_embed = self.get_spk_embed(item['wav'], {'voice_encoder': voice_encoder})
- item['spk_embed'] = spk_embed
-
- if not self.binarization_args['with_wav'] and 'wav' in item:
- del item['wav']
- builder.add_item(item)
- mel_lengths.append(item['len'])
- assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph'])
- if 'ph_len' in item:
- ph_lengths.append(item['ph_len'])
- total_sec += item['sec']
- builder.finalize()
- np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths)
- if len(ph_lengths) > 0:
- np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths)
- print(f"| {prefix} total duration: {total_sec:.3f}s")
-
- # def process_data_single_processing(self, prefix):
- # data_dir = hparams['binary_data_dir']
- # builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
- # meta_data = list(self.meta_data(prefix))
- # ph_lengths = []
- # mel_lengths = []
- # total_sec = 0
- # items = []
- # args = [{'item': item} for item in meta_data]
-
- # for raw_item in tqdm(meta_data):
- # item = self.process_item(raw_item, self.binarization_args)
- # if item is not None:
- # if item['dgl_graph'].num_nodes() != np.array(item['ph2word']).max():
- # print(f"Skip Item: {item['item_name']} word nodes number incorrect!")
- # continue
-
- # items.append(item)
-
- # if self.binarization_args['with_spk_embed']:
- # args = [{'wav': item['wav']} for item in items]
- # for item_id, spk_embed in multiprocess_run_tqdm(
- # self.get_spk_embed, args,
- # init_ctx_func=lambda wid: {'voice_encoder': VoiceEncoder().cuda()}, num_workers=4,
- # desc='Extracting spk embed'):
- # items[item_id]['spk_embed'] = spk_embed
-
- # for item in items:
- # if not self.binarization_args['with_wav'] and 'wav' in item:
- # del item['wav']
- # builder.add_item(item)
- # mel_lengths.append(item['len'])
- # assert item['len'] > 0, (item['item_name'], item['txt'], item['mel2ph'])
- # if 'ph_len' in item:
- # ph_lengths.append(item['ph_len'])
- # total_sec += item['sec']
- # builder.finalize()
- # np.save(f'{data_dir}/{prefix}_lengths.npy', mel_lengths)
- # if len(ph_lengths) > 0:
- # np.save(f'{data_dir}/{prefix}_ph_lengths.npy', ph_lengths)
- # print(f"| {prefix} total duration: {total_sec:.3f}s")
-
- @classmethod
- def process_item(cls, item, binarization_args):
- try:
- item['ph_len'] = len(item['ph_token'])
- item_name = item['item_name']
- wav_fn = item['wav_fn']
- wav, mel = cls.process_audio(wav_fn, item, binarization_args)
- except Exception as e:
- print(f"| Skip item ({e}) for index error. item_name: {item_name}, wav_fn: {wav_fn}")
- return None
- try:
- n_bos_frames, n_eos_frames = 0, 0
- if binarization_args['with_align']:
- tg_fn = f"{hparams['processed_data_dir']}/mfa_outputs/{item_name}.TextGrid"
- item['tg_fn'] = tg_fn
- cls.process_align(tg_fn, item)
- if binarization_args['trim_eos_bos']:
- n_bos_frames = item['dur'][0]
- n_eos_frames = item['dur'][-1]
- T = len(mel)
- item['mel'] = mel[n_bos_frames:T - n_eos_frames]
-
- item['mel2ph'] = item['mel2ph'][n_bos_frames:T - n_eos_frames]
- item['mel2word'] = item['mel2word'][n_bos_frames:T - n_eos_frames]
- item['dur'] = item['dur'][1:-1]
- item['dur_word'] = item['dur_word'][1:-1]
- item['len'] = item['mel'].shape[0]
- item['wav'] = wav[n_bos_frames * hparams['hop_size']:len(wav) - n_eos_frames * hparams['hop_size']]
- if binarization_args['with_f0']:
- cls.process_pitch(item, n_bos_frames, n_eos_frames)
- except BinarizationError as e:
- print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {wav_fn}")
- return None
- except Exception as e:
- traceback.print_exc()
- print(f"| Skip item. item_name: {item_name}, wav_fn: {wav_fn}")
- return None
-
- # if item['mel'].shape[0] < 64:
- # print(f"Skip Item: {item['item_name']} Mel-spectrogram is shorter than 64!")
- # return None
- # fix one bad case of stanza
- if item['txt'].endswith('yn .'):
- item['txt'] = item['txt'][:-4]+'y .'
- if use_graph:
- try:
- language = sentence2graph_parser.language
- if language == 'en':
- dgl_graph, etypes = sentence2graph_parser.parse(item['txt'])
- elif language == 'zh':
- dgl_graph, etypes = sentence2graph_parser.parse(item['txt'], item['word'].split(" "), item['ph_gb_word'].split(" "))
- else:
- raise NotImplementedError
- item['dgl_graph'] = dgl_graph
- item['edge_types'] = etypes
- except:
- print(f"| Dependency Parsing Error! Skip item. item_name: {item_name}, wav_fn: {wav_fn}")
- return None
-
- if use_bpe:
- sent = item['word'][6:-6] # strip the leading "<BOS> " and trailing " <EOS>" markers (6 characters each), which the bert_tokenizer cannot recognize
- bert_tokens = bert_tokenizer.tokenize(sent)
- input_ids = bert_tokenizer.convert_tokens_to_ids(bert_tokens)
- input_ids.insert(0, 101) # add [CLS] to represent [BOS]
- input_ids.append(102) # add [SEP] to represent [EOS]
-
- bert_tokens.insert(0, '<BOS>') # placeholder token matching the [CLS] id inserted above
- bert_tokens.append('<EOS>') # placeholder token matching the [SEP] id appended above
- bert_token2word = []
- word_idx = 0
- for i in range(len(bert_tokens)):
- if not bert_tokens[i].startswith("##"): # this token is an independent word
- word_idx += 1
- bert_token2word.append(word_idx)
-
- item['bert_token'] = bert_tokens
- item['bert_input_ids'] = input_ids
- item['bert_token2word'] = bert_token2word
- item['bert_attention_mask'] = [1 for _ in range(len(bert_tokens))]
- item['bert_token_type_ids'] = [0 for _ in range(len(bert_tokens))]
-
- return item
-
- @classmethod
- def process_audio(cls, wav_fn, res, binarization_args):
- wav2spec_dict = librosa_wav2spec(
- wav_fn,
- fft_size=hparams['fft_size'],
- hop_size=hparams['hop_size'],
- win_length=hparams['win_size'],
- num_mels=hparams['audio_num_mel_bins'],
- fmin=hparams['fmin'],
- fmax=hparams['fmax'],
- sample_rate=hparams['audio_sample_rate'],
- loud_norm=hparams['loud_norm'])
- mel = wav2spec_dict['mel']
- wav = wav2spec_dict['wav'].astype(np.float16)
- if binarization_args['with_linear']:
- res['linear'] = wav2spec_dict['linear']
- res.update({'mel': mel, 'wav': wav, 'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0]})
- return wav, mel
-
- @staticmethod
- def process_align(tg_fn, item):
- ph = item['ph']
- mel = item['mel']
- ph_token = item['ph_token']
- if tg_fn is not None and os.path.exists(tg_fn):
- mel2ph, dur = get_mel2ph(tg_fn, ph, mel, hparams['hop_size'], hparams['audio_sample_rate'],
- hparams['binarization_args']['min_sil_duration'])
- else:
- raise BinarizationError(f"Align not found")
- if np.array(mel2ph).max() - 1 >= len(ph_token):
- raise BinarizationError(
- f"Align does not match: mel2ph.max() - 1: {np.array(mel2ph).max() - 1}, len(phone_encoded): {len(ph_token)}")
- item['mel2ph'] = mel2ph
- item['dur'] = dur
-
- ph2word = item['ph2word']
- mel2word = [ph2word[p - 1] for p in item['mel2ph']]
- item['mel2word'] = mel2word # [T_mel]
- dur_word = mel2token_to_dur(mel2word, len(item['word_token']))
- item['dur_word'] = dur_word.tolist() # [T_word]
-
- @staticmethod
- def process_pitch(item, n_bos_frames, n_eos_frames):
- wav, mel = item['wav'], item['mel']
- f0 = extract_pitch_simple(item['wav'])
- if sum(f0) == 0:
- raise BinarizationError("Empty f0")
- assert len(mel) == len(f0), (len(mel), len(f0))
- pitch_coarse = f0_to_coarse(f0)
- item['f0'] = f0
- item['pitch'] = pitch_coarse
- if hparams['binarization_args']['with_f0cwt']:
- uv, cont_lf0_lpf = get_cont_lf0(f0)
- logf0s_mean_org, logf0s_std_org = np.mean(cont_lf0_lpf), np.std(cont_lf0_lpf)
- cont_lf0_lpf_norm = (cont_lf0_lpf - logf0s_mean_org) / logf0s_std_org
- cwt_spec, scales = get_lf0_cwt(cont_lf0_lpf_norm)
- item['cwt_spec'] = cwt_spec
- item['cwt_mean'] = logf0s_mean_org
- item['cwt_std'] = logf0s_std_org
-
- @staticmethod
- def get_spk_embed(wav, ctx):
- return ctx['voice_encoder'].embed_utterance(wav.astype(float))
-
- @property
- def num_workers(self):
- return int(os.getenv('N_PROC', hparams.get('N_PROC', os.cpu_count())))
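The `trim_eos_bos` branch above drops the leading and trailing silence frames from the mel-spectrogram and removes the matching samples from the waveform by multiplying the frame counts with `hop_size`. A minimal standalone sketch of that bookkeeping, using made-up frame counts and a hypothetical hop size:

```python
import numpy as np

# Hypothetical values: 4 BOS frames, 3 EOS frames, 256 waveform samples per frame.
n_bos_frames, n_eos_frames, hop_size = 4, 3, 256

mel = np.zeros((100, 80), dtype=np.float32)   # [T_frames, n_mels]
wav = np.zeros(100 * hop_size, dtype=np.float32)

T = len(mel)
mel_trimmed = mel[n_bos_frames:T - n_eos_frames]
wav_trimmed = wav[n_bos_frames * hop_size:len(wav) - n_eos_frames * hop_size]

# After trimming, frame count and sample count stay aligned.
assert mel_trimmed.shape[0] * hop_size == len(wav_trimmed)
```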
diff --git a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/streaming.py b/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/streaming.py
deleted file mode 100644
index cc2048269b3e9ac09886471ef9b6dc681db09f25..0000000000000000000000000000000000000000
--- a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/streaming.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import subprocess
-
-import numpy as np
-
-
-def ffmpeg_stream(youtube_url, sampling_rate=16_000, chunk_duration_ms=5000, pad_duration_ms=200):
- """
- Stream audio from a YouTube URL by piping yt-dlp output through ffmpeg, yielding float32 chunks.
- """
- chunk_len = int(sampling_rate * chunk_duration_ms / 1000)
- pad_len = int(sampling_rate * pad_duration_ms / 1000)
- read_chunk_len = chunk_len + pad_len * 2
-
- ar = f"{sampling_rate}"
- ac = "1"
- format_for_conversion = "f32le"
- dtype = np.float32
- size_of_sample = 4
-
- ffmpeg_command = [
- "ffmpeg",
- "-i",
- "pipe:",
- "-ac",
- ac,
- "-ar",
- ar,
- "-f",
- format_for_conversion,
- "-hide_banner",
- "-loglevel",
- "quiet",
- "pipe:1",
- ]
-
- ytdl_command = ["yt-dlp", "-f", "bestaudio", youtube_url, "--quiet", "-o", "-"]
-
- try:
- ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=-1)
- ytdl_process = subprocess.Popen(ytdl_command, stdout=ffmpeg_process.stdin)
- except FileNotFoundError:
- raise ValueError("ffmpeg or yt-dlp was not found, but both are required to stream audio from a YouTube URL")
-
- acc = b""
- leftover = np.zeros((0,), dtype=np.float32)
- while ytdl_process.poll() is None:
- buflen = read_chunk_len * size_of_sample
-
- raw = ffmpeg_process.stdout.read(buflen)
- if raw == b"":
- break
-
- if len(acc) + len(raw) > buflen:
- acc = raw
- else:
- acc += raw
-
- audio = np.frombuffer(acc, dtype=dtype)
- audio = np.concatenate([leftover, audio])
- if len(audio) < pad_len * 2:
- # TODO: handle end of stream better than this
- break
- yield audio
-
- leftover = audio[-pad_len * 2 :]
- read_chunk_len = chunk_len
\ No newline at end of file
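
`ffmpeg_stream` pipes `yt-dlp` audio into `ffmpeg` and yields float32 chunks whose `pad_duration_ms` margins overlap the neighbouring chunks, so a streaming ASR model can stitch transcripts across boundaries. A hypothetical consumer loop, assuming this module is importable as `streaming`, both binaries are on the PATH, and `transcribe` stands in for an actual ASR call:

```python
import numpy as np
from streaming import ffmpeg_stream  # assumes this file is on the import path

def transcribe(chunk: np.ndarray) -> str:
    # Placeholder for a real ASR model call.
    return f"<{len(chunk)} samples>"

url = "https://www.youtube.com/watch?v=..."  # any public video URL
for chunk in ffmpeg_stream(url, sampling_rate=16_000, chunk_duration_ms=5000, pad_duration_ms=200):
    # chunk is a 1-D float32 array at 16 kHz; crop the pad regions before final stitching.
    print(transcribe(chunk))
```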
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/You.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/You.py
deleted file mode 100644
index 02a2774ce62bae33612a73272d584dc2acaf3eb0..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/You.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import os
-import json
-import time
-import subprocess
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://you.com'
-model = 'gpt-3.5-turbo'
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-
- path = os.path.dirname(os.path.realpath(__file__))
- config = json.dumps({
- 'messages': messages}, separators=(',', ':'))
-
- cmd = ['python3', f'{path}/helpers/you.py', config]
-
- p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
- for line in iter(p.stdout.readline, b''):
- yield line.decode('utf-8') #[:-1]
\ No newline at end of file
diff --git a/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-streamlit.py b/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-streamlit.py
deleted file mode 100644
index b5afcec976bb72d477f4de3d433fa317bfe3e7b9..0000000000000000000000000000000000000000
--- a/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/hooks/hook-streamlit.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from PyInstaller.utils.hooks import copy_metadata
-
-datas = copy_metadata('streamlit')
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/stores/pendingMessageIdToRetry.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/stores/pendingMessageIdToRetry.ts
deleted file mode 100644
index 47eec8770ae561b2c4881c5d001a3d46ee699b3b..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/stores/pendingMessageIdToRetry.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-import type { Message } from "$lib/types/Message";
-import { writable } from "svelte/store";
-
-export const pendingMessageIdToRetry = writable<Message["id"] | null>(null);
diff --git a/spaces/AchyuthGamer/OpenGPT/server/website.py b/spaces/AchyuthGamer/OpenGPT/server/website.py
deleted file mode 100644
index 01b35dee1621b5b5bea49de330466ebb62817f20..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/server/website.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from flask import render_template, redirect, url_for, request, session
-from flask_babel import refresh
-from time import time
-from os import urandom
-from server.babel import get_locale, get_languages
-
-
-class Website:
- def __init__(self, bp, url_prefix) -> None:
- self.bp = bp
- self.url_prefix = url_prefix
- self.routes = {
- '/': {
- 'function': lambda: redirect(url_for('._index')),
- 'methods': ['GET', 'POST']
- },
- '/chat/': {
- 'function': self._index,
- 'methods': ['GET', 'POST']
- },
- '/chat/<conversation_id>': {
- 'function': self._chat,
- 'methods': ['GET', 'POST']
- },
- '/change-language': {
- 'function': self.change_language,
- 'methods': ['POST']
- },
- '/get-locale': {
- 'function': self.get_locale,
- 'methods': ['GET']
- },
- '/get-languages': {
- 'function': self.get_languages,
- 'methods': ['GET']
- }
- }
-
- def _chat(self, conversation_id):
- if '-' not in conversation_id:
- return redirect(url_for('._index'))
-
- return render_template('index.html', chat_id=conversation_id, url_prefix=self.url_prefix)
-
- def _index(self):
- return render_template('index.html', chat_id=f'{urandom(4).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{urandom(2).hex()}-{hex(int(time() * 1000))[2:]}', url_prefix=self.url_prefix)
-
- def change_language(self):
- data = request.get_json()
- session['language'] = data.get('language')
- refresh()
- return '', 204
-
- def get_locale(self):
- return get_locale()
-
- def get_languages(self):
- return get_languages()
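The `routes` dict above only declares URL rules, view functions, and allowed methods; the actual registration happens elsewhere in the app. A plausible, self-contained sketch of that registration pattern with Flask (the tiny `routes` dict and blueprint names below are stand-ins, not code from this repository):

```python
from flask import Blueprint, Flask

app = Flask(__name__)
bp = Blueprint('site', __name__)

# Stand-in for Website.routes: URL rule -> view function and allowed methods.
routes = {
    '/ping': {'function': lambda: ('pong', 200), 'methods': ['GET']},
}

# Hypothetical registration loop: one add_url_rule call per entry.
for rule, spec in routes.items():
    bp.add_url_rule(rule, endpoint=rule, view_func=spec['function'], methods=spec['methods'])

app.register_blueprint(bp, url_prefix='/backend')
```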
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/vertical.py b/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/vertical.py
deleted file mode 100644
index d8adf594d1b9324fe7faf5c06cf1c2377e800165..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/decision_maker/vertical.py
+++ /dev/null
@@ -1,58 +0,0 @@
-from __future__ import annotations
-import asyncio
-from colorama import Fore
-
-from typing import TYPE_CHECKING, List
-
-from . import decision_maker_registry
-from .base import BaseDecisionMaker
-from agentverse.logging import typewriter_log, logger
-
-if TYPE_CHECKING:
- from agentverse.agents import BaseAgent, SolverAgent, CriticAgent
- from agentverse.message import Message, CriticMessage, SolverMessage
-
-
-@decision_maker_registry.register("vertical")
-class VerticalDecisionMaker(BaseDecisionMaker):
- """
- Discuss in a vertical manner.
- """
-
- name: str = "vertical"
-
- async def astep(
- self,
- agents: List[BaseAgent],
- task_description: str,
- previous_plan: str = "No solution yet.",
- advice: str = "No advice yet.",
- *args,
- **kwargs,
- ) -> List[SolverMessage]:
- # Here we assume that the first agent is the solver.
- # The rest of the agents are the reviewers.
- reviews: List[CriticMessage] = await asyncio.gather(
- *[
- agent.astep(previous_plan, advice, task_description)
- for agent in agents[1:]
- ]
- )
- logger.info("", "Reviews:", Fore.YELLOW)
- logger.info(
- "",
- "\n".join([f"[{review.sender}]: {review.content}" for review in reviews]),
- Fore.YELLOW,
- )
-
- nonempty_reviews = []
- for review in reviews:
- if not review.is_agree and review.content != "":
- nonempty_reviews.append(review)
- agents[0].add_message_to_memory(nonempty_reviews)
- result = agents[0].step(previous_plan, advice, task_description)
- agents[0].add_message_to_memory([result])
- return [result]
-
- def reset(self):
- pass
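`VerticalDecisionMaker.astep` fans out to the critic agents concurrently with `asyncio.gather`, keeps only the non-empty disagreeing reviews, and then lets the solver revise the plan. A framework-free toy sketch of the same control flow (the `critic` and `solver` functions are stand-ins, not AgentVerse agents):

```python
import asyncio

async def critic(name: str, plan: str) -> str:
    # Stand-in for CriticAgent.astep: return a (possibly empty) review.
    return f"{name}: consider edge cases in '{plan}'"

def solver(plan: str, reviews: list) -> str:
    # Stand-in for SolverAgent.step: fold the reviews into a revised plan.
    return f"{plan} (revised after {len(reviews)} reviews)"

async def vertical_round(plan: str) -> str:
    reviews = await asyncio.gather(*[critic(f"critic-{i}", plan) for i in range(3)])
    nonempty = [r for r in reviews if r]
    return solver(plan, nonempty)

print(asyncio.run(vertical_round("No solution yet.")))
```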
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.js
deleted file mode 100644
index ffa341b71b2999adf7fbe98460a9e0688e8a59de..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/inputtext/InputText.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import InputText from '../../../plugins/inputtext.js';
-export default InputText;
\ No newline at end of file
diff --git a/spaces/Alex132/togethercomputer-LLaMA-2-7B-32K/app.py b/spaces/Alex132/togethercomputer-LLaMA-2-7B-32K/app.py
deleted file mode 100644
index 0eea9d6f508c3048be87fc452d36415699a6999e..0000000000000000000000000000000000000000
--- a/spaces/Alex132/togethercomputer-LLaMA-2-7B-32K/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/togethercomputer/LLaMA-2-7B-32K").launch()
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/th.py b/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/th.py
deleted file mode 100644
index ca6ef9385e3b5c0a439579d3fd7aa73b5dc62758..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/models/ade20k/segm_lib/utils/th.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import torch
-from torch.autograd import Variable
-import numpy as np
-import collections
-
-__all__ = ['as_variable', 'as_numpy', 'mark_volatile']
-
-def as_variable(obj):
- if isinstance(obj, Variable):
- return obj
- if isinstance(obj, collections.Sequence):
- return [as_variable(v) for v in obj]
- elif isinstance(obj, collections.Mapping):
- return {k: as_variable(v) for k, v in obj.items()}
- else:
- return Variable(obj)
-
-def as_numpy(obj):
- if isinstance(obj, collections.Sequence):
- return [as_numpy(v) for v in obj]
- elif isinstance(obj, collections.Mapping):
- return {k: as_numpy(v) for k, v in obj.items()}
- elif isinstance(obj, Variable):
- return obj.data.cpu().numpy()
- elif torch.is_tensor(obj):
- return obj.cpu().numpy()
- else:
- return np.array(obj)
-
-def mark_volatile(obj):
- if torch.is_tensor(obj):
- obj = Variable(obj)
- if isinstance(obj, Variable):
- obj.no_grad = True
- return obj
- elif isinstance(obj, collections.Mapping):
- return {k: mark_volatile(o) for k, o in obj.items()}
- elif isinstance(obj, collections.Sequence):
- return [mark_volatile(o) for o in obj]
- else:
- return obj
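These helpers recurse through nested sequences and mappings, so a whole batch dictionary can be converted in one call. Note that `torch.autograd.Variable` and the bare `collections.Sequence`/`collections.Mapping` names are legacy APIs; a minimal modern equivalent of `as_numpy` might look like the sketch below (an illustration, not a drop-in replacement):

```python
import numpy as np
import torch

def to_numpy(obj):
    # Recurse through dicts and lists, then convert tensors with .cpu().numpy().
    if torch.is_tensor(obj):
        return obj.cpu().numpy()
    if isinstance(obj, dict):
        return {k: to_numpy(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_numpy(v) for v in obj]
    return np.array(obj)

batch = {
    'image': torch.rand(2, 3, 4, 4),
    'labels': [torch.tensor([1, 0]), torch.tensor([0, 1])],
}
converted = to_numpy(batch)
print(type(converted['image']), converted['labels'][0])
```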
diff --git a/spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/eval_cer.py b/spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/eval_cer.py
deleted file mode 100644
index 4c8f355100b3783696600c1ad0074e4a010d16cf..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/CER_Hu-Evaluation-Metrics/eval_cer.py
+++ /dev/null
@@ -1,145 +0,0 @@
-""" Character Error Ratio (CER) metric. """
-from typing import List
-import datasets, evaluate, jiwer
-import jiwer.transforms as tr
-from datasets.config import PY_VERSION
-from packaging import version
-
-
-if PY_VERSION < version.parse("3.8"):
- import importlib_metadata
-else:
- import importlib.metadata as importlib_metadata
-
-SENTENCE_DELIMITER = ""
-
-if version.parse(importlib_metadata.version("jiwer")) < version.parse("2.3.0"):
-
- class SentencesToListOfCharacters(tr.AbstractTransform):
- def __init__(self, sentence_delimiter: str = " "):
- self.sentence_delimiter = sentence_delimiter
-
- def process_string(self, s: str):
- return list(s)
-
- def process_list(self, inp: List[str]):
- chars = []
- for sent_idx, sentence in enumerate(inp):
- chars.extend(self.process_string(sentence))
- if self.sentence_delimiter is not None and self.sentence_delimiter != "" and sent_idx < len(inp) - 1:
- chars.append(self.sentence_delimiter)
- return chars
-
- cer_transform = tr.Compose(
- [tr.RemoveMultipleSpaces(), tr.Strip(), SentencesToListOfCharacters(SENTENCE_DELIMITER)]
- )
-else:
- cer_transform = tr.Compose(
- [
- tr.RemoveMultipleSpaces(),
- tr.Strip(),
- tr.ReduceToSingleSentence(SENTENCE_DELIMITER),
- tr.ReduceToListOfListOfChars(),
- ]
- )
-
-
-_CITATION = """\
-@inproceedings{inproceedings,
- author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
- year = {2004},
- month = {01},
- pages = {},
- title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
-}
-"""
-
-
-_DESCRIPTION = """\
-Character error rate (CER) is a standard metric of the performance of an automatic speech recognition system.
-
-CER is similar to Word Error Rate (WER) but operates on characters instead of words. Please refer to the docs of WER for further information.
-
-The character error rate can be computed as:
-
-CER = (S + D + I) / N = (S + D + I) / (S + D + C)
-
-where
-
-S is the number of substitutions,
-D is the number of deletions,
-I is the number of insertions,
-C is the number of correct characters,
-N is the number of characters in the reference (N=S+D+C).
-
-CER's output is not always a number between 0 and 1, particularly when there is a high number of insertions. This value is often associated with the percentage of characters that were incorrectly predicted. The lower the value, the better the
-performance of the ASR system with a CER of 0 being a perfect score.
-"""
-
-_KWARGS_DESCRIPTION = """
-Computes CER score of transcribed segments against references.
-Args:
- references: list of references for each speech input.
- predictions: list of transcriptions to score.
- concatenate_texts: Whether or not to concatenate sentences before evaluation, set to True for a more accurate result.
-Returns:
- (float): the character error rate
-
-Examples for the Hungarian Language:
- >>> # Colab usage
- >>> !pip install evaluate jiwer
- >>> import evaluate
- >>> from evaluate import load
-
- >>> predictions = ["ez a jóslat", "van egy másik minta is"]
- >>> references = ["ez a hivatkozás", "van még egy"]
- >>> cer = evaluate.load("cer")
- >>> cer_score = cer.compute(predictions=predictions, references=references)
- >>> print(cer_score)
- >>> 0.9615384615384616
-"""
-
-
-
-@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class CER(evaluate.Metric):
- def _info(self):
- return evaluate.MetricInfo(
- description=_DESCRIPTION,
- citation=_CITATION,
- inputs_description=_KWARGS_DESCRIPTION,
- features=datasets.Features(
- {
- "predictions": datasets.Value("string", id="sequence"),
- "references": datasets.Value("string", id="sequence"),
- }
- ),
- codebase_urls=["https://github.com/jitsi/jiwer/"],
- reference_urls=[
- "https://en.wikipedia.org/wiki/Word_error_rate",
- "https://sites.google.com/site/textdigitisation/qualitymeasures/computingerrorrates",
- ],
- )
-
- def _compute(self, predictions, references, concatenate_texts=False):
- if concatenate_texts:
- return jiwer.compute_measures(
- references,
- predictions,
- truth_transform=cer_transform,
- hypothesis_transform=cer_transform,
- )["wer"]
-
- incorrect = 0
- total = 0
- for prediction, reference in zip(predictions, references):
- measures = jiwer.compute_measures(
- reference,
- prediction,
- truth_transform=cer_transform,
- hypothesis_transform=cer_transform,
- )
- incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
- total += measures["substitutions"] + measures["deletions"] + measures["hits"]
-
- return incorrect / total
\ No newline at end of file
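
For intuition, the CER formula quoted in the metric description, CER = (S + D + I) / N, can be checked on a single sentence pair with a plain character-level Levenshtein distance (the minimal S + D + I). This sketch is independent of `jiwer` and skips the whitespace transforms the metric applies:

```python
def edit_distance(ref: str, hyp: str) -> int:
    # Single-row dynamic-programming Levenshtein distance over characters.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

reference = "ez a hivatkozás"
prediction = "ez a jóslat"
cer = edit_distance(reference, prediction) / len(reference)  # (S + D + I) / N
print(round(cer, 4))
```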
diff --git a/spaces/Altinas/vits-uma-genshin-honkais/text/__init__.py b/spaces/Altinas/vits-uma-genshin-honkais/text/__init__.py
deleted file mode 100644
index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000
--- a/spaces/Altinas/vits-uma-genshin-honkais/text/__init__.py
+++ /dev/null
@@ -1,57 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-from text import cleaners
-from text.symbols import symbols
-
-
-# Mappings from symbol to numeric ID and vice versa:
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-_id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
-def text_to_sequence(text, symbols, cleaner_names):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- cleaner_names: names of the cleaner functions to run the text through
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- sequence = []
-
- clean_text = _clean_text(text, cleaner_names)
- for symbol in clean_text:
- if symbol not in _symbol_to_id.keys():
- continue
- symbol_id = _symbol_to_id[symbol]
- sequence += [symbol_id]
- return sequence, clean_text
-
-
-def cleaned_text_to_sequence(cleaned_text):
- '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
- Args:
- text: string to convert to a sequence
- Returns:
- List of integers corresponding to the symbols in the text
- '''
- sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
- return sequence
-
-
-def sequence_to_text(sequence):
- '''Converts a sequence of IDs back to a string'''
- result = ''
- for symbol_id in sequence:
- s = _id_to_symbol[symbol_id]
- result += s
- return result
-
-
-def _clean_text(text, cleaner_names):
- for name in cleaner_names:
- cleaner = getattr(cleaners, name)
- if not cleaner:
- raise Exception('Unknown cleaner: %s' % name)
- text = cleaner(text)
- return text
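`cleaned_text_to_sequence` and `sequence_to_text` are simply the two directions of the symbol table built at the top of the module. A self-contained toy illustration of that round trip, using a placeholder symbol set rather than the real `symbols` list:

```python
# Placeholder symbol set; the real list comes from text/symbols.py.
symbols = ['_', ' ', 'a', 'b', 'c']
symbol_to_id = {s: i for i, s in enumerate(symbols)}
id_to_symbol = {i: s for i, s in enumerate(symbols)}

cleaned = "ab cba"
sequence = [symbol_to_id[ch] for ch in cleaned if ch in symbol_to_id]  # cleaned_text_to_sequence
restored = ''.join(id_to_symbol[i] for i in sequence)                  # sequence_to_text
assert restored == cleaned
print(sequence)
```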
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/single_id_coach.py b/spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/single_id_coach.py
deleted file mode 100644
index 2ecab5bd53ac5343888314a38d682e9abcc1021d..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/training/coaches/single_id_coach.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import os
-import torch
-from tqdm import tqdm
-from PTI.configs import paths_config, hyperparameters, global_config
-from PTI.training.coaches.base_coach import BaseCoach
-from PTI.utils.log_utils import log_images_from_w
-
-
-class SingleIDCoach(BaseCoach):
- def __init__(self, data_loader, use_wandb):
- super().__init__(data_loader, use_wandb)
-
- def train(self):
- w_path_dir = f"{paths_config.embedding_base_dir}/{paths_config.input_data_id}"
- os.makedirs(w_path_dir, exist_ok=True)
- os.makedirs(f"{w_path_dir}/{paths_config.pti_results_keyword}", exist_ok=True)
-
- use_ball_holder = True
- w_pivot = None
- fname, image = next(iter(self.data_loader))
- print("NANANAN", fname)
- image_name = fname[0]
-
- self.restart_training()
-
- embedding_dir = f"{w_path_dir}/{paths_config.pti_results_keyword}/{image_name}"
- os.makedirs(embedding_dir, exist_ok=True)
-
- if hyperparameters.use_last_w_pivots:
- w_pivot = self.load_inversions(w_path_dir, image_name)
-
- elif not hyperparameters.use_last_w_pivots or w_pivot is None:
- w_pivot = self.calc_inversions(image, image_name)
- torch.save(w_pivot, f"{embedding_dir}/0.pt")
- # w_pivot = w_pivot.detach().clone().to(global_config.device)
- w_pivot = w_pivot.to(global_config.device)
-
- log_images_counter = 0
- real_images_batch = image.to(global_config.device)
-
- for i in tqdm(range(hyperparameters.max_pti_steps)):
- generated_images = self.forward(w_pivot)
- loss, l2_loss_val, loss_lpips = self.calc_loss(
- generated_images,
- real_images_batch,
- image_name,
- self.G,
- use_ball_holder,
- w_pivot,
- )
-
- self.optimizer.zero_grad()
-
- if loss_lpips <= hyperparameters.LPIPS_value_threshold:
- break
-
- loss.backward()
- self.optimizer.step()
-
- use_ball_holder = (
- global_config.training_step
- % hyperparameters.locality_regularization_interval
- == 0
- )
-
- if (
- self.use_wandb
- and log_images_counter % global_config.image_rec_result_log_snapshot
- == 0
- ):
- log_images_from_w([w_pivot], self.G, [image_name])
-
- global_config.training_step += 1
- log_images_counter += 1
-
- torch.save(
- self.G,
- f"{paths_config.checkpoints_dir}/model_{global_config.run_name}_{image_name}.pt",
- )
- return self.G, w_pivot
diff --git a/spaces/Amrrs/image-caption-with-vit-gpt2/README.md b/spaces/Amrrs/image-caption-with-vit-gpt2/README.md
deleted file mode 100644
index d302d4eef9f3b8f618f038de348fed034e507e84..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/image-caption-with-vit-gpt2/README.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Image Caption With Vit Gpt2
-emoji: 👀
-colorFrom: pink
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
-license: mit
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`models`: _List[string]_
-HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`datasets`: _List[string]_
-HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space.
-Will be parsed automatically from your code if not specified here.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
deleted file mode 100644
index 57d8c7beb97a56150c358c868e23a35d5e053e55..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
+++ /dev/null
@@ -1,578 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from transformers import CLIPImageProcessor, CLIPTextModelWithProjection, CLIPTokenizer, CLIPVisionModelWithProjection
-
-from ...models import PriorTransformer
-from ...schedulers import UnCLIPScheduler
-from ...utils import (
- BaseOutput,
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline
- >>> import torch
-
- >>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior")
- >>> pipe_prior.to("cuda")
-
- >>> prompt = "red cat, 4k photo"
- >>> out = pipe_prior(prompt)
- >>> image_emb = out.image_embeds
- >>> negative_image_emb = out.negative_image_embeds
-
- >>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1")
- >>> pipe.to("cuda")
-
- >>> image = pipe(
- ... prompt,
- ... image_embeds=image_emb,
- ... negative_image_embeds=negative_image_emb,
- ... height=768,
- ... width=768,
- ... num_inference_steps=100,
- ... ).images
-
- >>> image[0].save("cat.png")
- ```
-"""
-
-EXAMPLE_INTERPOLATE_DOC_STRING = """
- Examples:
- ```py
- >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline
- >>> from diffusers.utils import load_image
- >>> import PIL
-
- >>> import torch
- >>> from torchvision import transforms
-
- >>> pipe_prior = KandinskyPriorPipeline.from_pretrained(
- ... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
- ... )
- >>> pipe_prior.to("cuda")
-
- >>> img1 = load_image(
- ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- ... "/kandinsky/cat.png"
- ... )
-
- >>> img2 = load_image(
- ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- ... "/kandinsky/starry_night.jpeg"
- ... )
-
- >>> images_texts = ["a cat", img1, img2]
- >>> weights = [0.3, 0.3, 0.4]
- >>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights)
-
- >>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16)
- >>> pipe.to("cuda")
-
- >>> image = pipe(
- ... "",
- ... image_embeds=image_emb,
- ... negative_image_embeds=zero_image_emb,
- ... height=768,
- ... width=768,
- ... num_inference_steps=150,
- ... ).images[0]
-
- >>> image.save("starry_cat.png")
- ```
-"""
-
-
-@dataclass
-class KandinskyPriorPipelineOutput(BaseOutput):
- """
- Output class for KandinskyPriorPipeline.
-
- Args:
- image_embeds (`torch.FloatTensor`)
- clip image embeddings for text prompt
- negative_image_embeds (`torch.FloatTensor` or `np.ndarray`)
- clip image embeddings for unconditional tokens
- """
-
- image_embeds: Union[torch.FloatTensor, np.ndarray]
- negative_image_embeds: Union[torch.FloatTensor, np.ndarray]
-
-
-class KandinskyPriorPipeline(DiffusionPipeline):
- """
- Pipeline for generating image prior for Kandinsky
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- prior ([`PriorTransformer`]):
- The canonical unCLIP prior to approximate the image embedding from the text embedding.
- image_encoder ([`CLIPVisionModelWithProjection`]):
- Frozen image-encoder.
- text_encoder ([`CLIPTextModelWithProjection`]):
- Frozen text-encoder.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- scheduler ([`UnCLIPScheduler`]):
- A scheduler to be used in combination with `prior` to generate image embedding.
- """
-
- _exclude_from_cpu_offload = ["prior"]
-
- def __init__(
- self,
- prior: PriorTransformer,
- image_encoder: CLIPVisionModelWithProjection,
- text_encoder: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- scheduler: UnCLIPScheduler,
- image_processor: CLIPImageProcessor,
- ):
- super().__init__()
-
- self.register_modules(
- prior=prior,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- scheduler=scheduler,
- image_encoder=image_encoder,
- image_processor=image_processor,
- )
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_INTERPOLATE_DOC_STRING)
- def interpolate(
- self,
- images_and_prompts: List[Union[str, PIL.Image.Image, torch.FloatTensor]],
- weights: List[float],
- num_images_per_prompt: int = 1,
- num_inference_steps: int = 25,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- negative_prior_prompt: Optional[str] = None,
- negative_prompt: str = "",
- guidance_scale: float = 4.0,
- device=None,
- ):
- """
- Function invoked when using the prior pipeline for interpolation.
-
- Args:
- images_and_prompts (`List[Union[str, PIL.Image.Image, torch.FloatTensor]]`):
- list of prompts and images to guide the image generation.
- weights: (`List[float]`):
- list of weights for each condition in `images_and_prompts`
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- negative_prior_prompt (`str`, *optional*):
- The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if
- `guidance_scale` is less than `1`).
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if
- `guidance_scale` is less than `1`).
- guidance_scale (`float`, *optional*, defaults to 4.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
-
- Examples:
-
- Returns:
- [`KandinskyPriorPipelineOutput`] or `tuple`
- """
-
- device = device or self.device
-
- if len(images_and_prompts) != len(weights):
- raise ValueError(
- f"`images_and_prompts` contains {len(images_and_prompts)} items and `weights` contains {len(weights)} items - they should be lists of same length"
- )
-
- image_embeddings = []
- for cond, weight in zip(images_and_prompts, weights):
- if isinstance(cond, str):
- image_emb = self(
- cond,
- num_inference_steps=num_inference_steps,
- num_images_per_prompt=num_images_per_prompt,
- generator=generator,
- latents=latents,
- negative_prompt=negative_prior_prompt,
- guidance_scale=guidance_scale,
- ).image_embeds
-
- elif isinstance(cond, (PIL.Image.Image, torch.Tensor)):
- if isinstance(cond, PIL.Image.Image):
- cond = (
- self.image_processor(cond, return_tensors="pt")
- .pixel_values[0]
- .unsqueeze(0)
- .to(dtype=self.image_encoder.dtype, device=device)
- )
-
- image_emb = self.image_encoder(cond)["image_embeds"]
-
- else:
- raise ValueError(
- f"`images_and_prompts` can only contains elements to be of type `str`, `PIL.Image.Image` or `torch.Tensor` but is {type(cond)}"
- )
-
- image_embeddings.append(image_emb * weight)
-
- image_emb = torch.cat(image_embeddings).sum(dim=0, keepdim=True)
-
- out_zero = self(
- negative_prompt,
- num_inference_steps=num_inference_steps,
- num_images_per_prompt=num_images_per_prompt,
- generator=generator,
- latents=latents,
- negative_prompt=negative_prior_prompt,
- guidance_scale=guidance_scale,
- )
- zero_image_emb = out_zero.negative_image_embeds if negative_prompt == "" else out_zero.image_embeds
-
- return KandinskyPriorPipelineOutput(image_embeds=image_emb, negative_image_embeds=zero_image_emb)
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- latents = latents * scheduler.init_noise_sigma
- return latents
-
- def get_zero_embed(self, batch_size=1, device=None):
- device = device or self.device
- zero_img = torch.zeros(1, 3, self.image_encoder.config.image_size, self.image_encoder.config.image_size).to(
- device=device, dtype=self.image_encoder.dtype
- )
- zero_image_emb = self.image_encoder(zero_img)["image_embeds"]
- zero_image_emb = zero_image_emb.repeat(batch_size, 1)
- return zero_image_emb
-
- def _encode_prompt(
- self,
- prompt,
- device,
- num_images_per_prompt,
- do_classifier_free_guidance,
- negative_prompt=None,
- ):
- batch_size = len(prompt) if isinstance(prompt, list) else 1
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- text_mask = text_inputs.attention_mask.bool().to(device)
-
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
- )
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
-
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
-
- prompt_embeds = text_encoder_output.text_embeds
- text_encoder_hidden_states = text_encoder_output.last_hidden_state
-
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- if do_classifier_free_guidance:
- uncond_tokens: List[str]
- if negative_prompt is None:
- uncond_tokens = [""] * batch_size
- elif type(prompt) is not type(negative_prompt):
- raise TypeError(
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = negative_prompt
-
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
- negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
-
- negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
- uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
-
- seq_len = uncond_text_encoder_hidden_states.shape[1]
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
- batch_size * num_images_per_prompt, seq_len, -1
- )
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- # done duplicates
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
- text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
-
- text_mask = torch.cat([uncond_text_mask, text_mask])
-
- return prompt_embeds, text_encoder_hidden_states, text_mask
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.text_encoder, self.prior]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.prior_hook = hook
-
- _, hook = cpu_offload_with_hook(self.image_encoder, device, prev_module_hook=self.prior_hook)
-
- self.final_offload_hook = hook
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]],
- negative_prompt: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: int = 1,
- num_inference_steps: int = 25,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- guidance_scale: float = 4.0,
- output_type: Optional[str] = "pt",
- return_dict: bool = True,
- ):
- """
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`):
- The prompt or prompts to guide the image generation.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
- if `guidance_scale` is less than `1`).
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
- tensor will be generated by sampling using the supplied random `generator`.
- guidance_scale (`float`, *optional*, defaults to 4.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- output_type (`str`, *optional*, defaults to `"pt"`):
- The output format of the generate image. Choose between: `"np"` (`np.array`) or `"pt"`
- (`torch.Tensor`).
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Examples:
-
- Returns:
- [`KandinskyPriorPipelineOutput`] or `tuple`
- """
-
- if isinstance(prompt, str):
- prompt = [prompt]
- elif not isinstance(prompt, list):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
- if isinstance(negative_prompt, str):
- negative_prompt = [negative_prompt]
- elif not isinstance(negative_prompt, list) and negative_prompt is not None:
- raise ValueError(f"`negative_prompt` has to be of type `str` or `list` but is {type(negative_prompt)}")
-
- # if the negative prompt is defined we double the batch size to
- # directly retrieve the negative prompt embedding
- if negative_prompt is not None:
- prompt = prompt + negative_prompt
- negative_prompt = 2 * negative_prompt
-
- device = self._execution_device
-
- batch_size = len(prompt)
- batch_size = batch_size * num_images_per_prompt
-
- do_classifier_free_guidance = guidance_scale > 1.0
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
- prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
- )
-
- # prior
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- prior_timesteps_tensor = self.scheduler.timesteps
-
- embedding_dim = self.prior.config.embedding_dim
-
- latents = self.prepare_latents(
- (batch_size, embedding_dim),
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- self.scheduler,
- )
-
- for i, t in enumerate(self.progress_bar(prior_timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- predicted_image_embedding = self.prior(
- latent_model_input,
- timestep=t,
- proj_embedding=prompt_embeds,
- encoder_hidden_states=text_encoder_hidden_states,
- attention_mask=text_mask,
- ).predicted_image_embedding
-
- if do_classifier_free_guidance:
- predicted_image_embedding_uncond, predicted_image_embedding_text = predicted_image_embedding.chunk(2)
- predicted_image_embedding = predicted_image_embedding_uncond + guidance_scale * (
- predicted_image_embedding_text - predicted_image_embedding_uncond
- )
-
- if i + 1 == prior_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = prior_timesteps_tensor[i + 1]
-
- latents = self.scheduler.step(
- predicted_image_embedding,
- timestep=t,
- sample=latents,
- generator=generator,
- prev_timestep=prev_timestep,
- ).prev_sample
-
- latents = self.prior.post_process_latents(latents)
-
- image_embeddings = latents
-
- # if a negative prompt has been defined, we split the image embedding into two halves
- if negative_prompt is None:
- zero_embeds = self.get_zero_embed(latents.shape[0], device=latents.device)
-
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
- else:
- image_embeddings, zero_embeds = image_embeddings.chunk(2)
-
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.prior_hook.offload()
-
- if output_type not in ["pt", "np"]:
- raise ValueError(f"Only the output types `pt` and `np` are supported not output_type={output_type}")
-
- if output_type == "np":
- image_embeddings = image_embeddings.cpu().numpy()
- zero_embeds = zero_embeds.cpu().numpy()
-
- if not return_dict:
- return (image_embeddings, zero_embeds)
-
- return KandinskyPriorPipelineOutput(image_embeds=image_embeddings, negative_image_embeds=zero_embeds)
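The guidance step inside the denoising loop above is ordinary classifier-free guidance applied to the predicted image embedding: the unconditional prediction is pushed along the direction of the text-conditioned prediction, scaled by `guidance_scale`. A standalone illustration of that one line with toy tensors (not pipeline outputs):

```python
import torch

guidance_scale = 4.0
# Toy stand-ins for the two halves returned by predicted_image_embedding.chunk(2).
pred_uncond = torch.zeros(1, 4)
pred_text = torch.ones(1, 4)

guided = pred_uncond + guidance_scale * (pred_text - pred_uncond)
print(guided)  # tensor([[4., 4., 4., 4.]])
```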
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_image_processor.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_image_processor.py
deleted file mode 100644
index c2cd6f4a04f413e599f8c0dba52dbdfeda0a4e3f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_image_processor.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-
-import numpy as np
-import PIL
-import torch
-
-from diffusers.image_processor import VaeImageProcessor
-
-
-class ImageProcessorTest(unittest.TestCase):
- @property
- def dummy_sample(self):
- batch_size = 1
- num_channels = 3
- height = 8
- width = 8
-
- sample = torch.rand((batch_size, num_channels, height, width))
-
- return sample
-
- def to_np(self, image):
- if isinstance(image[0], PIL.Image.Image):
- return np.stack([np.array(i) for i in image], axis=0)
- elif isinstance(image, torch.Tensor):
- return image.cpu().numpy().transpose(0, 2, 3, 1)
- return image
-
- def test_vae_image_processor_pt(self):
- image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
-
- input_pt = self.dummy_sample
- input_np = self.to_np(input_pt)
-
- for output_type in ["pt", "np", "pil"]:
- out = image_processor.postprocess(
- image_processor.preprocess(input_pt),
- output_type=output_type,
- )
- out_np = self.to_np(out)
- in_np = (input_np * 255).round() if output_type == "pil" else input_np
- assert (
- np.abs(in_np - out_np).max() < 1e-6
- ), f"decoded output does not match input for output_type {output_type}"
-
- def test_vae_image_processor_np(self):
- image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
- input_np = self.dummy_sample.cpu().numpy().transpose(0, 2, 3, 1)
-
- for output_type in ["pt", "np", "pil"]:
- out = image_processor.postprocess(image_processor.preprocess(input_np), output_type=output_type)
-
- out_np = self.to_np(out)
- in_np = (input_np * 255).round() if output_type == "pil" else input_np
- assert (
- np.abs(in_np - out_np).max() < 1e-6
- ), f"decoded output does not match input for output_type {output_type}"
-
- def test_vae_image_processor_pil(self):
- image_processor = VaeImageProcessor(do_resize=False, do_normalize=True)
-
- input_np = self.dummy_sample.cpu().numpy().transpose(0, 2, 3, 1)
- input_pil = image_processor.numpy_to_pil(input_np)
-
- for output_type in ["pt", "np", "pil"]:
- out = image_processor.postprocess(image_processor.preprocess(input_pil), output_type=output_type)
- for i, o in zip(input_pil, out):
- in_np = np.array(i)
- out_np = self.to_np(out) if output_type == "pil" else (self.to_np(out) * 255).round()
- assert (
- np.abs(in_np - out_np).max() < 1e-6
- ), f"decoded output does not match input for output_type {output_type}"
-
- def test_preprocess_input_3d(self):
- image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
-
- input_pt_4d = self.dummy_sample
- input_pt_3d = input_pt_4d.squeeze(0)
-
- out_pt_4d = image_processor.postprocess(
- image_processor.preprocess(input_pt_4d),
- output_type="np",
- )
- out_pt_3d = image_processor.postprocess(
- image_processor.preprocess(input_pt_3d),
- output_type="np",
- )
-
- input_np_4d = self.to_np(self.dummy_sample)
- input_np_3d = input_np_4d.squeeze(0)
-
- out_np_4d = image_processor.postprocess(
- image_processor.preprocess(input_np_4d),
- output_type="np",
- )
- out_np_3d = image_processor.postprocess(
- image_processor.preprocess(input_np_3d),
- output_type="np",
- )
-
- assert np.abs(out_pt_4d - out_pt_3d).max() < 1e-6
- assert np.abs(out_np_4d - out_np_3d).max() < 1e-6
-
- def test_preprocess_input_list(self):
- image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
-
- input_pt_4d = self.dummy_sample
- input_pt_list = list(input_pt_4d)
-
- out_pt_4d = image_processor.postprocess(
- image_processor.preprocess(input_pt_4d),
- output_type="np",
- )
-
- out_pt_list = image_processor.postprocess(
- image_processor.preprocess(input_pt_list),
- output_type="np",
- )
-
- input_np_4d = self.to_np(self.dummy_sample)
- input_np_list = list(input_np_4d)
-
- out_np_4d = image_processor.postprocess(
- image_processor.preprocess(input_np_4d),
- output_type="np",
- )
-
- out_np_list = image_processor.postprocess(
- image_processor.preprocess(input_np_list),
- output_type="np",
- )
-
- assert np.abs(out_pt_4d - out_pt_list).max() < 1e-6
- assert np.abs(out_np_4d - out_np_list).max() < 1e-6
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/__init__.py
deleted file mode 100644
index ab1e88bc686d5c2fe72b3114cb2b3e372e73a0f8..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/mask/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from .mask_target import mask_target
-from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks
-from .utils import encode_mask_results, split_combined_polys
-
-__all__ = [
- 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks',
- 'PolygonMasks', 'encode_mask_results'
-]
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py
deleted file mode 100644
index e067b0121cf8b8230c0c9c6b8cfd41f56be4e298..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/paa_head.py
+++ /dev/null
@@ -1,671 +0,0 @@
-import numpy as np
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import multi_apply, multiclass_nms
-from mmdet.core.bbox.iou_calculators import bbox_overlaps
-from mmdet.models import HEADS
-from mmdet.models.dense_heads import ATSSHead
-
-EPS = 1e-12
-try:
- import sklearn.mixture as skm
-except ImportError:
- skm = None
-
-
-def levels_to_images(mlvl_tensor):
- """Concat multi-level feature maps by image.
-
- [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]
- Convert the shape of each element in mlvl_tensor from (N, C, H, W) to
- (N, H*W, C), split each element into N tensors of shape (H*W, C), and
- concatenate the tensors of the same image across all levels along dim 0.
-
- Args:
- mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from
- corresponding level. Each element is of shape (N, C, H, W)
-
- Returns:
- list[torch.Tensor]: A list that contains N tensors and each tensor is
- of shape (num_elements, C)
- """
- batch_size = mlvl_tensor[0].size(0)
- batch_list = [[] for _ in range(batch_size)]
- channels = mlvl_tensor[0].size(1)
- for t in mlvl_tensor:
- t = t.permute(0, 2, 3, 1)
- t = t.view(batch_size, -1, channels).contiguous()
- for img in range(batch_size):
- batch_list[img].append(t[img])
- return [torch.cat(item, 0) for item in batch_list]
-
-
-@HEADS.register_module()
-class PAAHead(ATSSHead):
- """Head of PAAAssignment: Probabilistic Anchor Assignment with IoU
- Prediction for Object Detection.
-
- Code is modified from the official PAA github repo.
-
- More details can be found in the PAA paper,
- "Probabilistic Anchor Assignment with IoU Prediction
- for Object Detection".
-
- Args:
- topk (int): Select topk samples with smallest loss in
- each level.
- score_voting (bool): Whether to use score voting in post-process.
- covariance_type : String describing the type of covariance parameters
- to be used in :class:`sklearn.mixture.GaussianMixture`.
- It must be one of:
-
- - 'full': each component has its own general covariance matrix
- - 'tied': all components share the same general covariance matrix
- - 'diag': each component has its own diagonal covariance matrix
- - 'spherical': each component has its own single variance
- Default: 'diag'. From 'full' to 'spherical', the gmm fitting
- process is faster yet the performance could be influenced. For most
- cases, 'diag' should be a good choice.
- """
-
- def __init__(self,
- *args,
- topk=9,
- score_voting=True,
- covariance_type='diag',
- **kwargs):
- # topk used in paa reassign process
- self.topk = topk
- self.with_score_voting = score_voting
- self.covariance_type = covariance_type
- super(PAAHead, self).__init__(*args, **kwargs)
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'iou_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- iou_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- iou_preds (list[Tensor]): iou_preds for each scale
- level with shape (N, num_anchors * 1, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor] | None): Specify which bounding
- boxes can be ignored when are computing the loss.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss gmm_assignment.
- """
-
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- )
- (labels, labels_weight, bboxes_target, bboxes_weight, pos_inds,
- pos_gt_index) = cls_reg_targets
- cls_scores = levels_to_images(cls_scores)
- cls_scores = [
- item.reshape(-1, self.cls_out_channels) for item in cls_scores
- ]
- bbox_preds = levels_to_images(bbox_preds)
- bbox_preds = [item.reshape(-1, 4) for item in bbox_preds]
- iou_preds = levels_to_images(iou_preds)
- iou_preds = [item.reshape(-1, 1) for item in iou_preds]
- pos_losses_list, = multi_apply(self.get_pos_loss, anchor_list,
- cls_scores, bbox_preds, labels,
- labels_weight, bboxes_target,
- bboxes_weight, pos_inds)
-
- with torch.no_grad():
- reassign_labels, reassign_label_weight, \
- reassign_bbox_weights, num_pos = multi_apply(
- self.paa_reassign,
- pos_losses_list,
- labels,
- labels_weight,
- bboxes_weight,
- pos_inds,
- pos_gt_index,
- anchor_list)
- num_pos = sum(num_pos)
- # convert all tensor list to a flatten tensor
- cls_scores = torch.cat(cls_scores, 0).view(-1, cls_scores[0].size(-1))
- bbox_preds = torch.cat(bbox_preds, 0).view(-1, bbox_preds[0].size(-1))
- iou_preds = torch.cat(iou_preds, 0).view(-1, iou_preds[0].size(-1))
- labels = torch.cat(reassign_labels, 0).view(-1)
- flatten_anchors = torch.cat(
- [torch.cat(item, 0) for item in anchor_list])
- labels_weight = torch.cat(reassign_label_weight, 0).view(-1)
- bboxes_target = torch.cat(bboxes_target,
- 0).view(-1, bboxes_target[0].size(-1))
-
- pos_inds_flatten = ((labels >= 0)
- &
- (labels < self.num_classes)).nonzero().reshape(-1)
-
- losses_cls = self.loss_cls(
- cls_scores,
- labels,
- labels_weight,
- avg_factor=max(num_pos, len(img_metas))) # avoid num_pos=0
- if num_pos:
- pos_bbox_pred = self.bbox_coder.decode(
- flatten_anchors[pos_inds_flatten],
- bbox_preds[pos_inds_flatten])
- pos_bbox_target = bboxes_target[pos_inds_flatten]
- iou_target = bbox_overlaps(
- pos_bbox_pred.detach(), pos_bbox_target, is_aligned=True)
- losses_iou = self.loss_centerness(
- iou_preds[pos_inds_flatten],
- iou_target.unsqueeze(-1),
- avg_factor=num_pos)
- losses_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- iou_target.clamp(min=EPS),
- avg_factor=iou_target.sum())
- else:
- losses_iou = iou_preds.sum() * 0
- losses_bbox = bbox_preds.sum() * 0
-
- return dict(
- loss_cls=losses_cls, loss_bbox=losses_bbox, loss_iou=losses_iou)
-
- def get_pos_loss(self, anchors, cls_score, bbox_pred, label, label_weight,
- bbox_target, bbox_weight, pos_inds):
- """Calculate loss of all potential positive samples obtained from first
- match process.
-
- Args:
- anchors (list[Tensor]): Anchors of each scale.
- cls_score (Tensor): Box scores of single image with shape
- (num_anchors, num_classes)
- bbox_pred (Tensor): Box energies / deltas of single image
- with shape (num_anchors, 4)
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_target (Tensor): Regression target of each anchor with
- shape (num_anchors, 4).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Indices of all positive samples obtained
- from the first assignment process.
-
- Returns:
- Tensor: Losses of all positive samples in single image.
- """
- if not len(pos_inds):
- return cls_score.new([]),
- anchors_all_level = torch.cat(anchors, 0)
- pos_scores = cls_score[pos_inds]
- pos_bbox_pred = bbox_pred[pos_inds]
- pos_label = label[pos_inds]
- pos_label_weight = label_weight[pos_inds]
- pos_bbox_target = bbox_target[pos_inds]
- pos_bbox_weight = bbox_weight[pos_inds]
- pos_anchors = anchors_all_level[pos_inds]
- pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred)
-
- # to keep loss dimension
- loss_cls = self.loss_cls(
- pos_scores,
- pos_label,
- pos_label_weight,
- avg_factor=self.loss_cls.loss_weight,
- reduction_override='none')
-
- loss_bbox = self.loss_bbox(
- pos_bbox_pred,
- pos_bbox_target,
- pos_bbox_weight,
- avg_factor=self.loss_cls.loss_weight,
- reduction_override='none')
-
- loss_cls = loss_cls.sum(-1)
- pos_loss = loss_bbox + loss_cls
- return pos_loss,
-
- def paa_reassign(self, pos_losses, label, label_weight, bbox_weight,
- pos_inds, pos_gt_inds, anchors):
- """Fit loss to GMM distribution and separate positive, ignore, negative
- samples again with GMM model.
-
- Args:
- pos_losses (Tensor): Losses of all positive samples in
- single image.
- label (Tensor): classification target of each anchor with
- shape (num_anchors,)
- label_weight (Tensor): Classification loss weight of each
- anchor with shape (num_anchors).
- bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- pos_inds (Tensor): Indices of all positive samples obtained
- from the first assignment process.
- pos_gt_inds (Tensor): Ground-truth indices of all positive samples
- obtained from the first assignment process.
- anchors (list[Tensor]): Anchors of each scale.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - label (Tensor): classification target of each anchor after
- paa assign, with shape (num_anchors,)
- - label_weight (Tensor): Classification loss weight of each
- anchor after paa assign, with shape (num_anchors).
- - bbox_weight (Tensor): Bbox weight of each anchor with shape
- (num_anchors, 4).
- - num_pos (int): The number of positive samples after paa
- assign.
- """
- if not len(pos_inds):
- return label, label_weight, bbox_weight, 0
- label = label.clone()
- label_weight = label_weight.clone()
- bbox_weight = bbox_weight.clone()
- num_gt = pos_gt_inds.max() + 1
- num_level = len(anchors)
- num_anchors_each_level = [item.size(0) for item in anchors]
- num_anchors_each_level.insert(0, 0)
- inds_level_interval = np.cumsum(num_anchors_each_level)
- pos_level_mask = []
- for i in range(num_level):
- mask = (pos_inds >= inds_level_interval[i]) & (
- pos_inds < inds_level_interval[i + 1])
- pos_level_mask.append(mask)
- pos_inds_after_paa = [label.new_tensor([])]
- ignore_inds_after_paa = [label.new_tensor([])]
- for gt_ind in range(num_gt):
- pos_inds_gmm = []
- pos_loss_gmm = []
- gt_mask = pos_gt_inds == gt_ind
- for level in range(num_level):
- level_mask = pos_level_mask[level]
- level_gt_mask = level_mask & gt_mask
- value, topk_inds = pos_losses[level_gt_mask].topk(
- min(level_gt_mask.sum(), self.topk), largest=False)
- pos_inds_gmm.append(pos_inds[level_gt_mask][topk_inds])
- pos_loss_gmm.append(value)
- pos_inds_gmm = torch.cat(pos_inds_gmm)
- pos_loss_gmm = torch.cat(pos_loss_gmm)
- # the GMM needs at least two samples to fit
- if len(pos_inds_gmm) < 2:
- continue
- device = pos_inds_gmm.device
- pos_loss_gmm, sort_inds = pos_loss_gmm.sort()
- pos_inds_gmm = pos_inds_gmm[sort_inds]
- pos_loss_gmm = pos_loss_gmm.view(-1, 1).cpu().numpy()
- min_loss, max_loss = pos_loss_gmm.min(), pos_loss_gmm.max()
- means_init = np.array([min_loss, max_loss]).reshape(2, 1)
- weights_init = np.array([0.5, 0.5])
- precisions_init = np.array([1.0, 1.0]).reshape(2, 1, 1) # full
- if self.covariance_type == 'spherical':
- precisions_init = precisions_init.reshape(2)
- elif self.covariance_type == 'diag':
- precisions_init = precisions_init.reshape(2, 1)
- elif self.covariance_type == 'tied':
- precisions_init = np.array([[1.0]])
- if skm is None:
- raise ImportError('Please run "pip install scikit-learn" '
- 'to install scikit-learn first.')
- gmm = skm.GaussianMixture(
- 2,
- weights_init=weights_init,
- means_init=means_init,
- precisions_init=precisions_init,
- covariance_type=self.covariance_type)
- gmm.fit(pos_loss_gmm)
- gmm_assignment = gmm.predict(pos_loss_gmm)
- scores = gmm.score_samples(pos_loss_gmm)
- gmm_assignment = torch.from_numpy(gmm_assignment).to(device)
- scores = torch.from_numpy(scores).to(device)
-
- pos_inds_temp, ignore_inds_temp = self.gmm_separation_scheme(
- gmm_assignment, scores, pos_inds_gmm)
- pos_inds_after_paa.append(pos_inds_temp)
- ignore_inds_after_paa.append(ignore_inds_temp)
-
- pos_inds_after_paa = torch.cat(pos_inds_after_paa)
- ignore_inds_after_paa = torch.cat(ignore_inds_after_paa)
- reassign_mask = (pos_inds.unsqueeze(1) != pos_inds_after_paa).all(1)
- reassign_ids = pos_inds[reassign_mask]
- label[reassign_ids] = self.num_classes
- label_weight[ignore_inds_after_paa] = 0
- bbox_weight[reassign_ids] = 0
- num_pos = len(pos_inds_after_paa)
- return label, label_weight, bbox_weight, num_pos
-
- def gmm_separation_scheme(self, gmm_assignment, scores, pos_inds_gmm):
- """A general separation scheme for gmm model.
-
- It separates a GMM distribution of candidate samples into three
- parts (0, 1, and an uncertain area); other separation schemes can
- be implemented by rewriting this function.
-
- Args:
- gmm_assignment (Tensor): The prediction of GMM which is of shape
- (num_samples,). The 0/1 value indicates the distribution
- that each sample comes from.
- scores (Tensor): The probability of sample coming from the
- fit GMM distribution. The tensor is of shape (num_samples,).
- pos_inds_gmm (Tensor): All the indexes of samples which are used
- to fit GMM model. The tensor is of shape (num_samples,)
-
- Returns:
- tuple[Tensor]: The indices of positive and ignored samples.
-
- - pos_inds_temp (Tensor): Indices of positive samples.
- - ignore_inds_temp (Tensor): Indices of ignore samples.
- """
- # The implementation is (c) in Fig. 3 of the original paper instead of (b).
- # You can refer to issues such as
- # https://github.com/kkhoot/PAA/issues/8 and
- # https://github.com/kkhoot/PAA/issues/9.
- fgs = gmm_assignment == 0
- pos_inds_temp = fgs.new_tensor([], dtype=torch.long)
- ignore_inds_temp = fgs.new_tensor([], dtype=torch.long)
- if fgs.nonzero().numel():
- _, pos_thr_ind = scores[fgs].topk(1)
- pos_inds_temp = pos_inds_gmm[fgs][:pos_thr_ind + 1]
- ignore_inds_temp = pos_inds_gmm.new_tensor([])
- return pos_inds_temp, ignore_inds_temp
-
- def get_targets(
- self,
- anchor_list,
- valid_flag_list,
- gt_bboxes_list,
- img_metas,
- gt_bboxes_ignore_list=None,
- gt_labels_list=None,
- label_channels=1,
- unmap_outputs=True,
- ):
- """Get targets for PAA head.
-
- This method is almost the same as `AnchorHead.get_targets()`. We directly
- return the results from `_get_targets_single` instead of mapping them to
- levels with the `images_to_levels` function.
-
- Args:
- anchor_list (list[list[Tensor]]): Multi level anchors of each
- image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, 4).
- valid_flag_list (list[list[Tensor]]): Multi level valid flags of
- each image. The outer list indicates images, and the inner list
- corresponds to feature levels of the image. Each element of
- the inner list is a tensor of shape (num_anchors, )
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
- img_metas (list[dict]): Meta info of each image.
- gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be
- ignored.
- gt_labels_list (list[Tensor]): Ground truth labels of each box.
- label_channels (int): Channel of label.
- unmap_outputs (bool): Whether to map outputs back to the original
- set of anchors.
-
- Returns:
- tuple: Usually returns a tuple containing learning targets.
-
- - labels (list[Tensor]): Labels of all anchors, each with
- shape (num_anchors,).
- - label_weights (list[Tensor]): Label weights of all anchors,
- each with shape (num_anchors,).
- - bbox_targets (list[Tensor]): BBox targets of all anchors,
- each with shape (num_anchors, 4).
- - bbox_weights (list[Tensor]): BBox weights of all anchors,
- each with shape (num_anchors, 4).
- - pos_inds (list[Tensor]): Indices of the positive samples
- among all anchors of each image.
- - gt_inds (list[Tensor]): Ground-truth indices of the positive
- samples among all anchors of each image.
- """
-
- num_imgs = len(img_metas)
- assert len(anchor_list) == len(valid_flag_list) == num_imgs
- concat_anchor_list = []
- concat_valid_flag_list = []
- for i in range(num_imgs):
- assert len(anchor_list[i]) == len(valid_flag_list[i])
- concat_anchor_list.append(torch.cat(anchor_list[i]))
- concat_valid_flag_list.append(torch.cat(valid_flag_list[i]))
-
- # compute targets for each image
- if gt_bboxes_ignore_list is None:
- gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
- if gt_labels_list is None:
- gt_labels_list = [None for _ in range(num_imgs)]
- results = multi_apply(
- self._get_targets_single,
- concat_anchor_list,
- concat_valid_flag_list,
- gt_bboxes_list,
- gt_bboxes_ignore_list,
- gt_labels_list,
- img_metas,
- label_channels=label_channels,
- unmap_outputs=unmap_outputs)
-
- (labels, label_weights, bbox_targets, bbox_weights, valid_pos_inds,
- valid_neg_inds, sampling_result) = results
-
- # Due to the valid flags of anchors, we have to calculate the real
- # pos_inds in the original anchor set.
- pos_inds = []
- for i, single_labels in enumerate(labels):
- pos_mask = (0 <= single_labels) & (
- single_labels < self.num_classes)
- pos_inds.append(pos_mask.nonzero().view(-1))
-
- gt_inds = [item.pos_assigned_gt_inds for item in sampling_result]
- return (labels, label_weights, bbox_targets, bbox_weights, pos_inds,
- gt_inds)
-
- def _get_targets_single(self,
- flat_anchors,
- valid_flags,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=1,
- unmap_outputs=True):
- """Compute regression and classification targets for anchors in a
- single image.
-
- This method is the same as `AnchorHead._get_targets_single()`.
- """
- assert unmap_outputs, 'We must map outputs back to the original ' \
- 'set of anchors in PAAHead'
- return super(ATSSHead, self)._get_targets_single(
- flat_anchors,
- valid_flags,
- gt_bboxes,
- gt_bboxes_ignore,
- gt_labels,
- img_meta,
- label_channels=1,
- unmap_outputs=True)
-
- def _get_bboxes(self,
- cls_scores,
- bbox_preds,
- iou_preds,
- mlvl_anchors,
- img_shapes,
- scale_factors,
- cfg,
- rescale=False,
- with_nms=True):
- """Transform outputs for a single batch item into labeled boxes.
-
- This method is almost the same as `ATSSHead._get_bboxes()`.
- We use sqrt(iou_preds * cls_scores) in the NMS process instead of just
- cls_scores. Besides, score voting is used when ``score_voting``
- is set to True.
- """
- assert with_nms, 'PAA only supports "with_nms=True" now'
- assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors)
- batch_size = cls_scores[0].shape[0]
-
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_iou_preds = []
- for cls_score, bbox_pred, iou_preds, anchors in zip(
- cls_scores, bbox_preds, iou_preds, mlvl_anchors):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
-
- scores = cls_score.permute(0, 2, 3, 1).reshape(
- batch_size, -1, self.cls_out_channels).sigmoid()
- bbox_pred = bbox_pred.permute(0, 2, 3,
- 1).reshape(batch_size, -1, 4)
- iou_preds = iou_preds.permute(0, 2, 3, 1).reshape(batch_size,
- -1).sigmoid()
-
- nms_pre = cfg.get('nms_pre', -1)
- if nms_pre > 0 and scores.shape[1] > nms_pre:
- max_scores, _ = (scores * iou_preds[..., None]).sqrt().max(-1)
- _, topk_inds = max_scores.topk(nms_pre)
- batch_inds = torch.arange(batch_size).view(
- -1, 1).expand_as(topk_inds).long()
- anchors = anchors[topk_inds, :]
- bbox_pred = bbox_pred[batch_inds, topk_inds, :]
- scores = scores[batch_inds, topk_inds, :]
- iou_preds = iou_preds[batch_inds, topk_inds]
- else:
- anchors = anchors.expand_as(bbox_pred)
-
- bboxes = self.bbox_coder.decode(
- anchors, bbox_pred, max_shape=img_shapes)
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_iou_preds.append(iou_preds)
-
- batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1)
- if rescale:
- batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor(
- scale_factors).unsqueeze(1)
- batch_mlvl_scores = torch.cat(mlvl_scores, dim=1)
- # Add a dummy background class to the backend when using sigmoid
- # remember that we set FG labels to [0, num_class-1] since mmdet v2.0
- # BG cat_id: num_class
- padding = batch_mlvl_scores.new_zeros(batch_size,
- batch_mlvl_scores.shape[1], 1)
- batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1)
- batch_mlvl_iou_preds = torch.cat(mlvl_iou_preds, dim=1)
- batch_mlvl_nms_scores = (batch_mlvl_scores *
- batch_mlvl_iou_preds[..., None]).sqrt()
-
- det_results = []
- for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes,
- batch_mlvl_nms_scores):
- det_bbox, det_label = multiclass_nms(
- mlvl_bboxes,
- mlvl_scores,
- cfg.score_thr,
- cfg.nms,
- cfg.max_per_img,
- score_factors=None)
- if self.with_score_voting and len(det_bbox) > 0:
- det_bbox, det_label = self.score_voting(
- det_bbox, det_label, mlvl_bboxes, mlvl_scores,
- cfg.score_thr)
- det_results.append(tuple([det_bbox, det_label]))
-
- return det_results
-
- def score_voting(self, det_bboxes, det_labels, mlvl_bboxes,
- mlvl_nms_scores, score_thr):
- """Implementation of score voting method works on each remaining boxes
- after NMS procedure.
-
- Args:
- det_bboxes (Tensor): Remaining boxes after NMS procedure,
- with shape (k, 5), each dimension means
- (x1, y1, x2, y2, score).
- det_labels (Tensor): The labels of the remaining boxes, with shape
- (k, 1). Labels are 0-based.
- mlvl_bboxes (Tensor): All boxes before the NMS procedure,
- with shape (num_anchors, 4).
- mlvl_nms_scores (Tensor): The scores of all boxes used in the
- NMS procedure, with shape (num_anchors, num_class).
- score_thr (float): The score threshold of bboxes.
-
- Returns:
- tuple: Usually returns a tuple containing voting results.
-
- - det_bboxes_voted (Tensor): Remaining boxes after
- score voting procedure, with shape (k, 5), each
- dimension means (x1, y1, x2, y2, score).
- - det_labels_voted (Tensor): Label of remaining bboxes
- after voting, with shape (num_anchors,).
- """
- candidate_mask = mlvl_nms_scores > score_thr
- candidate_mask_nonzeros = candidate_mask.nonzero()
- candidate_inds = candidate_mask_nonzeros[:, 0]
- candidate_labels = candidate_mask_nonzeros[:, 1]
- candidate_bboxes = mlvl_bboxes[candidate_inds]
- candidate_scores = mlvl_nms_scores[candidate_mask]
- det_bboxes_voted = []
- det_labels_voted = []
- for cls in range(self.cls_out_channels):
- candidate_cls_mask = candidate_labels == cls
- if not candidate_cls_mask.any():
- continue
- candidate_cls_scores = candidate_scores[candidate_cls_mask]
- candidate_cls_bboxes = candidate_bboxes[candidate_cls_mask]
- det_cls_mask = det_labels == cls
- det_cls_bboxes = det_bboxes[det_cls_mask].view(
- -1, det_bboxes.size(-1))
- det_candidate_ious = bbox_overlaps(det_cls_bboxes[:, :4],
- candidate_cls_bboxes)
- for det_ind in range(len(det_cls_bboxes)):
- single_det_ious = det_candidate_ious[det_ind]
- pos_ious_mask = single_det_ious > 0.01
- pos_ious = single_det_ious[pos_ious_mask]
- pos_bboxes = candidate_cls_bboxes[pos_ious_mask]
- pos_scores = candidate_cls_scores[pos_ious_mask]
- pis = (torch.exp(-(1 - pos_ious)**2 / 0.025) *
- pos_scores)[:, None]
- voted_box = torch.sum(
- pis * pos_bboxes, dim=0) / torch.sum(
- pis, dim=0)
- voted_score = det_cls_bboxes[det_ind][-1:][None, :]
- det_bboxes_voted.append(
- torch.cat((voted_box[None, :], voted_score), dim=1))
- det_labels_voted.append(cls)
-
- det_bboxes_voted = torch.cat(det_bboxes_voted, dim=0)
- det_labels_voted = det_labels.new_tensor(det_labels_voted)
- return det_bboxes_voted, det_labels_voted
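-
-
-# Illustrative sketch (not part of the original mmdet module): the core of the
-# GMM step used by `paa_reassign()` / `gmm_separation_scheme()` above, run on a
-# toy 1-D loss vector. Requires scikit-learn; the loss values are made up.
-if __name__ == '__main__':
-    import numpy as np
-    from sklearn.mixture import GaussianMixture
-
-    # sorted candidate losses for one ground-truth box (hypothetical values)
-    losses = np.sort(np.array([0.20, 0.25, 0.30, 1.40, 1.60, 1.80]))
-    X = losses.reshape(-1, 1)
-
-    gmm = GaussianMixture(
-        2,
-        weights_init=np.array([0.5, 0.5]),
-        means_init=np.array([X.min(), X.max()]).reshape(2, 1),
-        precisions_init=np.array([1.0, 1.0]).reshape(2, 1),  # 'diag'
-        covariance_type='diag')
-    gmm.fit(X)
-
-    assignment = gmm.predict(X)    # component 0 is initialised at the low-loss mean
-    scores = gmm.score_samples(X)  # log-likelihood of each sample under the GMM
-    fgs = assignment == 0
-    if fgs.any():
-        # scheme (c): keep candidates up to the highest-scoring foreground sample
-        pos_thr_ind = int(scores[fgs].argmax())
-        kept = np.flatnonzero(fgs)[:pos_thr_ind + 1]
-        print('kept candidate indices:', kept, 'with losses:', losses[kept])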
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py
deleted file mode 100644
index 983a2d9db71a3b2b4980996725fdafb0b412b413..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_mask_head.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from mmdet.models.builder import HEADS
-from mmdet.models.utils import ResLayer, SimplifiedBasicBlock
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class SCNetMaskHead(FCNMaskHead):
- """Mask head for `SCNet `_.
-
- Args:
- conv_to_res (bool, optional): if True, change the conv layers to
- ``SimplifiedBasicBlock``.
- """
-
- def __init__(self, conv_to_res=True, **kwargs):
- super(SCNetMaskHead, self).__init__(**kwargs)
- self.conv_to_res = conv_to_res
- if conv_to_res:
- assert self.conv_kernel_size == 3
- self.num_res_blocks = self.num_convs // 2
- self.convs = ResLayer(
- SimplifiedBasicBlock,
- self.in_channels,
- self.conv_out_channels,
- self.num_res_blocks,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py
deleted file mode 100644
index 3db6140cb97da1d202fd464d01f793276effa629..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/apcnet/apcnet_r50-d8_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
- '../_base_/models/apcnet_r50-d8.py',
- '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(align_corners=True),
- auxiliary_head=dict(align_corners=True),
- test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
diff --git a/spaces/Armored-Atom/Image-To-Motion/app.py b/spaces/Armored-Atom/Image-To-Motion/app.py
deleted file mode 100644
index 5eeae5366ce223997c6197e5af8b5659c2abacd3..0000000000000000000000000000000000000000
--- a/spaces/Armored-Atom/Image-To-Motion/app.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import gradio as gr
-import os
-import shutil
-import torch
-from PIL import Image
-import argparse
-import pathlib
-
-os.system("git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model")
-os.chdir("Thin-Plate-Spline-Motion-Model")
-os.system("mkdir checkpoints")
-os.system("wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar")
-
-
-
-title = "# Thin-Plate Spline Motion Model for Image Animation"
-DESCRIPTION = '''### Gradio demo for Thin-Plate Spline Motion Model for Image Animation, CVPR 2022. [Paper][Github Code]
-
-
-'''
-FOOTER = ''
-
-
-def get_style_image_path(style_name: str) -> str:
- base_path = 'assets'
- filenames = {
- 'source': 'source.png',
- 'driving': 'driving.mp4',
- }
- return f'{base_path}/{filenames[style_name]}'
-
-
-def get_style_image_markdown_text(style_name: str) -> str:
- url = get_style_image_path(style_name)
- return f''
-
-
-def update_style_image(style_name: str) -> dict:
- text = get_style_image_markdown_text(style_name)
- return gr.Markdown.update(value=text)
-
-
-def set_example_image(example: list) -> dict:
- return gr.Image.update(value=example[0])
-
-def set_example_video(example: list) -> dict:
- return gr.Video.update(value=example[0])
-
-def inference(img,vid):
- if not os.path.exists('temp'):
- os.system('mkdir temp')
-
- img.save("temp/image.jpg", "JPEG")
- os.system(f"python demo.py --config config/vox-256.yaml --checkpoint ./checkpoints/vox.pth.tar --source_image 'temp/image.jpg' --driving_video {vid} --result_video './temp/result.mp4' --cpu")
- return './temp/result.mp4'
-
-
-
-def main():
- with gr.Blocks(theme="huggingface", css='style.css') as demo:
- gr.Markdown(title)
- gr.Markdown(DESCRIPTION)
-
- with gr.Box():
- gr.Markdown('''## Step 1 (Provide Input Face Image)
-- Drop an image containing a face to the **Input Image**.
- - If there are multiple faces in the image, use Edit button in the upper right corner and crop the input image beforehand.
-''')
- with gr.Row():
- with gr.Column():
- with gr.Row():
- input_image = gr.Image(label='Input Image',
- type="pil")
-
- with gr.Row():
- paths = sorted(pathlib.Path('assets').glob('*.png'))
- example_images = gr.Dataset(components=[input_image],
- samples=[[path.as_posix()]
- for path in paths])
-
- with gr.Box():
- gr.Markdown('''## Step 2 (Select Driving Video)
-- Select **Style Driving Video for the face image animation**.
-''')
- with gr.Row():
- with gr.Column():
- with gr.Row():
- driving_video = gr.Video(label='Driving Video',
- format="mp4")
-
- with gr.Row():
- paths = sorted(pathlib.Path('assets').glob('*.mp4'))
- example_video = gr.Dataset(components=[driving_video],
- samples=[[path.as_posix()]
- for path in paths])
-
- with gr.Box():
- gr.Markdown('''## Step 3 (Generate Animated Image based on the Video)
-- Hit the **Generate** button. (Note: As it runs on the CPU, it takes ~ 3 minutes to generate final results.)
-''')
- with gr.Row():
- with gr.Column():
- with gr.Row():
- generate_button = gr.Button('Generate')
-
- with gr.Column():
- result = gr.Video(type="file", label="Output")
- gr.Markdown(FOOTER)
- generate_button.click(fn=inference,
- inputs=[
- input_image,
- driving_video
- ],
- outputs=result)
- example_images.click(fn=set_example_image,
- inputs=example_images,
- outputs=example_images.components)
- example_video.click(fn=set_example_video,
- inputs=example_video,
- outputs=example_video.components)
-
- demo.launch(
- enable_queue=True,
- debug=True
- )
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/spinners.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/spinners.py
deleted file mode 100644
index cf2b976f377c2656afb3d84add8d30b0fc280c03..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/spinners.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import contextlib
-import itertools
-import logging
-import sys
-import time
-from typing import IO, Generator, Optional
-
-from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.logging import get_indentation
-
-logger = logging.getLogger(__name__)
-
-
-class SpinnerInterface:
- def spin(self) -> None:
- raise NotImplementedError()
-
- def finish(self, final_status: str) -> None:
- raise NotImplementedError()
-
-
-class InteractiveSpinner(SpinnerInterface):
- def __init__(
- self,
- message: str,
- file: Optional[IO[str]] = None,
- spin_chars: str = "-\\|/",
- # Empirically, 8 updates/second looks nice
- min_update_interval_seconds: float = 0.125,
- ):
- self._message = message
- if file is None:
- file = sys.stdout
- self._file = file
- self._rate_limiter = RateLimiter(min_update_interval_seconds)
- self._finished = False
-
- self._spin_cycle = itertools.cycle(spin_chars)
-
- self._file.write(" " * get_indentation() + self._message + " ... ")
- self._width = 0
-
- def _write(self, status: str) -> None:
- assert not self._finished
- # Erase what we wrote before by backspacing to the beginning, writing
- # spaces to overwrite the old text, and then backspacing again
- backup = "\b" * self._width
- self._file.write(backup + " " * self._width + backup)
- # Now we have a blank slate to add our status
- self._file.write(status)
- self._width = len(status)
- self._file.flush()
- self._rate_limiter.reset()
-
- def spin(self) -> None:
- if self._finished:
- return
- if not self._rate_limiter.ready():
- return
- self._write(next(self._spin_cycle))
-
- def finish(self, final_status: str) -> None:
- if self._finished:
- return
- self._write(final_status)
- self._file.write("\n")
- self._file.flush()
- self._finished = True
-
-
-# Used for dumb terminals, non-interactive installs (no tty), etc.
-# We still print updates occasionally (once every 60 seconds by default) to
-# act as a keep-alive for systems like Travis-CI that take lack-of-output as
-# an indication that a task has frozen.
-class NonInteractiveSpinner(SpinnerInterface):
- def __init__(self, message: str, min_update_interval_seconds: float = 60.0) -> None:
- self._message = message
- self._finished = False
- self._rate_limiter = RateLimiter(min_update_interval_seconds)
- self._update("started")
-
- def _update(self, status: str) -> None:
- assert not self._finished
- self._rate_limiter.reset()
- logger.info("%s: %s", self._message, status)
-
- def spin(self) -> None:
- if self._finished:
- return
- if not self._rate_limiter.ready():
- return
- self._update("still running...")
-
- def finish(self, final_status: str) -> None:
- if self._finished:
- return
- self._update(f"finished with status '{final_status}'")
- self._finished = True
-
-
-class RateLimiter:
- def __init__(self, min_update_interval_seconds: float) -> None:
- self._min_update_interval_seconds = min_update_interval_seconds
- self._last_update: float = 0
-
- def ready(self) -> bool:
- now = time.time()
- delta = now - self._last_update
- return delta >= self._min_update_interval_seconds
-
- def reset(self) -> None:
- self._last_update = time.time()
-
-
-@contextlib.contextmanager
-def open_spinner(message: str) -> Generator[SpinnerInterface, None, None]:
- # Interactive spinner goes directly to sys.stdout rather than being routed
- # through the logging system, but it acts like it has level INFO,
- # i.e. it's only displayed if we're at level INFO or better.
- # Non-interactive spinner goes through the logging system, so it is always
- # in sync with logging configuration.
- if sys.stdout.isatty() and logger.getEffectiveLevel() <= logging.INFO:
- spinner: SpinnerInterface = InteractiveSpinner(message)
- else:
- spinner = NonInteractiveSpinner(message)
- try:
- with hidden_cursor(sys.stdout):
- yield spinner
- except KeyboardInterrupt:
- spinner.finish("canceled")
- raise
- except Exception:
- spinner.finish("error")
- raise
- else:
- spinner.finish("done")
-
-
-HIDE_CURSOR = "\x1b[?25l"
-SHOW_CURSOR = "\x1b[?25h"
-
-
-@contextlib.contextmanager
-def hidden_cursor(file: IO[str]) -> Generator[None, None, None]:
- # The Windows terminal does not support the hide/show cursor ANSI codes,
- # even via colorama. So don't even try.
- if WINDOWS:
- yield
- # We don't want to clutter the output with control characters if we're
- # writing to a file, or if the user is running with --quiet.
- # See https://github.com/pypa/pip/issues/3418
- elif not file.isatty() or logger.getEffectiveLevel() > logging.INFO:
- yield
- else:
- file.write(HIDE_CURSOR)
- try:
- yield
- finally:
- file.write(SHOW_CURSOR)
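-
-
-# Illustrative sketch (not part of the original module): how calling code
-# typically consumes open_spinner(); the loop below is stand-in work.
-if __name__ == "__main__":
-    with open_spinner("Doing a slow thing") as spinner:
-        for _ in range(20):
-            time.sleep(0.1)   # pretend to work ('time' is imported above)
-            spinner.spin()    # advance the spinner; rate-limited internally
-    # On a clean exit the context manager reports "done"; it reports
-    # "canceled" on KeyboardInterrupt and "error" on other exceptions.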
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_clib.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_clib.py
deleted file mode 100644
index 50bb9bbabb7ab00cd4763b524ab536e711e468a8..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/command/build_clib.py
+++ /dev/null
@@ -1,208 +0,0 @@
-"""distutils.command.build_clib
-
-Implements the Distutils 'build_clib' command, to build a C/C++ library
-that is included in the module distribution and needed by an extension
-module."""
-
-
-# XXX this module has *lots* of code ripped-off quite transparently from
-# build_ext.py -- not surprisingly really, as the work required to build
-# a static library from a collection of C source files is not really all
-# that different from what's required to build a shared object file from
-# a collection of C source files. Nevertheless, I haven't done the
-# necessary refactoring to account for the overlap in code between the
-# two modules, mainly because a number of subtle details changed in the
-# cut 'n paste. Sigh.
-
-import os
-from distutils.core import Command
-from distutils.errors import DistutilsSetupError
-from distutils.sysconfig import customize_compiler
-from distutils import log
-
-
-def show_compilers():
- from distutils.ccompiler import show_compilers
-
- show_compilers()
-
-
-class build_clib(Command):
-
- description = "build C/C++ libraries used by Python extensions"
-
- user_options = [
- ('build-clib=', 'b', "directory to build C/C++ libraries to"),
- ('build-temp=', 't', "directory to put temporary build by-products"),
- ('debug', 'g', "compile with debugging information"),
- ('force', 'f', "forcibly build everything (ignore file timestamps)"),
- ('compiler=', 'c', "specify the compiler type"),
- ]
-
- boolean_options = ['debug', 'force']
-
- help_options = [
- ('help-compiler', None, "list available compilers", show_compilers),
- ]
-
- def initialize_options(self):
- self.build_clib = None
- self.build_temp = None
-
- # List of libraries to build
- self.libraries = None
-
- # Compilation options for all libraries
- self.include_dirs = None
- self.define = None
- self.undef = None
- self.debug = None
- self.force = 0
- self.compiler = None
-
- def finalize_options(self):
- # This might be confusing: both build-clib and build-temp default
- # to build-temp as defined by the "build" command. This is because
- # I think that C libraries are really just temporary build
- # by-products, at least from the point of view of building Python
- # extensions -- but I want to keep my options open.
- self.set_undefined_options(
- 'build',
- ('build_temp', 'build_clib'),
- ('build_temp', 'build_temp'),
- ('compiler', 'compiler'),
- ('debug', 'debug'),
- ('force', 'force'),
- )
-
- self.libraries = self.distribution.libraries
- if self.libraries:
- self.check_library_list(self.libraries)
-
- if self.include_dirs is None:
- self.include_dirs = self.distribution.include_dirs or []
- if isinstance(self.include_dirs, str):
- self.include_dirs = self.include_dirs.split(os.pathsep)
-
- # XXX same as for build_ext -- what about 'self.define' and
- # 'self.undef' ?
-
- def run(self):
- if not self.libraries:
- return
-
- # Yech -- this is cut 'n pasted from build_ext.py!
- from distutils.ccompiler import new_compiler
-
- self.compiler = new_compiler(
- compiler=self.compiler, dry_run=self.dry_run, force=self.force
- )
- customize_compiler(self.compiler)
-
- if self.include_dirs is not None:
- self.compiler.set_include_dirs(self.include_dirs)
- if self.define is not None:
- # 'define' option is a list of (name,value) tuples
- for (name, value) in self.define:
- self.compiler.define_macro(name, value)
- if self.undef is not None:
- for macro in self.undef:
- self.compiler.undefine_macro(macro)
-
- self.build_libraries(self.libraries)
-
- def check_library_list(self, libraries):
- """Ensure that the list of libraries is valid.
-
- `libraries` is presumably provided as the 'libraries' command option.
- This method checks that it is a list of 2-tuples, where the tuples
- are (library_name, build_info_dict).
-
- Raise DistutilsSetupError if the structure is invalid anywhere;
- just returns otherwise.
- """
- if not isinstance(libraries, list):
- raise DistutilsSetupError("'libraries' option must be a list of tuples")
-
- for lib in libraries:
- if not isinstance(lib, tuple) or len(lib) != 2:
- raise DistutilsSetupError("each element of 'libraries' must be a 2-tuple")
-
- name, build_info = lib
-
- if not isinstance(name, str):
- raise DistutilsSetupError(
- "first element of each tuple in 'libraries' "
- "must be a string (the library name)"
- )
-
- if '/' in name or (os.sep != '/' and os.sep in name):
- raise DistutilsSetupError(
- "bad library name '%s': "
- "may not contain directory separators" % lib[0]
- )
-
- if not isinstance(build_info, dict):
- raise DistutilsSetupError(
- "second element of each tuple in 'libraries' "
- "must be a dictionary (build info)"
- )
-
- def get_library_names(self):
- # Assume the library list is valid -- 'check_library_list()' is
- # called from 'finalize_options()', so it should be!
- if not self.libraries:
- return None
-
- lib_names = []
- for (lib_name, build_info) in self.libraries:
- lib_names.append(lib_name)
- return lib_names
-
- def get_source_files(self):
- self.check_library_list(self.libraries)
- filenames = []
- for (lib_name, build_info) in self.libraries:
- sources = build_info.get('sources')
- if sources is None or not isinstance(sources, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'sources' must be present and must be "
- "a list of source filenames" % lib_name
- )
-
- filenames.extend(sources)
- return filenames
-
- def build_libraries(self, libraries):
- for (lib_name, build_info) in libraries:
- sources = build_info.get('sources')
- if sources is None or not isinstance(sources, (list, tuple)):
- raise DistutilsSetupError(
- "in 'libraries' option (library '%s'), "
- "'sources' must be present and must be "
- "a list of source filenames" % lib_name
- )
- sources = list(sources)
-
- log.info("building '%s' library", lib_name)
-
- # First, compile the source code to object files in the library
- # directory. (This should probably change to putting object
- # files in a temporary build directory.)
- macros = build_info.get('macros')
- include_dirs = build_info.get('include_dirs')
- objects = self.compiler.compile(
- sources,
- output_dir=self.build_temp,
- macros=macros,
- include_dirs=include_dirs,
- debug=self.debug,
- )
-
- # Now "link" the object files together into a static library.
- # (On Unix at least, this isn't really linking -- it just
- # builds an archive. Whatever.)
- self.compiler.create_static_lib(
- objects, lib_name, output_dir=self.build_clib, debug=self.debug
- )
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/extend.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/extend.md
deleted file mode 100644
index a6af550fdb2aa79c818cef54b009f2fe816d46a9..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/extend.md
+++ /dev/null
@@ -1,141 +0,0 @@
-# Extend Detectron2's Defaults
-
-__Research is about doing things in new ways__.
-This brings a tension in how to create abstractions in code,
-which is a challenge for any research engineering project of a significant size:
-
-1. On one hand, it needs to have very thin abstractions to allow for the possibility of doing
- everything in new ways. It should be reasonably easy to break existing
- abstractions and replace them with new ones.
-
-2. On the other hand, such a project also needs reasonably high-level
- abstractions, so that users can easily do things in standard ways,
- without worrying too much about the details that only certain researchers care about.
-
-In detectron2, there are two types of interfaces that address this tension together:
-
-1. Functions and classes that take a config (`cfg`) argument
- created from a yaml file
- (sometimes with few extra arguments).
-
- Such functions and classes implement
- the "standard default" behavior: it will read what it needs from a given
- config and do the "standard" thing.
- Users only need to load an expert-made config and pass it around, without having to worry about
- which arguments are used and what they all mean.
-
- See [Yacs Configs](configs.md) for a detailed tutorial.
-
-2. Functions and classes that have well-defined explicit arguments.
-
- Each of these is a small building block of the entire system.
- They require users' expertise to understand what each argument should be,
- and require more effort to stitch together to a larger system.
- But they can be stitched together in more flexible ways.
-
- When you need to implement something not supported by the "standard defaults"
- included in detectron2, these well-defined components can be reused.
-
- The [LazyConfig system](lazyconfigs.md) relies on such functions and classes.
-
-3. A few functions and classes are implemented with the
- [@configurable](../modules/config.html#detectron2.config.configurable)
- decorator - they can be called with either a config, or with explicit arguments, or a mixture of both.
- Their explicit argument interfaces are currently experimental.
-
- As an example, a Mask R-CNN model can be built in the following ways:
-
- 1. Config-only:
- ```python
- # load proper yaml config file, then
- model = build_model(cfg)
- ```
-
- 2. Mixture of config and additional argument overrides:
- ```python
- model = GeneralizedRCNN(
- cfg,
- roi_heads=StandardROIHeads(cfg, batch_size_per_image=666),
- pixel_std=[57.0, 57.0, 57.0])
- ```
-
- 3. Full explicit arguments:
-
-
- (click to expand)
-
-
- ```python
- model = GeneralizedRCNN(
- backbone=FPN(
- ResNet(
- BasicStem(3, 64, norm="FrozenBN"),
- ResNet.make_default_stages(50, stride_in_1x1=True, norm="FrozenBN"),
- out_features=["res2", "res3", "res4", "res5"],
- ).freeze(2),
- ["res2", "res3", "res4", "res5"],
- 256,
- top_block=LastLevelMaxPool(),
- ),
- proposal_generator=RPN(
- in_features=["p2", "p3", "p4", "p5", "p6"],
- head=StandardRPNHead(in_channels=256, num_anchors=3),
- anchor_generator=DefaultAnchorGenerator(
- sizes=[[32], [64], [128], [256], [512]],
- aspect_ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64],
- offset=0.0,
- ),
- anchor_matcher=Matcher([0.3, 0.7], [0, -1, 1], allow_low_quality_matches=True),
- box2box_transform=Box2BoxTransform([1.0, 1.0, 1.0, 1.0]),
- batch_size_per_image=256,
- positive_fraction=0.5,
- pre_nms_topk=(2000, 1000),
- post_nms_topk=(1000, 1000),
- nms_thresh=0.7,
- ),
- roi_heads=StandardROIHeads(
- num_classes=80,
- batch_size_per_image=512,
- positive_fraction=0.25,
- proposal_matcher=Matcher([0.5], [0, 1], allow_low_quality_matches=False),
- box_in_features=["p2", "p3", "p4", "p5"],
- box_pooler=ROIPooler(7, (1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), 0, "ROIAlignV2"),
- box_head=FastRCNNConvFCHead(
- ShapeSpec(channels=256, height=7, width=7), conv_dims=[], fc_dims=[1024, 1024]
- ),
- box_predictor=FastRCNNOutputLayers(
- ShapeSpec(channels=1024),
- test_score_thresh=0.05,
- box2box_transform=Box2BoxTransform((10, 10, 5, 5)),
- num_classes=80,
- ),
- mask_in_features=["p2", "p3", "p4", "p5"],
- mask_pooler=ROIPooler(14, (1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), 0, "ROIAlignV2"),
- mask_head=MaskRCNNConvUpsampleHead(
- ShapeSpec(channels=256, width=14, height=14),
- num_classes=80,
- conv_dims=[256, 256, 256, 256, 256],
- ),
- ),
- pixel_mean=[103.530, 116.280, 123.675],
- pixel_std=[1.0, 1.0, 1.0],
- input_format="BGR",
- )
- ```
-
-
-
-
-If you only need the standard behavior, the [Beginner's Tutorial](./getting_started.md)
-should suffice. If you need to extend detectron2 to your own needs,
-see the following tutorials for more details:
-
-* Detectron2 includes a few standard datasets. To use custom ones, see
- [Use Custom Datasets](./datasets.md).
-* Detectron2 contains the standard logic that creates a data loader for training/testing from a
- dataset, but you can write your own as well. See [Use Custom Data Loaders](./data_loading.md).
-* Detectron2 implements many standard detection models, and provide ways for you
- to overwrite their behaviors. See [Use Models](./models.md) and [Write Models](./write-models.md).
-* Detectron2 provides a default training loop that is good for common training tasks.
- You can customize it with hooks, or write your own loop instead. See [training](./training.md).
diff --git a/spaces/Benson/text-generation/Examples/Agua Clasificacin Rompecabezas Mod Apk Descargar.md b/spaces/Benson/text-generation/Examples/Agua Clasificacin Rompecabezas Mod Apk Descargar.md
deleted file mode 100644
index ca15ee3eb5a8986b1d713b94cbcc8aca5a16db6e..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Agua Clasificacin Rompecabezas Mod Apk Descargar.md
+++ /dev/null
@@ -1,48 +0,0 @@
-
-
-Water Sort Jigsaw Mod APK Download: A Fun and Relaxing Puzzle Game
-
-Do you love puzzle games that challenge your brain and calm your nerves? If so, you should try Water Sort Jigsaw, a unique and addictive game that combines water sorting and jigsaw puzzles. In this game, you have to sort different colors of water into separate tubes and complete beautiful pictures with the sorted water. Sounds easy, right? Well, not so fast. You have to be careful not to mix the colors or overflow the tubes, or you will have to start over. Water Sort Jigsaw is a game that will test your logic, patience, and creativity.
-
-What is Water Sort Jigsaw?
-
-Water Sort Jigsaw is a puzzle game developed by IEC Global Pty Ltd, a company that specializes in casual and educational games for all ages. The game was released in 2020 and has been downloaded more than 10 million times from the Google Play Store. It has a rating of 4.4 out of 5 stars, with thousands of positive reviews from satisfied players.
-The gameplay of Water Sort Jigsaw is simple and intuitive. You have a set of tubes filled with different colors of water. Your goal is to sort the water by color into separate tubes. You can only pour water from one tube into another if the colors match or if the tube is empty. You can also use empty tubes as temporary storage. You have to sort all the water in the tubes to complete the level.
-
-As you progress through the levels, you will also unlock different jigsaw puzzles that you can complete with the sorted water. The puzzles are based on various themes, such as animals, nature, food, art, and more. You can choose the difficulty level of the puzzles, from easy to hard. The puzzles are a great way to relax and enjoy the game's colorful graphics.
-
-Why download Water Sort Jigsaw mod apk?
-
-
-Features of Water Sort Jigsaw mod apk
-
-Unlimited levels and puzzles
-
-One of the best features of the Water Sort Jigsaw mod apk is that it gives you unlimited access to all the levels and puzzles in the game. You do not have to wait for new updates or pay for premium content. You can play as much as you want and enjoy endless hours of fun and entertainment.
-
-Colorful graphics and relaxing sounds
-
-Another feature of the Water Sort Jigsaw mod apk is that it improves the game's graphics and sounds. The mod apk makes the colors more vibrant and realistic, making the game more attractive and engaging. It also improves the sound quality and adds more relaxing music and sound effects to the game. The game becomes more immersive and soothing with the mod apk.
-
-No ads or internet required
-
-A third feature of the Water Sort Jigsaw mod apk is that it removes all the annoying ads and pop-ups that interrupt your gameplay. You do not have to watch ads to unlock levels or earn rewards. You can play without distractions or interruptions. The mod apk also lets you play offline, without needing an internet connection. You can play the game anytime and anywhere you want.
-
-Easy to install and use
-
-A fourth feature of the Water Sort Jigsaw mod apk is that it is very easy to install and use. You do not need to root your device or go through complicated steps to get the mod apk. You just have to download the mod apk file from a trusted source and follow the simple instructions below. The mod apk is compatible with most Android devices and runs smoothly without errors or glitches.
-
-
-How to download and install Water Sort Jigsaw mod apk?
-
-If you are interested in downloading and installing the Water Sort Jigsaw mod apk, you can follow these simple steps:
-
-Step 1: Download the mod apk file from a trusted source
-Step 2: Enable unknown sources on your device
-
-The second step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy options. Then, find the option that says unknown sources or allow installation from unknown sources and turn it on.
-
-Step 3: Install the mod apk file and enjoy the game
-
-The third and final step is to install the mod apk file and enjoy the game. To do this, locate the downloaded file on your device and tap on it. Then, follow the on-screen instructions to complete the installation process. Once it is done, you can launch the game and start playing with unlimited features and benefits.
-
-Conclusion
-
-Water Sort Jigsaw is a fun and relaxing puzzle game that will keep you entertained for hours. It is a great way to exercise your brain and relieve stress. If you want to enjoy the game with more features and benefits, you should download the Water Sort Jigsaw mod apk. The mod apk gives you unlimited access to all levels and puzzles, improves the graphics and sounds, removes ads and internet requirements, and is easy to install and use. You can download the Water Sort Jigsaw mod apk from the link below and start sorting water and completing puzzles.
-
-Frequently asked questions
-
-Here are some frequently asked questions about the Water Sort Jigsaw mod apk:
-
-
-Is Water Sort Jigsaw mod apk safe to download?
-
-Yes, the Water Sort Jigsaw mod apk is safe to download, as long as you get it from a trusted source. The mod apk file is virus-free and does not contain any malicious code or malware.
-
-Does Water Sort Jigsaw mod apk require root access?
-
-No, as noted above, the Water Sort Jigsaw mod apk does not require you to root your device in order to install it.
-
-Can I update Water Sort Jigsaw mod apk?
-
-No, the Water Sort Jigsaw mod apk is not compatible with updates from the official version. If you want to update the game, you have to uninstall the mod apk and install the latest version from the Google Play Store.
-
-Can I play Water Sort Jigsaw with my friends?
-
-No, Water Sort Jigsaw does not have a multiplayer mode or a social feature. You can only play the game solo and offline.
-
-Can I customize the game settings?
-
-Yes, Water Sort Jigsaw lets you customize some of the game settings, such as sound, music, vibration, language, and difficulty level. You can access these settings from the game's main menu.
-
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Bmw Drift Apk.md b/spaces/Benson/text-generation/Examples/Bmw Drift Apk.md
deleted file mode 100644
index 3cca4c503830f09428823f12674d28770b9f47e7..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bmw Drift Apk.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-BMW Drift APK: A Fun and Realistic Drifting Game for Android
-
-If you are a fan of drifting and BMW cars, you will love BMW Drift APK, a game that lets you experience the thrill of sliding sideways in various models from the German carmaker. In this article, we will tell you what BMW Drift APK is, how to download and install it, how to play it, and some tips and tricks to improve your drifting skills.
-
-What is BMW Drift APK?
-
-BMW Drift APK is a game that simulates the driving technique of drifting, where the driver intentionally oversteers and loses traction while maintaining control and direction. The game lets you choose from different BMW models, such as the M3, M5, Z4, X6, and more, and drift on various tracks, such as city streets, highways, mountain roads, and racing circuits.
-BMW Drift APK has many features that make it a fun and realistic drifting game for Android devices. Some of these features are:
-
-Realistic physics and graphics
-
-The game uses advanced physics and graphics engines to create a realistic driving experience. You can see the tire smoke, the sparks from your bumper, the damage to your car, and the reflections in the windows. You can also feel the weight transfer, inertia, grip, and feedback of your car as you drift.
-
-Customizable cars and settings
-
-The game lets you customize the appearance and performance of your car. You can change the color, wheels, spoilers, exhausts, and more. You can also tune your car's engine, suspension, brakes, tires, differential, and steering to suit your driving style. You can also adjust the game settings, such as the camera angle, sound effects, music volume, and difficulty level.
-
-Multiple game modes and challenges
-
-
-How to download and install BMW Drift APK?
-
-If you want to download and install BMW Drift APK on your Android device, you need to follow these steps:
-
-Download the APK file from a trusted source
-
-The first step is to download the BMW Drift APK file from a trusted source. You can use this link to download it safely. The file size is about 50 MB.
-
-Enable unknown sources on your device
-
-The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Security > Unknown sources and turn it on.
-
-Install the APK file and launch the game
-
-The final step is to install the APK file and launch the game. To do this, locate the downloaded file in your file manager and tap on it. Follow the on-screen instructions to install the game. Once the installation is finished, you can open the game and enjoy drifting.
-
-How to play BMW Drift APK?
-
-Playing BMW Drift APK is easy and fun. Here are the basic steps to play the game:
-
-
-Choose your car and track
-
-The first thing you have to do is choose your car and track. You can select from a variety of BMW models, such as the M3, M5, Z4, X6, and more. You can also choose from different tracks, such as city streets, highways, mountain roads, and racing circuits. You can also customize your car's appearance and performance before you start drifting.
-
-Use the controls to steer, accelerate, brake, and drift
-
-The next thing you have to do is use the controls to steer, accelerate, brake, and drift. You can use the on-screen buttons or your device's tilt sensor to control your car. You can also use the handbrake button to start a drift. The game will show you a drift indicator that tells you how well you are drifting. The more you drift, the more points and rewards you earn.
-
-Earn points and rewards for your drifting skills
-The last thing you have to do is earn points and rewards for your drifting skills. The game gives you points based on the angle, speed, duration, and distance of your drifts. You can also earn extra points by performing combos, such as chaining multiple drifts together or drifting close to obstacles. You can use the points and rewards to unlock new cars and tracks, or upgrade the existing ones.
-
-Tips and tricks for BMW Drift APK
-
-If you want to improve your drifting skills and enjoy the game more, here are some tips and tricks for BMW Drift APK:
-
-Learn the basics of drifting techniques
-
-The first tip is to learn the basics of drifting techniques. Drifting is not just about sliding sideways, but also about controlling the balance and direction of your car. There are different types of drifts, such as power drifts, brake drifts, clutch-kick drifts, handbrake drifts, and more. You can learn more about these techniques in tutorials or online videos.
-
-Practice on different tracks and cars
-
-The second tip is to practice on different tracks and cars. Each track and car has its own characteristics and challenges. Some tracks may have tight corners, narrow lanes, or slippery surfaces. Some cars may have more power, grip, or weight than others. By practicing on different tracks and cars, you will learn how to adapt to different situations and improve your drifting skills.
-
-Adjust the settings to suit your preferences and device performance
-
-The third tip is to adjust the settings to suit your preferences and device performance. You can change the game settings, such as the camera angle, sound effects, music volume, and difficulty level. You can also adjust your car's settings, such as the engine, suspension, brakes, tires, differential, and steering. By tweaking the settings, you can make the game more enjoyable and comfortable for you.
-
-Conclusion
-
-
-Frequently asked questions
-
-Here are some frequently asked questions about BMW Drift APK:
-
-
-| Question | Answer |
-| --- | --- |
-| Is BMW Drift APK free? | Yes, BMW Drift APK is free to download and play. |
-| Is BMW Drift APK safe? | Yes, BMW Drift APK is safe if you download it from a trusted source such as this link. However, you should always be careful when installing apps from unknown sources. |
-| Is BMW Drift APK compatible with my device? | BMW Drift APK is compatible with most Android devices running Android 4.1 or higher. However, some devices may have performance or compatibility issues depending on their specifications and settings. |
-| Can I play BMW Drift APK offline? | Yes, you can play BMW Drift APK offline in free mode and career mode. However, you will need an internet connection to play online mode and access some features, such as leaderboards and updates. |
-| Can I play BMW Drift APK with a controller? | Yes, you can play BMW Drift APK with a controller if your device supports it. You can connect your controller via Bluetooth or USB and map the buttons in the game settings. |
-
-
-We hope this article has helped you learn more about BMW Drift APK and how to enjoy it. If you have any questions or comments, please leave a comment below. Happy drifting!
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar El Juego Mod Township Offline.md b/spaces/Benson/text-generation/Examples/Descargar El Juego Mod Township Offline.md
deleted file mode 100644
index 5a4b4ae222431486bf3bcffae34501336345cdff..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar El Juego Mod Township Offline.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
How to download the Township mod game offline for free
-
If you are looking for a fun and relaxing game that combines city building and farming, you should try Township. Township is a popular mobile game that lets you create your dream town, harvest crops, trade with other countries, run a zoo, and more. But what if you want to play Township without an internet connection? Or what if you want unlimited resources, coins, and cash in the game? In this article, we will show you how to download the Township mod game offline for free. We will also explain what a game mod is, how it can enhance your gaming experience, and what the benefits and risks of downloading the Township mod game offline are.
Township is a unique blend of city building and farming developed by Playrix. It is available on Android, iOS, Windows, Xbox One, PlayStation 4, and Nintendo Switch. In Township, you can build your dream town from scratch, using various buildings and decorations that you can customize to your liking. You can also grow and process crops on your farms and in your factories, sell goods to develop your town, trade with exotic islands, open restaurants, cinemas, and other community buildings, explore the mine for resources and artifacts, run your own zoo with animals from around the world, and more. Township is a game that offers endless possibilities for creativity and fun.
-
Township has plenty of features and activities to enjoy. You can play with your Facebook and Google+ friends, make new friends in the game community, create your own clans, take part in seasonal events and competitions, complete quests and orders from your townspeople, collect country flags and famous landmarks for your town, watch funny animations of your characters, and much more. Township is a game that never gets boring.
-
-
What is a game mod and how can it enhance your gaming experience?
-
A game mod is a modification or alteration of the original game that changes some aspects of it. A game mod can be created by anyone with the skills and tools to do so. Game mods can be downloaded from various websites or platforms that host them. You can install a game mod on your device by following a few instructions or using certain software.
-
-
A game mod can add new content, features, or gameplay elements to the game. For example, it can introduce new characters, items, maps, quests, modes, or genres, turning a strategy game into a role-playing game, or a racing game into a zombie-survival game. A game mod can also improve the game's graphics, sound, or interface, for instance by enhancing the resolution, textures, lighting, or effects, or by adding new music, voice acting, or subtitles. A game mod can also fix bugs, improve performance, or customize the game to your preferences, for example by removing glitches, errors, or crashes, or by increasing the game's speed, stability, or compatibility. Finally, a game mod can change the game's difficulty, balance, or mechanics, making it easier or harder, more realistic or more fantastical, more fun or more challenging.
-
A game mod can enhance your gaming experience by giving you more options, variety, and enjoyment. It can make the game more interesting, exciting, or immersive, extend its lifespan by adding new content or replay value, and satisfy your curiosity or creativity by letting you explore new possibilities or create your own scenarios in the game.
-
How to download the Township mod game offline for free
-
-
Find a reliable and safe source for the game mod
-
There are many websites and platforms that offer game mods for Township and other games. However, not all of them are reliable or safe. Some may contain fake, outdated, or corrupted files that do not work properly or that can harm your device. Some may also carry malicious ads, pop-ups, or links that redirect you to unwanted or dangerous sites. Therefore, you need to be careful and selective when choosing a source for the game mod.
-
One way to find a reliable and safe source is to do some research and read reviews from other users who have downloaded and used the game mod. You can also check the ratings, comments, feedback, or testimonials from other users, and look for recommendations from reputable sites, blogs, forums, or communities related to Township or gaming in general.
-
Another way is to use tools or software that can scan and verify the files before you download them. Antivirus programs, malware detectors, file checkers, or download managers can help you detect and remove viruses, malware, spyware, adware, trojans, worms, or other threats from the files. You can also use tools that compare the files against the original game files to make sure they are compatible and authentic.
-
Download the game mod file and install it on your device
-
-
To download the game mod file, follow the link or button provided by the source and save the file to your device. You may need to grant some permissions to your device or browser, or temporarily disable certain security settings or features, such as enabling unknown sources or pausing antivirus programs, in order to download the file.
-
To install the game mod file, locate and open the file on your device. You may need to extract or unzip it first if it is a compressed archive, uninstall the original game if it is already installed, and back up your game progress or data if you want to keep them. Then follow the instructions provided by the source or the file itself to install the game mod on your device. You may need to grant some permissions to your device or the app, and restart your device or the app after installation.
-
Launch the game mod and enjoy playing Township offline
-
After installing the game mod file, you can launch the game mod and enjoy playing Township offline. Find and open the game mod icon or app on your device. You may notice some changes in the game's logo, title, interface, or content compared to the original game, and you may see notifications or messages from the source or the file itself about the game mod's features or settings. You can adjust or customize these to your preferences.
-
-
Benefits and risks of downloading the Township mod game offline
-
Downloading the Township mod game offline has its benefits and risks. Here are some of them:
-
Benefits of downloading the Township mod game offline
-
-
-
Benefit
-
Description
-
-
-
You can play Township without an internet connection
-
You don't need to worry about having a stable or fast internet connection to play Township. You can play it anytime and anywhere you want, even when you are offline, and you can save data usage or battery life by doing so.
-
-
-
You can access unlimited resources, coins, and cash in the game
-
You don't need to wait for your resources to grow or replenish, and you don't need to spend real money to buy coins or cash. You get unlimited resources, coins, and cash that you can use to build, upgrade, or expand your town, farm, zoo, and more.
-
-
-
You can unlock all the buildings, decorations, and animals in the game
-
You don't need to level up or complete certain tasks to unlock everything. You get access to all the items and options in the game, which you can use to customize and beautify your town, farm, zoo, and more.
-
-
-
Risks of downloading the Township mod game offline
-
-
-
Risk
-
Description
-
-
-
You may run into compatibility issues or bugs in the game mod
-
The game mod may not run properly or smoothly on your device or app. It may not be compatible with your device model, operating system, app version, or other factors, and it may have bugs, glitches, or errors that affect your gameplay or performance.
-
-
-
You may violate the game developer's terms of service or privacy policy
-
-
-
-
You may expose your device to malware or viruses from the game mod file
-
The game mod file may contain malicious code or software that can harm your device or app, as well as hidden ads, pop-ups, or links that redirect you to unwanted or dangerous sites. By downloading and installing the game mod file, you may expose your device to malware or viruses.
-
-
-
Conclusion and frequently asked questions
-
In conclusion, downloading the Township mod game offline is a way to enjoy playing Township without an internet connection and with unlimited resources, coins, cash, and items in the game. However, it also carries risks such as compatibility issues, terms-of-service violations, and exposure to malware. Therefore, you need to be careful and responsible when downloading and using the Township mod game offline: find a reliable and safe source for the game mod, download and install the game mod file correctly, and launch and play the game mod with caution. You should also respect the rights and interests of the game developer and other players, and be aware of the possible consequences of downloading and using the Township mod game offline. Here are some frequently asked questions that may help you learn more:
Q: Can I play Township online with the game mod?
-
A: No, you cannot play Township online with the game mod. The game mod is designed to work offline only. If you try to play Township online with it, you may run into errors or problems, and you may risk being detected or reported by the game developer or other players.
-
Q: Can I update Township with the game mod?
-
-
Q: Can I restore my original Township game after using the game mod?
-
A: Yes, you can restore your original Township game after using the game mod. Uninstall or delete the game mod file from your device, then reinstall or download the original Township game from the official source. You may also need to restore your original game progress or data from your backup or cloud storage.
-
Q: Can I use other game mods for Township?
-
A: Yes, you can use other game mods for Township. There are many different kinds of game mods for Township that offer different features or functions. However, you should be careful and selective when choosing and using them, and make sure they are reliable, safe, compatible, and up to date.
-
Q: Can I create my own game mod for Township?
-
A: Yes, you can create your own game mod for Township if you have the skills and tools to do so. You need some knowledge and experience in programming, coding, or game modding, plus tools or software to help you create, edit, test, or distribute your mod. However, you should be respectful and ethical when doing so: follow the rules set by the game developer and the gaming community, and give credit and acknowledgment to the original sources or creators you build on.
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/utils/sha256.ts b/spaces/BetterAPI/BetterChat_new/src/lib/utils/sha256.ts
deleted file mode 100644
index 43059b518fc5a4da6ed08ab36aeb6c289007f6aa..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/utils/sha256.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-export async function sha256(input: string): Promise<string> {
- const utf8 = new TextEncoder().encode(input);
- const hashBuffer = await crypto.subtle.digest("SHA-256", utf8);
- const hashArray = Array.from(new Uint8Array(hashBuffer));
- const hashHex = hashArray.map((bytes) => bytes.toString(16).padStart(2, "0")).join("");
- return hashHex;
-}
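For reference, the deleted helper above hex-encodes a SHA-256 digest of a UTF-8 string via the Web Crypto API. A minimal Python sketch of the same computation using the standard hashlib module (illustration only, not part of the repository; the function name `sha256_hex` is just a placeholder):

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Return the hex-encoded SHA-256 digest of a UTF-8 string,
    matching what the deleted TypeScript helper computes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    # The well-known digest of the empty string starts with "e3b0c442...".
    print(sha256_hex(""))
    print(sha256_hex("hello"))
```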
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/attr.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/attr.py
deleted file mode 100644
index a968da2901d8b52373cb0732186e499a83767884..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/attr.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-# https://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from botocore.docs.params import ResponseParamsDocumenter
-
-from boto3.docs.utils import get_identifier_description
-
-
-class ResourceShapeDocumenter(ResponseParamsDocumenter):
- EVENT_NAME = 'resource-shape'
-
-
-def document_attribute(
- section,
- service_name,
- resource_name,
- attr_name,
- event_emitter,
- attr_model,
- include_signature=True,
-):
- if include_signature:
- full_attr_name = f"{section.context.get('qualifier', '')}{attr_name}"
- section.style.start_sphinx_py_attr(full_attr_name)
- # Note that an attribute may have one, may have many, or may have no
- # operations that back the resource's shape. So we just set the
- # operation_name to the resource name if we ever to hook in and modify
- # a particular attribute.
- ResourceShapeDocumenter(
- service_name=service_name,
- operation_name=resource_name,
- event_emitter=event_emitter,
- ).document_params(section=section, shape=attr_model)
-
-
-def document_identifier(
- section,
- resource_name,
- identifier_model,
- include_signature=True,
-):
- if include_signature:
- full_identifier_name = (
- f"{section.context.get('qualifier', '')}{identifier_model.name}"
- )
- section.style.start_sphinx_py_attr(full_identifier_name)
- description = get_identifier_description(
- resource_name, identifier_model.name
- )
- section.write(f'*(string)* {description}')
-
-
-def document_reference(section, reference_model, include_signature=True):
- if include_signature:
- full_reference_name = (
- f"{section.context.get('qualifier', '')}{reference_model.name}"
- )
- section.style.start_sphinx_py_attr(full_reference_name)
- reference_type = f'(:py:class:`{reference_model.resource.type}`) '
- section.write(reference_type)
- section.include_doc_string(
- f'The related {reference_model.name} if set, otherwise ``None``.'
- )
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/candidates.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/candidates.py
deleted file mode 100644
index 31020e27ad1a6ea9f350cdf50a141dc073094b57..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/resolution/resolvelib/candidates.py
+++ /dev/null
@@ -1,552 +0,0 @@
-import logging
-import sys
-from typing import TYPE_CHECKING, Any, FrozenSet, Iterable, Optional, Tuple, Union, cast
-
-from pip._vendor.packaging.utils import NormalizedName, canonicalize_name
-from pip._vendor.packaging.version import Version
-
-from pip._internal.exceptions import (
- HashError,
- InstallationSubprocessError,
- MetadataInconsistent,
-)
-from pip._internal.metadata import BaseDistribution
-from pip._internal.models.link import Link, links_equivalent
-from pip._internal.models.wheel import Wheel
-from pip._internal.req.constructors import (
- install_req_from_editable,
- install_req_from_line,
-)
-from pip._internal.req.req_install import InstallRequirement
-from pip._internal.utils.direct_url_helpers import direct_url_from_link
-from pip._internal.utils.misc import normalize_version_info
-
-from .base import Candidate, CandidateVersion, Requirement, format_name
-
-if TYPE_CHECKING:
- from .factory import Factory
-
-logger = logging.getLogger(__name__)
-
-BaseCandidate = Union[
- "AlreadyInstalledCandidate",
- "EditableCandidate",
- "LinkCandidate",
-]
-
-# Avoid conflicting with the PyPI package "Python".
-REQUIRES_PYTHON_IDENTIFIER = cast(NormalizedName, "")
-
-
-def as_base_candidate(candidate: Candidate) -> Optional[BaseCandidate]:
- """The runtime version of BaseCandidate."""
- base_candidate_classes = (
- AlreadyInstalledCandidate,
- EditableCandidate,
- LinkCandidate,
- )
- if isinstance(candidate, base_candidate_classes):
- return candidate
- return None
-
-
-def make_install_req_from_link(
- link: Link, template: InstallRequirement
-) -> InstallRequirement:
- assert not template.editable, "template is editable"
- if template.req:
- line = str(template.req)
- else:
- line = link.url
- ireq = install_req_from_line(
- line,
- user_supplied=template.user_supplied,
- comes_from=template.comes_from,
- use_pep517=template.use_pep517,
- isolated=template.isolated,
- constraint=template.constraint,
- global_options=template.global_options,
- hash_options=template.hash_options,
- config_settings=template.config_settings,
- )
- ireq.original_link = template.original_link
- ireq.link = link
- ireq.extras = template.extras
- return ireq
-
-
-def make_install_req_from_editable(
- link: Link, template: InstallRequirement
-) -> InstallRequirement:
- assert template.editable, "template not editable"
- ireq = install_req_from_editable(
- link.url,
- user_supplied=template.user_supplied,
- comes_from=template.comes_from,
- use_pep517=template.use_pep517,
- isolated=template.isolated,
- constraint=template.constraint,
- permit_editable_wheels=template.permit_editable_wheels,
- global_options=template.global_options,
- hash_options=template.hash_options,
- config_settings=template.config_settings,
- )
- ireq.extras = template.extras
- return ireq
-
-
-def _make_install_req_from_dist(
- dist: BaseDistribution, template: InstallRequirement
-) -> InstallRequirement:
- if template.req:
- line = str(template.req)
- elif template.link:
- line = f"{dist.canonical_name} @ {template.link.url}"
- else:
- line = f"{dist.canonical_name}=={dist.version}"
- ireq = install_req_from_line(
- line,
- user_supplied=template.user_supplied,
- comes_from=template.comes_from,
- use_pep517=template.use_pep517,
- isolated=template.isolated,
- constraint=template.constraint,
- global_options=template.global_options,
- hash_options=template.hash_options,
- config_settings=template.config_settings,
- )
- ireq.satisfied_by = dist
- return ireq
-
-
-class _InstallRequirementBackedCandidate(Candidate):
- """A candidate backed by an ``InstallRequirement``.
-
- This represents a package request with the target not being already
- in the environment, and needs to be fetched and installed. The backing
- ``InstallRequirement`` is responsible for most of the leg work; this
- class exposes appropriate information to the resolver.
-
- :param link: The link passed to the ``InstallRequirement``. The backing
- ``InstallRequirement`` will use this link to fetch the distribution.
- :param source_link: The link this candidate "originates" from. This is
- different from ``link`` when the link is found in the wheel cache.
- ``link`` would point to the wheel cache, while this points to the
- found remote link (e.g. from pypi.org).
- """
-
- dist: BaseDistribution
- is_installed = False
-
- def __init__(
- self,
- link: Link,
- source_link: Link,
- ireq: InstallRequirement,
- factory: "Factory",
- name: Optional[NormalizedName] = None,
- version: Optional[CandidateVersion] = None,
- ) -> None:
- self._link = link
- self._source_link = source_link
- self._factory = factory
- self._ireq = ireq
- self._name = name
- self._version = version
- self.dist = self._prepare()
-
- def __str__(self) -> str:
- return f"{self.name} {self.version}"
-
- def __repr__(self) -> str:
- return "{class_name}({link!r})".format(
- class_name=self.__class__.__name__,
- link=str(self._link),
- )
-
- def __hash__(self) -> int:
- return hash((self.__class__, self._link))
-
- def __eq__(self, other: Any) -> bool:
- if isinstance(other, self.__class__):
- return links_equivalent(self._link, other._link)
- return False
-
- @property
- def source_link(self) -> Optional[Link]:
- return self._source_link
-
- @property
- def project_name(self) -> NormalizedName:
- """The normalised name of the project the candidate refers to"""
- if self._name is None:
- self._name = self.dist.canonical_name
- return self._name
-
- @property
- def name(self) -> str:
- return self.project_name
-
- @property
- def version(self) -> CandidateVersion:
- if self._version is None:
- self._version = self.dist.version
- return self._version
-
- def format_for_error(self) -> str:
- return "{} {} (from {})".format(
- self.name,
- self.version,
- self._link.file_path if self._link.is_file else self._link,
- )
-
- def _prepare_distribution(self) -> BaseDistribution:
- raise NotImplementedError("Override in subclass")
-
- def _check_metadata_consistency(self, dist: BaseDistribution) -> None:
- """Check for consistency of project name and version of dist."""
- if self._name is not None and self._name != dist.canonical_name:
- raise MetadataInconsistent(
- self._ireq,
- "name",
- self._name,
- dist.canonical_name,
- )
- if self._version is not None and self._version != dist.version:
- raise MetadataInconsistent(
- self._ireq,
- "version",
- str(self._version),
- str(dist.version),
- )
-
- def _prepare(self) -> BaseDistribution:
- try:
- dist = self._prepare_distribution()
- except HashError as e:
- # Provide HashError the underlying ireq that caused it. This
- # provides context for the resulting error message to show the
- # offending line to the user.
- e.req = self._ireq
- raise
- except InstallationSubprocessError as exc:
- # The output has been presented already, so don't duplicate it.
- exc.context = "See above for output."
- raise
-
- self._check_metadata_consistency(dist)
- return dist
-
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
- requires = self.dist.iter_dependencies() if with_requires else ()
- for r in requires:
- yield self._factory.make_requirement_from_spec(str(r), self._ireq)
- yield self._factory.make_requires_python_requirement(self.dist.requires_python)
-
- def get_install_requirement(self) -> Optional[InstallRequirement]:
- return self._ireq
-
-
-class LinkCandidate(_InstallRequirementBackedCandidate):
- is_editable = False
-
- def __init__(
- self,
- link: Link,
- template: InstallRequirement,
- factory: "Factory",
- name: Optional[NormalizedName] = None,
- version: Optional[CandidateVersion] = None,
- ) -> None:
- source_link = link
- cache_entry = factory.get_wheel_cache_entry(source_link, name)
- if cache_entry is not None:
- logger.debug("Using cached wheel link: %s", cache_entry.link)
- link = cache_entry.link
- ireq = make_install_req_from_link(link, template)
- assert ireq.link == link
- if ireq.link.is_wheel and not ireq.link.is_file:
- wheel = Wheel(ireq.link.filename)
- wheel_name = canonicalize_name(wheel.name)
- assert name == wheel_name, f"{name!r} != {wheel_name!r} for wheel"
- # Version may not be present for PEP 508 direct URLs
- if version is not None:
- wheel_version = Version(wheel.version)
- assert version == wheel_version, "{!r} != {!r} for wheel {}".format(
- version, wheel_version, name
- )
-
- if cache_entry is not None:
- assert ireq.link.is_wheel
- assert ireq.link.is_file
- if cache_entry.persistent and template.link is template.original_link:
- ireq.cached_wheel_source_link = source_link
- if cache_entry.origin is not None:
- ireq.download_info = cache_entry.origin
- else:
- # Legacy cache entry that does not have origin.json.
- # download_info may miss the archive_info.hashes field.
- ireq.download_info = direct_url_from_link(
- source_link, link_is_in_wheel_cache=cache_entry.persistent
- )
-
- super().__init__(
- link=link,
- source_link=source_link,
- ireq=ireq,
- factory=factory,
- name=name,
- version=version,
- )
-
- def _prepare_distribution(self) -> BaseDistribution:
- preparer = self._factory.preparer
- return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
-
-
-class EditableCandidate(_InstallRequirementBackedCandidate):
- is_editable = True
-
- def __init__(
- self,
- link: Link,
- template: InstallRequirement,
- factory: "Factory",
- name: Optional[NormalizedName] = None,
- version: Optional[CandidateVersion] = None,
- ) -> None:
- super().__init__(
- link=link,
- source_link=link,
- ireq=make_install_req_from_editable(link, template),
- factory=factory,
- name=name,
- version=version,
- )
-
- def _prepare_distribution(self) -> BaseDistribution:
- return self._factory.preparer.prepare_editable_requirement(self._ireq)
-
-
-class AlreadyInstalledCandidate(Candidate):
- is_installed = True
- source_link = None
-
- def __init__(
- self,
- dist: BaseDistribution,
- template: InstallRequirement,
- factory: "Factory",
- ) -> None:
- self.dist = dist
- self._ireq = _make_install_req_from_dist(dist, template)
- self._factory = factory
-
- # This is just logging some messages, so we can do it eagerly.
- # The returned dist would be exactly the same as self.dist because we
- # set satisfied_by in _make_install_req_from_dist.
- # TODO: Supply reason based on force_reinstall and upgrade_strategy.
- skip_reason = "already satisfied"
- factory.preparer.prepare_installed_requirement(self._ireq, skip_reason)
-
- def __str__(self) -> str:
- return str(self.dist)
-
- def __repr__(self) -> str:
- return "{class_name}({distribution!r})".format(
- class_name=self.__class__.__name__,
- distribution=self.dist,
- )
-
- def __hash__(self) -> int:
- return hash((self.__class__, self.name, self.version))
-
- def __eq__(self, other: Any) -> bool:
- if isinstance(other, self.__class__):
- return self.name == other.name and self.version == other.version
- return False
-
- @property
- def project_name(self) -> NormalizedName:
- return self.dist.canonical_name
-
- @property
- def name(self) -> str:
- return self.project_name
-
- @property
- def version(self) -> CandidateVersion:
- return self.dist.version
-
- @property
- def is_editable(self) -> bool:
- return self.dist.editable
-
- def format_for_error(self) -> str:
- return f"{self.name} {self.version} (Installed)"
-
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
- if not with_requires:
- return
- for r in self.dist.iter_dependencies():
- yield self._factory.make_requirement_from_spec(str(r), self._ireq)
-
- def get_install_requirement(self) -> Optional[InstallRequirement]:
- return None
-
-
-class ExtrasCandidate(Candidate):
- """A candidate that has 'extras', indicating additional dependencies.
-
- Requirements can be for a project with dependencies, something like
- foo[extra]. The extras don't affect the project/version being installed
- directly, but indicate that we need additional dependencies. We model that
- by having an artificial ExtrasCandidate that wraps the "base" candidate.
-
- The ExtrasCandidate differs from the base in the following ways:
-
- 1. It has a unique name, of the form foo[extra]. This causes the resolver
- to treat it as a separate node in the dependency graph.
- 2. When we're getting the candidate's dependencies,
- a) We specify that we want the extra dependencies as well.
- b) We add a dependency on the base candidate.
- See below for why this is needed.
- 3. We return None for the underlying InstallRequirement, as the base
- candidate will provide it, and we don't want to end up with duplicates.
-
- The dependency on the base candidate is needed so that the resolver can't
- decide that it should recommend foo[extra1] version 1.0 and foo[extra2]
- version 2.0. Having those candidates depend on foo=1.0 and foo=2.0
- respectively forces the resolver to recognise that this is a conflict.
- """
-
- def __init__(
- self,
- base: BaseCandidate,
- extras: FrozenSet[str],
- ) -> None:
- self.base = base
- self.extras = extras
-
- def __str__(self) -> str:
- name, rest = str(self.base).split(" ", 1)
- return "{}[{}] {}".format(name, ",".join(self.extras), rest)
-
- def __repr__(self) -> str:
- return "{class_name}(base={base!r}, extras={extras!r})".format(
- class_name=self.__class__.__name__,
- base=self.base,
- extras=self.extras,
- )
-
- def __hash__(self) -> int:
- return hash((self.base, self.extras))
-
- def __eq__(self, other: Any) -> bool:
- if isinstance(other, self.__class__):
- return self.base == other.base and self.extras == other.extras
- return False
-
- @property
- def project_name(self) -> NormalizedName:
- return self.base.project_name
-
- @property
- def name(self) -> str:
- """The normalised name of the project the candidate refers to"""
- return format_name(self.base.project_name, self.extras)
-
- @property
- def version(self) -> CandidateVersion:
- return self.base.version
-
- def format_for_error(self) -> str:
- return "{} [{}]".format(
- self.base.format_for_error(), ", ".join(sorted(self.extras))
- )
-
- @property
- def is_installed(self) -> bool:
- return self.base.is_installed
-
- @property
- def is_editable(self) -> bool:
- return self.base.is_editable
-
- @property
- def source_link(self) -> Optional[Link]:
- return self.base.source_link
-
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
- factory = self.base._factory
-
- # Add a dependency on the exact base
- # (See note 2b in the class docstring)
- yield factory.make_requirement_from_candidate(self.base)
- if not with_requires:
- return
-
- # The user may have specified extras that the candidate doesn't
- # support. We ignore any unsupported extras here.
- valid_extras = self.extras.intersection(self.base.dist.iter_provided_extras())
- invalid_extras = self.extras.difference(self.base.dist.iter_provided_extras())
- for extra in sorted(invalid_extras):
- logger.warning(
- "%s %s does not provide the extra '%s'",
- self.base.name,
- self.version,
- extra,
- )
-
- for r in self.base.dist.iter_dependencies(valid_extras):
- requirement = factory.make_requirement_from_spec(
- str(r), self.base._ireq, valid_extras
- )
- if requirement:
- yield requirement
-
- def get_install_requirement(self) -> Optional[InstallRequirement]:
- # We don't return anything here, because we always
- # depend on the base candidate, and we'll get the
- # install requirement from that.
- return None
-
-
-class RequiresPythonCandidate(Candidate):
- is_installed = False
- source_link = None
-
- def __init__(self, py_version_info: Optional[Tuple[int, ...]]) -> None:
- if py_version_info is not None:
- version_info = normalize_version_info(py_version_info)
- else:
- version_info = sys.version_info[:3]
- self._version = Version(".".join(str(c) for c in version_info))
-
- # We don't need to implement __eq__() and __ne__() since there is always
- # only one RequiresPythonCandidate in a resolution, i.e. the host Python.
- # The built-in object.__eq__() and object.__ne__() do exactly what we want.
-
- def __str__(self) -> str:
- return f"Python {self._version}"
-
- @property
- def project_name(self) -> NormalizedName:
- return REQUIRES_PYTHON_IDENTIFIER
-
- @property
- def name(self) -> str:
- return REQUIRES_PYTHON_IDENTIFIER
-
- @property
- def version(self) -> CandidateVersion:
- return self._version
-
- def format_for_error(self) -> str:
- return f"Python {self.version}"
-
- def iter_dependencies(self, with_requires: bool) -> Iterable[Optional[Requirement]]:
- return ()
-
- def get_install_requirement(self) -> Optional[InstallRequirement]:
- return None
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py
deleted file mode 100644
index 39c6f9bfecbd5c72104c879bfd3e95442004dc84..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import torch
-from torch import nn
-from torch.autograd.function import Function
-
-from detectron2.layers import ShapeSpec
-from detectron2.structures import Boxes, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-
-from ..box_regression import Box2BoxTransform
-from ..matcher import Matcher
-from ..poolers import ROIPooler
-from .box_head import build_box_head
-from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference
-from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
-
-
-class _ScaleGradient(Function):
- @staticmethod
- def forward(ctx, input, scale):
- ctx.scale = scale
- return input
-
- @staticmethod
- def backward(ctx, grad_output):
- return grad_output * ctx.scale, None
-
-
-@ROI_HEADS_REGISTRY.register()
-class CascadeROIHeads(StandardROIHeads):
- def _init_box_head(self, cfg, input_shape):
- # fmt: off
- pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION
- pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features)
- sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO
- pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE
- cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
- cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS
- self.num_cascade_stages = len(cascade_ious)
- assert len(cascade_bbox_reg_weights) == self.num_cascade_stages
- assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \
- "CascadeROIHeads only support class-agnostic regression now!"
- assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
- # fmt: on
-
- in_channels = [input_shape[f].channels for f in self.in_features]
- # Check all channel counts are equal
- assert len(set(in_channels)) == 1, in_channels
- in_channels = in_channels[0]
-
- self.box_pooler = ROIPooler(
- output_size=pooler_resolution,
- scales=pooler_scales,
- sampling_ratio=sampling_ratio,
- pooler_type=pooler_type,
- )
- pooled_shape = ShapeSpec(
- channels=in_channels, width=pooler_resolution, height=pooler_resolution
- )
-
- self.box_head = nn.ModuleList()
- self.box_predictor = nn.ModuleList()
- self.box2box_transform = []
- self.proposal_matchers = []
- for k in range(self.num_cascade_stages):
- box_head = build_box_head(cfg, pooled_shape)
- self.box_head.append(box_head)
- self.box_predictor.append(
- FastRCNNOutputLayers(
- cfg,
- box_head.output_shape,
- box2box_transform=Box2BoxTransform(weights=cascade_bbox_reg_weights[k]),
- )
- )
-
- if k == 0:
- # The first matching is done by the matcher of ROIHeads (self.proposal_matcher).
- self.proposal_matchers.append(None)
- else:
- self.proposal_matchers.append(
- Matcher([cascade_ious[k]], [0, 1], allow_low_quality_matches=False)
- )
-
- def forward(self, images, features, proposals, targets=None):
- del images
- if self.training:
- proposals = self.label_and_sample_proposals(proposals, targets)
-
- if self.training:
- # Need targets to box head
- losses = self._forward_box(features, proposals, targets)
- losses.update(self._forward_mask(features, proposals))
- losses.update(self._forward_keypoint(features, proposals))
- return proposals, losses
- else:
- pred_instances = self._forward_box(features, proposals)
- pred_instances = self.forward_with_given_boxes(features, pred_instances)
- return pred_instances, {}
-
- def _forward_box(self, features, proposals, targets=None):
- """
- Args:
- features, targets: the same as in
- Same as in :meth:`ROIHeads.forward`.
- proposals (list[Instances]): the per-image object proposals with
- their matching ground truth.
- Each has fields "proposal_boxes", and "objectness_logits",
- "gt_classes", "gt_boxes".
- """
- features = [features[f] for f in self.in_features]
- head_outputs = [] # (predictor, predictions, proposals)
- prev_pred_boxes = None
- image_sizes = [x.image_size for x in proposals]
- for k in range(self.num_cascade_stages):
- if k > 0:
- # The output boxes of the previous stage are used to create the input
- # proposals of the next stage.
- proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes)
- if self.training:
- proposals = self._match_and_label_boxes(proposals, k, targets)
- predictions = self._run_stage(features, proposals, k)
- prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals)
- head_outputs.append((self.box_predictor[k], predictions, proposals))
-
- if self.training:
- losses = {}
- storage = get_event_storage()
- for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
- with storage.name_scope("stage{}".format(stage)):
- stage_losses = predictor.losses(predictions, proposals)
- losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()})
- return losses
- else:
- # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1)
- scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs]
-
- # Average the scores across heads
- scores = [
- sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
- for scores_per_image in zip(*scores_per_stage)
- ]
- # Use the boxes of the last head
- predictor, predictions, proposals = head_outputs[-1]
- boxes = predictor.predict_boxes(predictions, proposals)
- pred_instances, _ = fast_rcnn_inference(
- boxes,
- scores,
- image_sizes,
- predictor.test_score_thresh,
- predictor.test_nms_thresh,
- predictor.test_topk_per_image,
- )
- return pred_instances
-
- @torch.no_grad()
- def _match_and_label_boxes(self, proposals, stage, targets):
- """
- Match proposals with groundtruth using the matcher at the given stage.
- Label the proposals as foreground or background based on the match.
-
- Args:
- proposals (list[Instances]): One Instances for each image, with
- the field "proposal_boxes".
- stage (int): the current stage
- targets (list[Instances]): the ground truth instances
-
- Returns:
- list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes"
- """
- num_fg_samples, num_bg_samples = [], []
- for proposals_per_image, targets_per_image in zip(proposals, targets):
- match_quality_matrix = pairwise_iou(
- targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
- )
- # proposal_labels are 0 or 1
- matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix)
- if len(targets_per_image) > 0:
- gt_classes = targets_per_image.gt_classes[matched_idxs]
- # Label unmatched proposals (0 label from matcher) as background (label=num_classes)
- gt_classes[proposal_labels == 0] = self.num_classes
- gt_boxes = targets_per_image.gt_boxes[matched_idxs]
- else:
- gt_classes = torch.zeros_like(matched_idxs) + self.num_classes
- gt_boxes = Boxes(
- targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4))
- )
- proposals_per_image.gt_classes = gt_classes
- proposals_per_image.gt_boxes = gt_boxes
-
- num_fg_samples.append((proposal_labels == 1).sum().item())
- num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1])
-
- # Log the number of fg/bg samples in each stage
- storage = get_event_storage()
- storage.put_scalar(
- "stage{}/roi_head/num_fg_samples".format(stage),
- sum(num_fg_samples) / len(num_fg_samples),
- )
- storage.put_scalar(
- "stage{}/roi_head/num_bg_samples".format(stage),
- sum(num_bg_samples) / len(num_bg_samples),
- )
- return proposals
-
- def _run_stage(self, features, proposals, stage):
- """
- Args:
- features (list[Tensor]): #lvl input features to ROIHeads
- proposals (list[Instances]): #image Instances, with the field "proposal_boxes"
- stage (int): the current stage
-
- Returns:
- Same output as `FastRCNNOutputLayers.forward()`.
- """
- box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals])
- # The original implementation averages the losses among heads,
- # but scale up the parameter gradients of the heads.
- # This is equivalent to adding the losses among heads,
- # but scale down the gradients on features.
- box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages)
- box_features = self.box_head[stage](box_features)
- return self.box_predictor[stage](box_features)
-
- def _create_proposals_from_boxes(self, boxes, image_sizes):
- """
- Args:
- boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4
- image_sizes (list[tuple]): list of image shapes in (h, w)
-
- Returns:
- list[Instances]: per-image proposals with the given boxes.
- """
- # Just like RPN, the proposals should not have gradients
- boxes = [Boxes(b.detach()) for b in boxes]
- proposals = []
- for boxes_per_image, image_size in zip(boxes, image_sizes):
- boxes_per_image.clip(image_size)
- if self.training:
- # do not filter empty boxes at inference time,
- # because the scores from each stage need to be aligned and added later
- boxes_per_image = boxes_per_image[boxes_per_image.nonempty()]
- prop = Instances(image_size)
- prop.proposal_boxes = boxes_per_image
- proposals.append(prop)
- return proposals
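The `_ScaleGradient` function near the top of this deleted file is a small autograd trick: the forward pass is the identity, while the backward pass multiplies the incoming gradient by a constant, so the shared box features receive gradients scaled by `1 / num_cascade_stages`. A standalone sketch of the same pattern (illustrative only, not detectron2 code):

```python
import torch
from torch.autograd.function import Function

class ScaleGradient(Function):
    """Identity in the forward pass; scales the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, input, scale):
        ctx.scale = scale
        return input

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient is propagated to the `scale` argument.
        return grad_output * ctx.scale, None

x = torch.ones(3, requires_grad=True)
y = ScaleGradient.apply(x, 1.0 / 3)  # e.g. three cascade stages sharing features
y.sum().backward()
print(x.grad)  # tensor([0.3333, 0.3333, 0.3333]) -- gradients scaled by 1/3
```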
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/train_net.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/train_net.py
deleted file mode 100644
index 377334b1eddbe1868c7896c66a0725492ce5c2a8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/train_net.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-"""
-PointRend Training Script.
-
-This script is a simplified version of the training script in detectron2/tools.
-"""
-
-import os
-import torch
-
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import MetadataCatalog
-from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch
-from detectron2.evaluation import (
- CityscapesEvaluator,
- COCOEvaluator,
- DatasetEvaluators,
- LVISEvaluator,
- verify_results,
-)
-
-from point_rend import add_pointrend_config
-
-
-class Trainer(DefaultTrainer):
- """
- We use the "DefaultTrainer" which contains a number pre-defined logic for
- standard training workflow. They may not work for you, especially if you
- are working on a new research project. In that case you can use the cleaner
- "SimpleTrainer", or write your own training loop.
- """
-
- @classmethod
- def build_evaluator(cls, cfg, dataset_name, output_folder=None):
- """
- Create evaluator(s) for a given dataset.
- This uses the special metadata "evaluator_type" associated with each builtin dataset.
- For your own dataset, you can simply create an evaluator manually in your
- script and do not have to worry about the hacky if-else logic here.
- """
- if output_folder is None:
- output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
- evaluator_list = []
- evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
- if evaluator_type == "lvis":
- return LVISEvaluator(dataset_name, cfg, True, output_folder)
- if evaluator_type == "coco":
- return COCOEvaluator(dataset_name, cfg, True, output_folder)
- if evaluator_type == "cityscapes":
- assert (
- torch.cuda.device_count() >= comm.get_rank()
- ), "CityscapesEvaluator currently do not work with multiple machines."
- return CityscapesEvaluator(dataset_name)
- if len(evaluator_list) == 0:
- raise NotImplementedError(
- "no Evaluator for the dataset {} with the type {}".format(
- dataset_name, evaluator_type
- )
- )
- if len(evaluator_list) == 1:
- return evaluator_list[0]
- return DatasetEvaluators(evaluator_list)
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- add_pointrend_config(cfg)
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-def main(args):
- cfg = setup(args)
-
- if args.eval_only:
- model = Trainer.build_model(cfg)
- DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
- cfg.MODEL.WEIGHTS, resume=args.resume
- )
- res = Trainer.test(cfg, model)
- if comm.is_main_process():
- verify_results(cfg, res)
- return res
-
- trainer = Trainer(cfg)
- trainer.resume_or_load(resume=args.resume)
- return trainer.train()
-
-
-if __name__ == "__main__":
- args = default_argument_parser().parse_args()
- print("Command Line Args:", args)
- launch(
- main,
- args.num_gpus,
- num_machines=args.num_machines,
- machine_rank=args.machine_rank,
- dist_url=args.dist_url,
- args=(args,),
- )
diff --git a/spaces/CVPR/LIVE/pydiffvg/shape.py b/spaces/CVPR/LIVE/pydiffvg/shape.py
deleted file mode 100644
index a87e9e501b10a933afec844709f8d58670bb4ba9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pydiffvg/shape.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import torch
-import svgpathtools
-import math
-
-class Circle:
- def __init__(self, radius, center, stroke_width = torch.tensor(1.0), id = ''):
- self.radius = radius
- self.center = center
- self.stroke_width = stroke_width
- self.id = id
-
-class Ellipse:
- def __init__(self, radius, center, stroke_width = torch.tensor(1.0), id = ''):
- self.radius = radius
- self.center = center
- self.stroke_width = stroke_width
- self.id = id
-
-class Path:
- def __init__(self,
- num_control_points,
- points,
- is_closed,
- stroke_width = torch.tensor(1.0),
- id = '',
- use_distance_approx = False):
- self.num_control_points = num_control_points
- self.points = points
- self.is_closed = is_closed
- self.stroke_width = stroke_width
- self.id = id
- self.use_distance_approx = use_distance_approx
-
-class Polygon:
- def __init__(self, points, is_closed, stroke_width = torch.tensor(1.0), id = ''):
- self.points = points
- self.is_closed = is_closed
- self.stroke_width = stroke_width
- self.id = id
-
-class Rect:
- def __init__(self, p_min, p_max, stroke_width = torch.tensor(1.0), id = ''):
- self.p_min = p_min
- self.p_max = p_max
- self.stroke_width = stroke_width
- self.id = id
-
-class ShapeGroup:
- def __init__(self,
- shape_ids,
- fill_color,
- use_even_odd_rule = True,
- stroke_color = None,
- shape_to_canvas = torch.eye(3),
- id = ''):
- self.shape_ids = shape_ids
- self.fill_color = fill_color
- self.use_even_odd_rule = use_even_odd_rule
- self.stroke_color = stroke_color
- self.shape_to_canvas = shape_to_canvas
- self.id = id
-
-def from_svg_path(path_str, shape_to_canvas = torch.eye(3), force_close = False):
- path = svgpathtools.parse_path(path_str)
- if len(path) == 0:
- return []
- ret_paths = []
- subpaths = path.continuous_subpaths()
- for subpath in subpaths:
- if subpath.isclosed():
- if len(subpath) > 1 and isinstance(subpath[-1], svgpathtools.Line) and subpath[-1].length() < 1e-5:
- subpath.remove(subpath[-1])
- subpath[-1].end = subpath[0].start # Force closing the path
- subpath.end = subpath[-1].end
- assert(subpath.isclosed())
- else:
- beg = subpath[0].start
- end = subpath[-1].end
- if abs(end - beg) < 1e-5:
- subpath[-1].end = beg # Force closing the path
- subpath.end = subpath[-1].end
- assert(subpath.isclosed())
- elif force_close:
- subpath.append(svgpathtools.Line(end, beg))
- subpath.end = subpath[-1].end
- assert(subpath.isclosed())
-
- num_control_points = []
- points = []
-
- for i, e in enumerate(subpath):
- if i == 0:
- points.append((e.start.real, e.start.imag))
- else:
- # Must begin from the end of previous segment
- assert(e.start.real == points[-1][0])
- assert(e.start.imag == points[-1][1])
- if isinstance(e, svgpathtools.Line):
- num_control_points.append(0)
- elif isinstance(e, svgpathtools.QuadraticBezier):
- num_control_points.append(1)
- points.append((e.control.real, e.control.imag))
- elif isinstance(e, svgpathtools.CubicBezier):
- num_control_points.append(2)
- points.append((e.control1.real, e.control1.imag))
- points.append((e.control2.real, e.control2.imag))
- elif isinstance(e, svgpathtools.Arc):
- # Convert to Cubic curves
- # https://www.joecridge.me/content/pdf/bezier-arcs.pdf
- start = e.theta * math.pi / 180.0
- stop = (e.theta + e.delta) * math.pi / 180.0
-
- sign = 1.0
- if stop < start:
- sign = -1.0
-
- epsilon = 0.00001
- debug = abs(e.delta) >= 90.0
- while (sign * (stop - start) > epsilon):
- arc_to_draw = stop - start
- if arc_to_draw > 0.0:
- arc_to_draw = min(arc_to_draw, 0.5 * math.pi)
- else:
- arc_to_draw = max(arc_to_draw, -0.5 * math.pi)
- alpha = arc_to_draw / 2.0
- cos_alpha = math.cos(alpha)
- sin_alpha = math.sin(alpha)
- cot_alpha = 1.0 / math.tan(alpha)
- phi = start + alpha
- cos_phi = math.cos(phi)
- sin_phi = math.sin(phi)
- lambda_ = (4.0 - cos_alpha) / 3.0
- mu = sin_alpha + (cos_alpha - lambda_) * cot_alpha
- last = sign * (stop - (start + arc_to_draw)) <= epsilon
- num_control_points.append(2)
- rx = e.radius.real
- ry = e.radius.imag
- cx = e.center.real
- cy = e.center.imag
- rot = e.phi * math.pi / 180.0
- cos_rot = math.cos(rot)
- sin_rot = math.sin(rot)
- x = lambda_ * cos_phi + mu * sin_phi
- y = lambda_ * sin_phi - mu * cos_phi
- xx = x * cos_rot - y * sin_rot
- yy = x * sin_rot + y * cos_rot
- points.append((cx + rx * xx, cy + ry * yy))
- x = lambda_ * cos_phi - mu * sin_phi
- y = lambda_ * sin_phi + mu * cos_phi
- xx = x * cos_rot - y * sin_rot
- yy = x * sin_rot + y * cos_rot
- points.append((cx + rx * xx, cy + ry * yy))
- if not last:
- points.append((cx + rx * math.cos(rot + start + arc_to_draw),
- cy + ry * math.sin(rot + start + arc_to_draw)))
- start += arc_to_draw
- first = False
- if i != len(subpath) - 1:
- points.append((e.end.real, e.end.imag))
- else:
- if subpath.isclosed():
- # Must end at the beginning of first segment
- assert(e.end.real == points[0][0])
- assert(e.end.imag == points[0][1])
- else:
- points.append((e.end.real, e.end.imag))
- points = torch.tensor(points)
- points = torch.cat((points, torch.ones([points.shape[0], 1])), dim = 1) @ torch.transpose(shape_to_canvas, 0, 1)
- points = points / points[:, 2:3]
- points = points[:, :2].contiguous()
- ret_paths.append(Path(torch.tensor(num_control_points), points, subpath.isclosed()))
- return ret_paths
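As a rough usage sketch of the deleted module above (assuming `shape.py` and its `torch`/`svgpathtools` dependencies are importable; the module name `shape` in the import is only illustrative), `from_svg_path` turns an SVG path string into `Path` objects whose `points` tensor holds the anchor and control points:

```python
from shape import from_svg_path  # hypothetical import of the module shown above

# A closed triangle made of three straight segments.
paths = from_svg_path("M 0 0 L 10 0 L 10 10 Z")

for p in paths:
    # Straight lines contribute 0 control points per segment,
    # quadratic Béziers 1, cubic Béziers 2.
    print(p.is_closed, p.num_control_points.tolist(), p.points.shape)
```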
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/logical.h
deleted file mode 100644
index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/logical.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/fcn_occmask_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/fcn_occmask_head.py
deleted file mode 100644
index 17953ed183cc5f1cd55af7d3196fe6ffa4aa06db..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/fcn_occmask_head.py
+++ /dev/null
@@ -1,570 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import Conv2d, ConvModule, build_upsample_layer
-from mmcv.ops.carafe import CARAFEPack
-from mmcv.runner import auto_fp16, force_fp32
-from torch.nn.modules.utils import _pair
-
-from mmdet.core import mask_target
-from mmdet.models.builder import HEADS, build_loss
-
-BYTES_PER_FLOAT = 4
-# TODO: This memory limit may be too much or too little. It would be better to
-# determine it based on available resources.
-GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit
-
-
-@HEADS.register_module()
-class FCNOccMaskHead(nn.Module):
-
- def __init__(self,
- num_convs=4,
- roi_feat_size=14,
- in_channels=256,
- conv_kernel_size=3,
- conv_out_channels=256,
- num_classes=80,
- class_agnostic=False,
- upsample_cfg=dict(type='deconv', scale_factor=2),
- conv_cfg=None,
- norm_cfg=None,
- loss_mask=dict(
- type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)):
- super(FCNOccMaskHead, self).__init__()
- self.upsample_cfg = upsample_cfg.copy()
- if self.upsample_cfg['type'] not in [
- None, 'deconv', 'nearest', 'bilinear', 'carafe'
- ]:
- raise ValueError(
- f'Invalid upsample method {self.upsample_cfg["type"]}, '
- 'accepted methods are "deconv", "nearest", "bilinear", '
- '"carafe"')
- self.num_convs = num_convs
- # WARN: roi_feat_size is reserved and not used
- self.roi_feat_size = _pair(roi_feat_size)
- self.in_channels = in_channels
- self.conv_kernel_size = conv_kernel_size
- self.conv_out_channels = conv_out_channels
- self.upsample_method = self.upsample_cfg.get('type')
- self.scale_factor = self.upsample_cfg.pop('scale_factor', None)
- self.num_classes = num_classes
- self.class_agnostic = class_agnostic
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.fp16_enabled = False
- self.loss_mask = build_loss(loss_mask)
-
- self.convs = nn.ModuleList()
- for i in range(self.num_convs):
- if i ==0:
- in_channels_change = in_channels*2
- else:
- in_channels_change = in_channels
-
- in_channels = (
- self.in_channels if i == 0 else self.conv_out_channels)
- padding = (self.conv_kernel_size - 1) // 2
- self.convs.append(
- ConvModule(
- in_channels_change,
- self.conv_out_channels,
- self.conv_kernel_size,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg))
-
- self.convs_occluder = nn.ModuleList()
- for i in range(self.num_convs):
- in_channels = (
- self.in_channels if i == 0 else self.conv_out_channels)
- padding = (self.conv_kernel_size - 1) // 2
- self.convs_occluder.append(
- ConvModule(
- in_channels,
- self.conv_out_channels,
- self.conv_kernel_size,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg))
-
- upsample_in_channels = (
- self.conv_out_channels if self.num_convs > 0 else in_channels)
- upsample_cfg_ = self.upsample_cfg.copy()
- if self.upsample_method is None:
- self.upsample = None
- elif self.upsample_method == 'deconv':
- upsample_cfg_.update(
- in_channels=upsample_in_channels,
- out_channels=self.conv_out_channels,
- kernel_size=self.scale_factor,
- stride=self.scale_factor)
- self.upsample = build_upsample_layer(upsample_cfg_)
- elif self.upsample_method == 'carafe':
- upsample_cfg_.update(
- channels=upsample_in_channels, scale_factor=self.scale_factor)
- self.upsample = build_upsample_layer(upsample_cfg_)
- else:
- # suppress warnings
- align_corners = (None
- if self.upsample_method == 'nearest' else False)
- upsample_cfg_.update(
- scale_factor=self.scale_factor,
- mode=self.upsample_method,
- align_corners=align_corners)
- self.upsample = build_upsample_layer(upsample_cfg_)
-
- out_channels = 1 if self.class_agnostic else self.num_classes
- logits_in_channel = (
- self.conv_out_channels
- if self.upsample_method == 'deconv' else upsample_in_channels)
- self.conv_logits = Conv2d(logits_in_channel, out_channels, 1)
- self.conv_logits_occluder = Conv2d(logits_in_channel, out_channels, 1)
- self.relu = nn.ReLU(inplace=True)
- self.debug_imgs = None
-
- def init_weights(self):
- for m in [self.upsample, self.conv_logits]:
- if m is None:
- continue
- elif isinstance(m, CARAFEPack):
- m.init_weights()
- else:
- nn.init.kaiming_normal_(
- m.weight, mode='fan_out', nonlinearity='relu')
- nn.init.constant_(m.bias, 0)
-
- @auto_fp16()
- def forward(self, x):
- y = x.clone()
- for conv in self.convs_occluder:
- y = conv(y)
- x = torch.cat((x, y), 1)
- for conv in self.convs:
- x = conv(x)
- if self.upsample is not None:
- x = self.upsample(x)
- if self.upsample_method == 'deconv':
- x = self.relu(x)
- if self.upsample is not None:
- y = self.upsample(y)
- if self.upsample_method == 'deconv':
- y = self.relu(y)
- mask_pred = self.conv_logits(x)
- mask_occluder_pred = self.conv_logits_occluder(y)
- return mask_pred, mask_occluder_pred
-
- def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg):
- pos_proposals = [res.pos_bboxes for res in sampling_results]
- pos_assigned_gt_inds = [
- res.pos_assigned_gt_inds for res in sampling_results
- ]
- mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds,
- gt_masks, rcnn_train_cfg)
- return mask_targets
-
- @force_fp32(apply_to=('mask_pred', ))
- def loss(self, mask_pred, mask_targets, labels):
- """
- Example:
- >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA
- >>> N = 7 # N = number of extracted ROIs
- >>> C, H, W = 11, 32, 32
- >>> # Create example instance of FCN Mask Head.
- >>> # There are lots of variations depending on the configuration
- >>> self = FCNMaskHead(num_classes=C, num_convs=1)
- >>> inputs = torch.rand(N, self.in_channels, H, W)
- >>> mask_pred = self.forward(inputs)
- >>> sf = self.scale_factor
- >>> labels = torch.randint(0, C, size=(N,))
- >>> # With the default properties the mask targets should indicate
- >>> # a (potentially soft) single-class label
- >>> mask_targets = torch.rand(N, H * sf, W * sf)
- >>> loss = self.loss(mask_pred, mask_targets, labels)
- >>> print('loss = {!r}'.format(loss))
- """
- mask_full_pred, mask_occ_pred = mask_pred
- loss = dict()
- if mask_full_pred.size(0) == 0:
- loss_mask_vis = mask_full_pred.sum()
- else:
- if self.class_agnostic:
- loss_mask = self.loss_mask(mask_full_pred, mask_targets,
- torch.zeros_like(labels))
- else:
- #print(mask_pred[:,0:1].shape, mask_targets[0::2].shape, labels.shape)
- loss_mask_vis = self.loss_mask(mask_full_pred[:,0:1], mask_targets[0::2], labels)
- loss['loss_mask_vis'] = loss_mask_vis
-
- if mask_occ_pred.size(0) == 0:
- loss_mask = mask_occ_pred.sum()
- else:
- if self.class_agnostic:
- loss_mask = self.loss_mask(mask_occ_pred, mask_targets,
- torch.zeros_like(labels))
- else:
- loss_mask_occ = self.loss_mask(mask_occ_pred[:,0:1], mask_targets[1::2], labels)
- loss['loss_mask_occ'] = loss_mask_occ
- return loss
-
- def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg,
- ori_shape, scale_factor, rescale):
- """Get segmentation masks from mask_pred and bboxes.
- Args:
- mask_pred (Tensor or ndarray): shape (n, #class, h, w).
- For single-scale testing, mask_pred is the direct output of
- model, whose type is Tensor, while for multi-scale testing,
- it will be converted to numpy array outside of this method.
- det_bboxes (Tensor): shape (n, 4/5)
- det_labels (Tensor): shape (n, )
- rcnn_test_cfg (dict): rcnn testing config
- ori_shape (Tuple): original image height and width, shape (2,)
- scale_factor(float | Tensor): If ``rescale is True``, box
- coordinates are divided by this scale factor to fit
- ``ori_shape``.
- rescale (bool): If True, the resulting masks will be rescaled to
- ``ori_shape``.
- Returns:
- list[list]: encoded masks. The c-th item in the outer list
- corresponds to the c-th class. Given the c-th outer list, the
- i-th item in that inner list is the mask for the i-th box with
- class label c.
- Example:
- >>> import mmcv
- >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA
- >>> N = 7 # N = number of extracted ROIs
- >>> C, H, W = 11, 32, 32
- >>> # Create example instance of FCN Mask Head.
- >>> self = FCNMaskHead(num_classes=C, num_convs=0)
- >>> inputs = torch.rand(N, self.in_channels, H, W)
- >>> mask_pred = self.forward(inputs)
- >>> # Each input is associated with some bounding box
- >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N)
- >>> det_labels = torch.randint(0, C, size=(N,))
- >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, })
- >>> ori_shape = (H * 4, W * 4)
- >>> scale_factor = torch.FloatTensor((1, 1))
- >>> rescale = False
- >>> # Encoded masks are a list for each category.
- >>> encoded_masks = self.get_seg_masks(
- >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape,
- >>> scale_factor, rescale
- >>> )
- >>> assert len(encoded_masks) == C
- >>> assert sum(list(map(len, encoded_masks))) == N
- """
- if isinstance(mask_pred, torch.Tensor):
- mask_pred = mask_pred.sigmoid()
- else:
- mask_pred = det_bboxes.new_tensor(mask_pred)
-
- device = mask_pred.device
- cls_segms = [[] for _ in range(self.num_classes)
- ] # BG is not included in num_classes
- bboxes = det_bboxes[:, :4]
- labels = det_labels
-
- if rescale:
- img_h, img_w = ori_shape[:2]
- else:
- if isinstance(scale_factor, float):
- img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32)
- img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32)
- else:
- w_scale, h_scale = scale_factor[0], scale_factor[1]
- img_h = np.round(ori_shape[0] * h_scale.item()).astype(
- np.int32)
- img_w = np.round(ori_shape[1] * w_scale.item()).astype(
- np.int32)
- scale_factor = 1.0
-
- if not isinstance(scale_factor, (float, torch.Tensor)):
- scale_factor = bboxes.new_tensor(scale_factor)
- bboxes = bboxes / scale_factor
-
- if torch.onnx.is_in_onnx_export():
- # TODO: Remove after F.grid_sample is supported.
- from torchvision.models.detection.roi_heads \
- import paste_masks_in_image
- masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2])
- thr = rcnn_test_cfg.get('mask_thr_binary', 0)
- if thr > 0:
- masks = masks >= thr
- return masks
-
- N = len(mask_pred)
- # The actual implementation split the input into chunks,
- # and paste them chunk by chunk.
- if device.type == 'cpu':
- # CPU is most efficient when they are pasted one by one with
- # skip_empty=True, so that it performs minimal number of
- # operations.
- num_chunks = N
- else:
- # GPU benefits from parallelism for larger chunks,
- # but may have memory issue
- num_chunks = int(
- np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
- assert (num_chunks <=
- N), 'Default GPU_MEM_LIMIT is too small; try increasing it'
- chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
-
- threshold = rcnn_test_cfg.mask_thr_binary
- im_mask = torch.zeros(
- N,
- img_h,
- img_w,
- device=device,
- dtype=torch.bool if threshold >= 0 else torch.uint8)
-
- if not self.class_agnostic:
- mask_pred = mask_pred[range(N), labels][:, None]
-
- for inds in chunks:
- masks_chunk, spatial_inds = _do_paste_mask(
- mask_pred[inds],
- bboxes[inds],
- img_h,
- img_w,
- skip_empty=device.type == 'cpu')
-
- if threshold >= 0:
- masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
- else:
- # for visualization and debugging
- masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
- im_mask[(inds, ) + spatial_inds] = masks_chunk
-
- for i in range(N):
- cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy())
- return cls_segms
-
- def get_seg_masks1(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg,
- ori_shape, scale_factor, rescale):
- """Get segmentation masks from mask_pred and bboxes.
-
- Args:
- mask_pred (Tensor or ndarray): shape (n, #class, h, w).
- For single-scale testing, mask_pred is the direct output of
- model, whose type is Tensor, while for multi-scale testing,
- it will be converted to numpy array outside of this method.
- det_bboxes (Tensor): shape (n, 4/5)
- det_labels (Tensor): shape (n, )
- rcnn_test_cfg (dict): rcnn testing config
- ori_shape (Tuple): original image height and width, shape (2,)
- scale_factor(float | Tensor): If ``rescale is True``, box
- coordinates are divided by this scale factor to fit
- ``ori_shape``.
- rescale (bool): If True, the resulting masks will be rescaled to
- ``ori_shape``.
-
- Returns:
- list[list]: encoded masks. The c-th item in the outer list
- corresponds to the c-th class. Given the c-th outer list, the
- i-th item in that inner list is the mask for the i-th box with
- class label c.
-
- Example:
- >>> import mmcv
- >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA
- >>> N = 7 # N = number of extracted ROIs
- >>> C, H, W = 11, 32, 32
- >>> # Create example instance of FCN Mask Head.
- >>> self = FCNMaskHead(num_classes=C, num_convs=0)
- >>> inputs = torch.rand(N, self.in_channels, H, W)
- >>> mask_pred = self.forward(inputs)
- >>> # Each input is associated with some bounding box
- >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N)
- >>> det_labels = torch.randint(0, C, size=(N,))
- >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, })
- >>> ori_shape = (H * 4, W * 4)
- >>> scale_factor = torch.FloatTensor((1, 1))
- >>> rescale = False
- >>> # Encoded masks are a list for each category.
- >>> encoded_masks = self.get_seg_masks(
- >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape,
- >>> scale_factor, rescale
- >>> )
- >>> assert len(encoded_masks) == C
- >>> assert sum(list(map(len, encoded_masks))) == N
- """
- if isinstance(mask_pred, torch.Tensor):
- mask_pred = mask_pred.sigmoid()
- else:
- mask_pred = det_bboxes.new_tensor(mask_pred)
-
- device = mask_pred.device
- cls_segms = [[] for _ in range(self.num_classes)
- ] # BG is not included in num_classes
- bboxes = det_bboxes[:, :4]
- labels = det_labels
- labels = torch.cat((labels, torch.tensor(([1]))))
- bboxes = torch.cat((bboxes, bboxes))
- #print(labels,torch.tensor(([1])))
- #asas
-
- if rescale:
- img_h, img_w = ori_shape[:2]
- else:
- if isinstance(scale_factor, float):
- img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32)
- img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32)
- else:
- w_scale, h_scale = scale_factor[0], scale_factor[1]
- img_h = np.round(ori_shape[0] * h_scale.item()).astype(
- np.int32)
- img_w = np.round(ori_shape[1] * w_scale.item()).astype(
- np.int32)
- scale_factor = 1.0
-
- if not isinstance(scale_factor, (float, torch.Tensor)):
- scale_factor = bboxes.new_tensor(scale_factor)
- bboxes = bboxes / scale_factor
-
- if torch.onnx.is_in_onnx_export():
- # TODO: Remove after F.grid_sample is supported.
- from torchvision.models.detection.roi_heads \
- import paste_masks_in_image
- masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2])
- thr = rcnn_test_cfg.get('mask_thr_binary', 0)
- if thr > 0:
- masks = masks >= thr
- return masks
-
- N = len(mask_pred)
- # The actual implementation split the input into chunks,
- # and paste them chunk by chunk.
- if device.type == 'cpu':
- # CPU is most efficient when they are pasted one by one with
- # skip_empty=True, so that it performs minimal number of
- # operations.
- num_chunks = N
- else:
- # GPU benefits from parallelism for larger chunks,
- # but may have memory issue
- num_chunks = int(
- np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
- assert (num_chunks <=
- N), 'Default GPU_MEM_LIMIT is too small; try increasing it'
- chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
-
- threshold = rcnn_test_cfg.mask_thr_binary
- im_mask = torch.zeros(
- N,
- img_h,
- img_w,
- device=device,
- dtype=torch.bool if threshold >= 0 else torch.uint8)
-
- if not self.class_agnostic:
- mask_pred = mask_pred[range(N), labels][:, None]
- #print('-----------------------------')
- #print(chunks)
-
- for inds in chunks:
- #print(mask_pred[inds].shape, bboxes[inds].shape)
- masks_chunk, spatial_inds = _do_paste_mask(
- mask_pred[0:1],
- bboxes[inds],
- img_h,
- img_w,
- skip_empty=device.type == 'cpu')
- masks_chunk_occ, spatial_inds_occ = _do_paste_mask(
- mask_pred[1:2],
- bboxes[inds],
- img_h,
- img_w,
- skip_empty=device.type == 'cpu')
-
-
- if threshold >= 0:
- masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
- masks_chunk_occ = (masks_chunk_occ >= threshold).to(dtype=torch.bool)
- else:
- # for visualization and debugging
- masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
- im_mask[([0], ) + spatial_inds] = masks_chunk
- im_mask[([1], ) + spatial_inds] = masks_chunk_occ
-
-
- for i in range(N):
- cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy())
- #print(cls_segms)
- return cls_segms
-
-
-def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True):
- """Paste instance masks according to boxes.
-
- This implementation is modified from
- https://github.com/facebookresearch/detectron2/
-
- Args:
- masks (Tensor): N, 1, H, W
- boxes (Tensor): N, 4
- img_h (int): Height of the image to be pasted.
- img_w (int): Width of the image to be pasted.
- skip_empty (bool): Only paste masks within the region that
- tightly bound all boxes, and returns the results this region only.
- An important optimization for CPU.
-
- Returns:
- tuple: (Tensor, tuple). The first item is mask tensor, the second one
- is the slice object.
- If skip_empty == False, the whole image will be pasted. It will
- return a mask of shape (N, img_h, img_w) and an empty tuple.
- If skip_empty == True, only area around the mask will be pasted.
- A mask of shape (N, h', w') and its start and end coordinates
- in the original image will be returned.
- """
- # On GPU, paste all masks together (up to chunk size)
- # by using the entire image to sample the masks
- # Compared to pasting them one by one,
- # this has more operations but is faster on COCO-scale dataset.
- device = masks.device
- if skip_empty:
- x0_int, y0_int = torch.clamp(
- boxes.min(dim=0).values.floor()[:2] - 1,
- min=0).to(dtype=torch.int32)
- x1_int = torch.clamp(
- boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32)
- y1_int = torch.clamp(
- boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32)
- else:
- x0_int, y0_int = 0, 0
- x1_int, y1_int = img_w, img_h
- x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1
-
- N = masks.shape[0]
-
- img_y = torch.arange(
- y0_int, y1_int, device=device, dtype=torch.float32) + 0.5
- img_x = torch.arange(
- x0_int, x1_int, device=device, dtype=torch.float32) + 0.5
- img_y = (img_y - y0) / (y1 - y0) * 2 - 1
- img_x = (img_x - x0) / (x1 - x0) * 2 - 1
- # img_x, img_y have shapes (N, w), (N, h)
- if torch.isinf(img_x).any():
- inds = torch.where(torch.isinf(img_x))
- img_x[inds] = 0
- if torch.isinf(img_y).any():
- inds = torch.where(torch.isinf(img_y))
- img_y[inds] = 0
-
- gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1))
- gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1))
- grid = torch.stack([gx, gy], dim=3)
-
- if torch.onnx.is_in_onnx_export():
- raise RuntimeError(
- 'Exporting F.grid_sample from Pytorch to ONNX is not supported.')
- img_masks = F.grid_sample(
- masks.to(dtype=torch.float32), grid, align_corners=False)
-
- if skip_empty:
- return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int))
- else:
- return img_masks[:, 0], ()
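For orientation, the chunked `_do_paste_mask` above essentially resizes each predicted ROI mask to its box and writes it into an image-sized canvas. Below is a minimal, framework-free sketch of that idea for a single mask; the mask size, box, and threshold are placeholder values, not the mmdet defaults:

```python
import torch
import torch.nn.functional as F

def paste_single_mask(mask_prob, box, img_h, img_w, thr=0.5):
    """mask_prob: (h, w) probabilities; box: (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = [int(round(float(v))) for v in box]
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, img_w), min(y1, img_h)
    canvas = torch.zeros(img_h, img_w, dtype=torch.bool)
    if x1 <= x0 or y1 <= y0:
        return canvas  # degenerate box: nothing to paste
    # Resize the ROI mask to the box size, then threshold into a binary mask.
    resized = F.interpolate(mask_prob[None, None], size=(y1 - y0, x1 - x0),
                            mode='bilinear', align_corners=False)[0, 0]
    canvas[y0:y1, x0:x1] = resized >= thr
    return canvas

mask = paste_single_mask(torch.rand(28, 28), (10, 20, 74, 84), img_h=128, img_w=128)
print(mask.shape, int(mask.sum()))
```

The real implementation samples batched boxes with `F.grid_sample` and skips empty regions on CPU; the sketch only illustrates the paste-and-threshold step.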
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/audio_text.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/audio_text.py
deleted file mode 100644
index cae32d4eb78c4268bf6ef1bae3c15a399af046bf..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/audio_text.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import json
-
-import requests
-
-from autogpt.config import Config
-from autogpt.workspace import path_in_workspace
-
-cfg = Config()
-
-
-def read_audio_from_file(audio_path):
- audio_path = path_in_workspace(audio_path)
- with open(audio_path, "rb") as audio_file:
- audio = audio_file.read()
- return read_audio(audio)
-
-
-def read_audio(audio):
- model = cfg.huggingface_audio_to_text_model
- api_url = f"https://api-inference.huggingface.co/models/{model}"
- api_token = cfg.huggingface_api_token
- headers = {"Authorization": f"Bearer {api_token}"}
-
- if api_token is None:
- raise ValueError(
- "You need to set your Hugging Face API token in the config file."
- )
-
- response = requests.post(
- api_url,
- headers=headers,
- data=audio,
- )
-
- text = json.loads(response.content.decode("utf-8"))["text"]
- return "The audio says: " + text
diff --git a/spaces/Chukwuka/FoodVision-Model/model.py b/spaces/Chukwuka/FoodVision-Model/model.py
deleted file mode 100644
index 0de311b275fd1d36537003704f6d0ff19568e701..0000000000000000000000000000000000000000
--- a/spaces/Chukwuka/FoodVision-Model/model.py
+++ /dev/null
@@ -1,44 +0,0 @@
-
-import torch
-import torch.nn as nn
-import torchvision
-
-
-# Create an EffNetB2 feature extractor
-def create_effnet_b2(num_of_class: int = 3,
-                     transform: torchvision.transforms.Compose = None,
-                     seed: int = 42
- ):
- """Creates an EfficientNetB2 feature extractor model and transforms.
-
- Args:
- num_classes (int, optional): number of classes in the classifier head.
- Defaults to 3.
- seed (int, optional): random seed value. Defaults to 42.
-
- Returns:
- model (torch.nn.Module): EffNetB2 feature extractor model.
- transforms (torchvision.transforms): EffNetB2 image transforms.
- """
-
-    # 1. Get the base model with pretrained weights and send to the target device
- model = torchvision.models.efficientnet_b2(pretrained=True)
-
- # 2. Freeze the base model layers
- for param in model.parameters():
- param.requires_grad = False
-
- # 3. Set the seeds
- torch.manual_seed(seed)
-
- # 4. Change the classifier head
- model.classifier = nn.Sequential(nn.Dropout(p=0.3, inplace=True),
- nn.Linear(1408, num_of_class, bias=True)
- )
-
- return model, transform
-
-# mymodel = create_effnet_b2(num_of_class=3,
-# transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
-# seed=42)
-# print(mymodel)
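A hypothetical usage sketch for `create_effnet_b2`, assuming the function above is in scope (e.g. imported from this `model` module); the transform and input size are illustrative, not necessarily the space's actual pipeline:

```python
import torch
from torchvision import transforms

# from model import create_effnet_b2  # assuming this file is importable as `model`

effnetb2_transforms = transforms.Compose([
    transforms.Resize((288, 288)),
    transforms.ToTensor(),
])
model_ft, _ = create_effnet_b2(num_of_class=3, transform=effnetb2_transforms, seed=42)
model_ft.eval()
with torch.inference_mode():
    logits = model_ft(torch.rand(1, 3, 288, 288))  # dummy batch of one RGB image
print(logits.shape)  # expected: torch.Size([1, 3])
```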
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/guoba.support.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/guoba.support.js
deleted file mode 100644
index 50067c93593466fac7199f3749c8d2129159843e..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/guoba.support.js
+++ /dev/null
@@ -1,233 +0,0 @@
-import lodash from 'lodash'
-import { Config } from './components/index.js'
-
-// Guoba panel support
-export function supportGuoba() {
- let groupList = Array.from(Bot.gl.values())
- groupList = groupList.map(item => item = { label: `${item.group_name}-${item.group_id}`, value: item.group_id })
- return {
-    // Plugin information, displayed on the frontend page.
-    // If your plugin is not in the plugin library, fill in this supplementary info;
-    // if it is already listed, these fields are optional and take precedence when provided.
- pluginInfo: {
- name: 'ws-plugin',
- title: 'ws-plugin',
- author: '@小叶',
- authorLink: 'https://gitee.com/xiaoye12123',
- link: 'https://gitee.com/xiaoye12123/ws-plugin',
- isV3: true,
- isV2: false,
-      description: 'Yunzai-Bot 的扩展插件 ws-plugin 提供onebot协议适配,通过ws连接onebot实现的bot',
-      // Display icon; this is a personalization option.
-      // Icons can be searched at https://icon-sets.iconify.design
-      icon: 'bx:atom',
-      // Icon color, e.g. #FF0000 or rgb(255, 0, 0)
-      iconColor: 'rgb(241,212,152)',
-      // To display an image instead, an icon path (absolute path) can also be set
- // iconPath: path.join(_paths.pluginRoot, 'resources/images/icon.png'),
- },
-    // Config item information
-    configInfo: {
-      // Config item schemas
- schemas: [
- {
- component: 'Divider',
- label: '通知设置'
- },
- {
- field: 'msg.noMsgStart',
- label: '上报设置1',
- bottomHelpMessage: '以数组内开头的消息不上报',
- component: 'GTags',
- componentProps: {
- allowAdd: true,
- allowDel: true,
- },
- },
- {
- field: 'msg.noMsgInclude',
- label: '上报设置2',
- bottomHelpMessage: '包含了数组内的消息不上报',
- component: 'GTags',
- componentProps: {
- allowAdd: true,
- allowDel: true,
- },
- },
- {
- field: 'msg.noGroup',
- label: '黑名单群聊',
- bottomHelpMessage: '数组内的群消息不上报',
- component: 'Select',
- componentProps: {
- allowAdd: true,
- allowDel: true,
- mode: 'multiple',
- options: groupList
- }
- },
- {
- field: 'msg.yesGroup',
- label: '白名单群聊',
- bottomHelpMessage: '只上报数组内的群消息',
- component: 'Select',
- componentProps: {
- allowAdd: true,
- allowDel: true,
- mode: 'multiple',
- options: groupList
- }
- },
- {
- field: 'msg.disconnectToMaster',
- label: '断开连接',
- bottomHelpMessage: '断开连接时否通知主人',
- component: 'Switch',
- },
- {
- field: 'msg.reconnectToMaster',
- label: '重新连接',
- bottomHelpMessage: '重新连接成功时是否通知主人',
- component: 'Switch',
- },
- {
- field: 'msg.firstconnectToMaster',
- label: '首次连接',
- bottomHelpMessage: '首次连接时是否通知主人成功还是失败',
- component: 'Switch',
- },
- {
- field: 'msg.msgStoreTime',
- label: '消息存储时间',
- bottomHelpMessage: '消息存储时间,用于撤回和回复消息,单位秒',
- component: 'InputNumber',
- required: true,
- componentProps: {
- min: 0,
- placeholder: '请输入时间',
- },
- },
- {
- component: 'Divider',
- label: '上报设置'
- },
- {
- field: 'notice.groupAdmin',
- label: '管理变动',
- bottomHelpMessage: '群管理员变动是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.groupDecrease',
- label: '群员减少',
- bottomHelpMessage: '群成员减少是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.groupIncrease',
- label: '群员增加',
- bottomHelpMessage: '群成员增加是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.groupBan',
- label: '群内禁言',
- bottomHelpMessage: '群禁言是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.friendIncrease',
- label: '好友添加',
- bottomHelpMessage: '好友添加是否上报(添加成功之后)',
- component: 'Switch',
- },
- {
- field: 'notice.groupRecall',
- label: '群内撤回',
- bottomHelpMessage: '群消息撤回是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.friendRecall',
- label: '好友撤回',
- bottomHelpMessage: '好友消息撤回是否上报',
- component: 'Switch',
- },
- {
- field: 'notice.groupPoke',
- label: '群戳一戳',
- bottomHelpMessage: '群内戳一戳是否上报',
- component: 'Switch',
- },
- {
- component: 'Divider',
- label: '请求设置'
- },
- {
- field: 'request.friendAdd',
- label: '好友申请',
- bottomHelpMessage: '好友申请是否上报',
- component: 'Switch',
- },
- {
- field: 'request.groupInvite',
- label: '群聊邀请',
- bottomHelpMessage: '群聊邀请是否上报 (邀请机器人入群)',
- component: 'Switch',
- },
- {
- field: 'request.groupAdd',
- label: '群聊申请',
- bottomHelpMessage: '群聊申请是否上报 (申请加入群聊)',
- component: 'Switch',
- },
- {
- component: 'Divider',
- label: '连接设置'
- },
- {
- field: 'ws.heartbeatInterval',
- label: '心跳频率',
- bottomHelpMessage: '心跳频率, 单位秒',
- component: 'InputNumber',
- required: true,
- componentProps: {
- min: 0,
- placeholder: '请输入心跳频率时间',
- },
- },
- {
- field: 'ws.messagePostFormat',
- label: '上报类型',
- bottomHelpMessage: '可选: 1:string, 2:array',
- component: 'RadioGroup',
- componentProps: {
- options: [
- { label: 'string', value: 1 },
- { label: 'array', value: 2 },
- ],
- },
- },
- ],
-      // Method to get config data (used to populate the frontend display)
- getConfigData() {
- return {
- ws: Config.getDefOrConfig('ws-config'),
- msg: Config.getDefOrConfig('msg-config'),
- notice: Config.getDefOrConfig('notice-config'),
- request: Config.getDefOrConfig('request-config')
- }
- },
-      // Method to save config data (called after the frontend confirms)
- setConfigData(data, { Result }) {
- let config = Config.getCfg()
- for (const key in data) {
- let split = key.split('.')
- if (lodash.isEqual(config[split[1]], data[key])) continue
- Config.modify(split[0] + '-config', split[1], data[key])
- }
- return Result.ok({}, '保存成功~')
- },
- },
- }
-}
diff --git a/spaces/Clebersla/RVC_V2_Huggingface_Version/config.py b/spaces/Clebersla/RVC_V2_Huggingface_Version/config.py
deleted file mode 100644
index 5b72235b58b65ac629f49bcc4aad032b5b59d8d4..0000000000000000000000000000000000000000
--- a/spaces/Clebersla/RVC_V2_Huggingface_Version/config.py
+++ /dev/null
@@ -1,204 +0,0 @@
-import argparse
-import sys
-import torch
-import json
-from multiprocessing import cpu_count
-
-global usefp16
-usefp16 = False
-
-
-def use_fp32_config():
- usefp16 = False
- device_capability = 0
- if torch.cuda.is_available():
- device = torch.device("cuda:0") # Assuming you have only one GPU (index 0).
- device_capability = torch.cuda.get_device_capability(device)[0]
- if device_capability >= 7:
- usefp16 = True
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as d:
- data = json.load(d)
-
- if "train" in data and "fp16_run" in data["train"]:
- data["train"]["fp16_run"] = True
-
- with open(f"configs/{config_file}", "w") as d:
- json.dump(data, d, indent=4)
-
- print(f"Set fp16_run to true in {config_file}")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8"
- ) as f:
- strr = f.read()
-
- strr = strr.replace("3.0", "3.7")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8"
- ) as f:
- f.write(strr)
- else:
- for config_file in ["32k.json", "40k.json", "48k.json"]:
- with open(f"configs/{config_file}", "r") as f:
- data = json.load(f)
-
- if "train" in data and "fp16_run" in data["train"]:
- data["train"]["fp16_run"] = False
-
- with open(f"configs/{config_file}", "w") as d:
- json.dump(data, d, indent=4)
-
- print(f"Set fp16_run to false in {config_file}")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "r", encoding="utf-8"
- ) as f:
- strr = f.read()
-
- strr = strr.replace("3.7", "3.0")
-
- with open(
- "trainset_preprocess_pipeline_print.py", "w", encoding="utf-8"
- ) as f:
- f.write(strr)
- else:
- print(
- "CUDA is not available. Make sure you have an NVIDIA GPU and CUDA installed."
- )
- return (usefp16, device_capability)
-
-
-class Config:
- def __init__(self):
- self.device = "cuda:0"
- self.is_half = True
- self.n_cpu = 0
- self.gpu_name = None
- self.gpu_mem = None
- (
- self.python_cmd,
- self.listen_port,
- self.iscolab,
- self.noparallel,
- self.noautoopen,
- self.paperspace,
- self.is_cli,
- ) = self.arg_parse()
-
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
-
- @staticmethod
- def arg_parse() -> tuple:
- exe = sys.executable or "python"
- parser = argparse.ArgumentParser()
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
- parser.add_argument(
- "--noparallel", action="store_true", help="Disable parallel processing"
- )
- parser.add_argument(
- "--noautoopen",
- action="store_true",
- help="Do not open in browser automatically",
- )
- parser.add_argument( # Fork Feature. Paperspace integration for web UI
- "--paperspace",
- action="store_true",
- help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.",
- )
- parser.add_argument( # Fork Feature. Embed a CLI into the infer-web.py
- "--is_cli",
- action="store_true",
- help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!",
- )
- cmd_opts = parser.parse_args()
-
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
- return (
- cmd_opts.pycmd,
- cmd_opts.port,
- cmd_opts.colab,
- cmd_opts.noparallel,
- cmd_opts.noautoopen,
- cmd_opts.paperspace,
- cmd_opts.is_cli,
- )
-
-    # has_mps is only available in nightly PyTorch (for now) and macOS 12.3+.
- # check `getattr` and try it for compatibility
- @staticmethod
- def has_mps() -> bool:
- if not torch.backends.mps.is_available():
- return False
- try:
- torch.zeros(1).to(torch.device("mps"))
- return True
- except Exception:
- return False
-
- def device_config(self) -> tuple:
- if torch.cuda.is_available():
- i_device = int(self.device.split(":")[-1])
- self.gpu_name = torch.cuda.get_device_name(i_device)
- if (
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
- or "P40" in self.gpu_name.upper()
- or "1060" in self.gpu_name
- or "1070" in self.gpu_name
- or "1080" in self.gpu_name
- ):
- print("Found GPU", self.gpu_name, ", force to fp32")
- self.is_half = False
- else:
- print("Found GPU", self.gpu_name)
- use_fp32_config()
- self.gpu_mem = int(
- torch.cuda.get_device_properties(i_device).total_memory
- / 1024
- / 1024
- / 1024
- + 0.4
- )
- if self.gpu_mem <= 4:
- with open("trainset_preprocess_pipeline_print.py", "r") as f:
- strr = f.read().replace("3.7", "3.0")
- with open("trainset_preprocess_pipeline_print.py", "w") as f:
- f.write(strr)
- elif self.has_mps():
- print("No supported Nvidia GPU found, use MPS instead")
- self.device = "mps"
- self.is_half = False
- use_fp32_config()
- else:
- print("No supported Nvidia GPU found, use CPU instead")
- self.device = "cpu"
- self.is_half = False
- use_fp32_config()
-
- if self.n_cpu == 0:
- self.n_cpu = cpu_count()
-
- if self.is_half:
-            # Configuration for 6 GB of VRAM
- x_pad = 3
- x_query = 10
- x_center = 60
- x_max = 65
- else:
-            # Configuration for 5 GB of VRAM
- x_pad = 1
- x_query = 6
- x_center = 38
- x_max = 41
-
-        if self.gpu_mem is not None and self.gpu_mem <= 4:
- x_pad = 1
- x_query = 5
- x_center = 30
- x_max = 32
-
- return x_pad, x_query, x_center, x_max
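A minimal sketch of the device-selection order that `device_config` and `has_mps` implement above (CUDA, then Apple MPS with a small allocation test, then CPU); it is a simplification, not the RVC code itself:

```python
import torch

def pick_device() -> str:
    """CUDA first, then Apple MPS (verified with a tiny allocation), else CPU."""
    if torch.cuda.is_available():
        return "cuda:0"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        try:
            torch.zeros(1).to(torch.device("mps"))
            return "mps"
        except Exception:
            pass
    return "cpu"

print(pick_device())
```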
diff --git a/spaces/CofAI/chat/g4f/README.md b/spaces/CofAI/chat/g4f/README.md
deleted file mode 100644
index c2cbfd69dc169e2cb4f8d24104fb12a52b91688d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-## 🚀 API G4F
-
-This API is built upon the [gpt4free](https://github.com/xtekky/gpt4free) project.
-
-
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/file.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/file.py
deleted file mode 100644
index 2840d40ab6a2fa222d6594d6980d8234df17eade..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/streams/file.py
+++ /dev/null
@@ -1,147 +0,0 @@
-from __future__ import annotations
-
-from io import SEEK_SET, UnsupportedOperation
-from os import PathLike
-from pathlib import Path
-from typing import Any, BinaryIO, Callable, Mapping, cast
-
-from .. import (
- BrokenResourceError,
- ClosedResourceError,
- EndOfStream,
- TypedAttributeSet,
- to_thread,
- typed_attribute,
-)
-from ..abc import ByteReceiveStream, ByteSendStream
-
-
-class FileStreamAttribute(TypedAttributeSet):
- #: the open file descriptor
- file: BinaryIO = typed_attribute()
- #: the path of the file on the file system, if available (file must be a real file)
- path: Path = typed_attribute()
- #: the file number, if available (file must be a real file or a TTY)
- fileno: int = typed_attribute()
-
-
-class _BaseFileStream:
- def __init__(self, file: BinaryIO):
- self._file = file
-
- async def aclose(self) -> None:
- await to_thread.run_sync(self._file.close)
-
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- attributes: dict[Any, Callable[[], Any]] = {
- FileStreamAttribute.file: lambda: self._file,
- }
-
- if hasattr(self._file, "name"):
- attributes[FileStreamAttribute.path] = lambda: Path(self._file.name)
-
- try:
- self._file.fileno()
- except UnsupportedOperation:
- pass
- else:
- attributes[FileStreamAttribute.fileno] = lambda: self._file.fileno()
-
- return attributes
-
-
-class FileReadStream(_BaseFileStream, ByteReceiveStream):
- """
- A byte stream that reads from a file in the file system.
-
- :param file: a file that has been opened for reading in binary mode
-
- .. versionadded:: 3.0
- """
-
- @classmethod
- async def from_path(cls, path: str | PathLike[str]) -> FileReadStream:
- """
- Create a file read stream by opening the given file.
-
- :param path: path of the file to read from
-
- """
- file = await to_thread.run_sync(Path(path).open, "rb")
- return cls(cast(BinaryIO, file))
-
- async def receive(self, max_bytes: int = 65536) -> bytes:
- try:
- data = await to_thread.run_sync(self._file.read, max_bytes)
- except ValueError:
- raise ClosedResourceError from None
- except OSError as exc:
- raise BrokenResourceError from exc
-
- if data:
- return data
- else:
- raise EndOfStream
-
- async def seek(self, position: int, whence: int = SEEK_SET) -> int:
- """
- Seek the file to the given position.
-
- .. seealso:: :meth:`io.IOBase.seek`
-
- .. note:: Not all file descriptors are seekable.
-
- :param position: position to seek the file to
- :param whence: controls how ``position`` is interpreted
- :return: the new absolute position
- :raises OSError: if the file is not seekable
-
- """
- return await to_thread.run_sync(self._file.seek, position, whence)
-
- async def tell(self) -> int:
- """
- Return the current stream position.
-
- .. note:: Not all file descriptors are seekable.
-
- :return: the current absolute position
- :raises OSError: if the file is not seekable
-
- """
- return await to_thread.run_sync(self._file.tell)
-
-
-class FileWriteStream(_BaseFileStream, ByteSendStream):
- """
- A byte stream that writes to a file in the file system.
-
- :param file: a file that has been opened for writing in binary mode
-
- .. versionadded:: 3.0
- """
-
- @classmethod
- async def from_path(
- cls, path: str | PathLike[str], append: bool = False
- ) -> FileWriteStream:
- """
- Create a file write stream by opening the given file for writing.
-
- :param path: path of the file to write to
- :param append: if ``True``, open the file for appending; if ``False``, any existing file
- at the given path will be truncated
-
- """
- mode = "ab" if append else "wb"
- file = await to_thread.run_sync(Path(path).open, mode)
- return cls(cast(BinaryIO, file))
-
- async def send(self, item: bytes) -> None:
- try:
- await to_thread.run_sync(self._file.write, item)
- except ValueError:
- raise ClosedResourceError from None
- except OSError as exc:
- raise BrokenResourceError from exc
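A small usage example for the two stream classes above; it assumes `anyio` is installed and uses a placeholder file name:

```python
import anyio
from anyio.streams.file import FileReadStream, FileWriteStream

async def main() -> None:
    # Write a few bytes, then read them back through the byte-stream interface.
    async with await FileWriteStream.from_path("example.bin") as send_stream:
        await send_stream.send(b"hello, anyio\n")
    async with await FileReadStream.from_path("example.bin") as receive_stream:
        print(await receive_stream.receive())

anyio.run(main)
```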
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/inference.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/inference.py
deleted file mode 100644
index 729fd8c17b8673647b4757f8600d8ef785b55cb8..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/inference.py
+++ /dev/null
@@ -1,261 +0,0 @@
-"""
-@Date: 2021/09/19
-@description:
-"""
-import json
-import os
-import argparse
-import cv2
-import numpy as np
-import torch
-import matplotlib.pyplot as plt
-import glob
-
-from tqdm import tqdm
-from PIL import Image
-from config.defaults import merge_from_file, get_config
-from dataset.mp3d_dataset import MP3DDataset
-from dataset.zind_dataset import ZindDataset
-from models.build import build_model
-from loss import GradLoss
-from postprocessing.post_process import post_process
-from preprocessing.pano_lsd_align import panoEdgeDetection, rotatePanorama
-from utils.boundary import corners2boundaries, layout2depth
-from utils.conversion import depth2xyz
-from utils.logger import get_logger
-from utils.misc import tensor2np_d, tensor2np
-from evaluation.accuracy import show_grad
-from models.lgt_net import LGT_Net
-from utils.writer import xyz2json
-from visualization.boundary import draw_boundaries
-from visualization.floorplan import draw_floorplan, draw_iou_floorplan
-from visualization.obj3d import create_3d_obj
-
-
-def parse_option():
- parser = argparse.ArgumentParser(description='Panorama Layout Transformer training and evaluation script')
- parser.add_argument('--img_glob',
- type=str,
- required=True,
- help='image glob path')
-
- parser.add_argument('--cfg',
- type=str,
- required=True,
- metavar='FILE',
- help='path of config file')
-
- parser.add_argument('--post_processing',
- type=str,
- default='manhattan',
- choices=['manhattan', 'atalanta', 'original'],
- help='post-processing type')
-
- parser.add_argument('--output_dir',
- type=str,
- default='src/output',
- help='path of output')
-
- parser.add_argument('--visualize_3d', action='store_true',
- help='visualize_3d')
-
- parser.add_argument('--output_3d', action='store_true',
- help='output_3d')
-
- parser.add_argument('--device',
- type=str,
- default='cuda',
- help='device')
-
- args = parser.parse_args()
- args.mode = 'test'
-
- print("arguments:")
- for arg in vars(args):
- print(arg, ":", getattr(args, arg))
- print("-" * 50)
- return args
-
-
-def visualize_2d(img, dt, show_depth=True, show_floorplan=True, show=False, save_path=None):
- dt_np = tensor2np_d(dt)
- dt_depth = dt_np['depth'][0]
- dt_xyz = depth2xyz(np.abs(dt_depth))
- dt_ratio = dt_np['ratio'][0][0]
- dt_boundaries = corners2boundaries(dt_ratio, corners_xyz=dt_xyz, step=None, visible=False, length=img.shape[1])
- vis_img = draw_boundaries(img, boundary_list=dt_boundaries, boundary_color=[0, 1, 0])
-
- if 'processed_xyz' in dt:
- dt_boundaries = corners2boundaries(dt_ratio, corners_xyz=dt['processed_xyz'][0], step=None, visible=False,
- length=img.shape[1])
- vis_img = draw_boundaries(vis_img, boundary_list=dt_boundaries, boundary_color=[1, 0, 0])
-
- if show_depth:
- dt_grad_img = show_depth_normal_grad(dt)
- grad_h = dt_grad_img.shape[0]
- vis_merge = [
- vis_img[0:-grad_h, :, :],
- dt_grad_img,
- ]
- vis_img = np.concatenate(vis_merge, axis=0)
- # vis_img = dt_grad_img.transpose(1, 2, 0)[100:]
-
- if show_floorplan:
- if 'processed_xyz' in dt:
- floorplan = draw_iou_floorplan(dt['processed_xyz'][0][..., ::2], dt_xyz[..., ::2],
- dt_board_color=[1, 0, 0, 1], gt_board_color=[0, 1, 0, 1])
- else:
- floorplan = show_alpha_floorplan(dt_xyz, border_color=[0, 1, 0, 1])
-
- vis_img = np.concatenate([vis_img, floorplan[:, 60:-60, :]], axis=1)
- if show:
- plt.imshow(vis_img)
- plt.show()
- if save_path:
- result = Image.fromarray((vis_img * 255).astype(np.uint8))
- result.save(save_path)
- return vis_img
-
-
-def preprocess(img_ori, q_error=0.7, refine_iter=3, vp_cache_path=None):
- # Align images with VP
-    if vp_cache_path is not None and os.path.exists(vp_cache_path):
- with open(vp_cache_path) as f:
- vp = [[float(v) for v in line.rstrip().split(' ')] for line in f.readlines()]
- vp = np.array(vp)
- else:
- # VP detection and line segment extraction
- _, vp, _, _, _, _, _ = panoEdgeDetection(img_ori,
- qError=q_error,
- refineIter=refine_iter)
- i_img = rotatePanorama(img_ori, vp[2::-1])
-
- if vp_cache_path is not None:
- with open(vp_cache_path, 'w') as f:
- for i in range(3):
- f.write('%.6f %.6f %.6f\n' % (vp[i, 0], vp[i, 1], vp[i, 2]))
-
- return i_img, vp
-
-
-def show_depth_normal_grad(dt):
- grad_conv = GradLoss().to(dt['depth'].device).grad_conv
- dt_grad_img = show_grad(dt['depth'][0], grad_conv, 50)
- dt_grad_img = cv2.resize(dt_grad_img, (1024, 60), interpolation=cv2.INTER_NEAREST)
- return dt_grad_img
-
-
-def show_alpha_floorplan(dt_xyz, side_l=512, border_color=None):
- if border_color is None:
- border_color = [1, 0, 0, 1]
- fill_color = [0.2, 0.2, 0.2, 0.2]
- dt_floorplan = draw_floorplan(xz=dt_xyz[..., ::2], fill_color=fill_color,
- border_color=border_color, side_l=side_l, show=False, center_color=[1, 0, 0, 1])
- dt_floorplan = Image.fromarray((dt_floorplan * 255).astype(np.uint8), mode='RGBA')
-    back = np.zeros([side_l, side_l, len(fill_color)], dtype=np.float64)
- back[..., :] = [0.8, 0.8, 0.8, 1]
- back = Image.fromarray((back * 255).astype(np.uint8), mode='RGBA')
- iou_floorplan = Image.alpha_composite(back, dt_floorplan).convert("RGB")
- dt_floorplan = np.array(iou_floorplan) / 255.0
- return dt_floorplan
-
-
-def save_pred_json(xyz, ratio, save_path):
-    # xyz[..., -1] = -xyz[..., -1]
-    json_data = xyz2json(xyz, ratio)
- with open(save_path, 'w') as f:
- f.write(json.dumps(json_data, indent=4) + '\n')
- return json_data
-
-
-def inference():
- if len(img_paths) == 0:
- logger.error('No images found')
- return
-
- bar = tqdm(img_paths, ncols=100)
- for img_path in bar:
- if not os.path.isfile(img_path):
-            logger.error(f'{img_path} is not a file')
- continue
- name = os.path.basename(img_path).split('.')[0]
- bar.set_description(name)
- img = np.array(Image.open(img_path).resize((1024, 512), Image.Resampling.BICUBIC))[..., :3]
- if args.post_processing is not None and 'manhattan' in args.post_processing:
- bar.set_description("Preprocessing")
- img, vp = preprocess(img, vp_cache_path=os.path.join(args.output_dir, f"{name}_vp.txt"))
-
- img = (img / 255.0).astype(np.float32)
-        run_one_inference(img, model, args, name, logger=logger)
-
-
-def inference_dataset(dataset):
- bar = tqdm(dataset, ncols=100)
- for data in bar:
- bar.set_description(data['id'])
- run_one_inference(data['image'].transpose(1, 2, 0), model, args, name=data['id'], logger=logger)
-
-
-@torch.no_grad()
-def run_one_inference(img, model, args, name, logger, show=True, show_depth=True,
- show_floorplan=True, mesh_format='.gltf', mesh_resolution=512):
- model.eval()
- logger.info("model inference...")
- dt = model(torch.from_numpy(img.transpose(2, 0, 1)[None]).to(args.device))
- if args.post_processing != 'original':
- logger.info(f"post-processing, type:{args.post_processing}...")
- dt['processed_xyz'] = post_process(tensor2np(dt['depth']), type_name=args.post_processing)
-
- visualize_2d(img, dt,
- show_depth=show_depth,
- show_floorplan=show_floorplan,
- show=show,
- save_path=os.path.join(args.output_dir, f"{name}_pred.png"))
- output_xyz = dt['processed_xyz'][0] if 'processed_xyz' in dt else depth2xyz(tensor2np(dt['depth'][0]))
-
- logger.info(f"saving predicted layout json...")
- json_data = save_pred_json(output_xyz, tensor2np(dt['ratio'][0])[0],
- save_path=os.path.join(args.output_dir, f"{name}_pred.json"))
- # if args.visualize_3d:
- # from visualization.visualizer.visualizer import visualize_3d
- # visualize_3d(json_data, (img * 255).astype(np.uint8))
-
- if args.visualize_3d or args.output_3d:
- dt_boundaries = corners2boundaries(tensor2np(dt['ratio'][0])[0], corners_xyz=output_xyz, step=None,
- length=mesh_resolution if 'processed_xyz' in dt else None,
- visible=True if 'processed_xyz' in dt else False)
- dt_layout_depth = layout2depth(dt_boundaries, show=False)
-
- logger.info(f"creating 3d mesh ...")
- create_3d_obj(cv2.resize(img, dt_layout_depth.shape[::-1]), dt_layout_depth,
- save_path=os.path.join(args.output_dir, f"{name}_3d{mesh_format}") if args.output_3d else None,
- mesh=True, show=args.visualize_3d)
-
-
-if __name__ == '__main__':
- logger = get_logger()
- args = parse_option()
- config = get_config(args)
-
- if ('cuda' in args.device or 'cuda' in config.TRAIN.DEVICE) and not torch.cuda.is_available():
- logger.info(f'The {args.device} is not available, will use cpu ...')
- config.defrost()
- args.device = "cpu"
- config.TRAIN.DEVICE = "cpu"
- config.freeze()
-
- model, _, _, _ = build_model(config, logger)
- os.makedirs(args.output_dir, exist_ok=True)
- img_paths = sorted(glob.glob(args.img_glob))
-
- inference()
-
- # dataset = MP3DDataset(root_dir='./src/dataset/mp3d', mode='test', split_list=[
- # ['7y3sRwLe3Va', '155fac2d50764bf09feb6c8f33e8fb76'],
- # ['e9zR4mvMWw7', 'c904c55a5d0e420bbd6e4e030b9fe5b4'],
- # ])
- # dataset = ZindDataset(root_dir='./src/dataset/zind', mode='test', split_list=[
- # '1169_pano_21',
- # '0583_pano_59',
- # ], vp_align=True)
- # inference_dataset(dataset)
diff --git a/spaces/Datasculptor/DescriptionGPT/tools/get_coco_zeroshot_oriorder.py b/spaces/Datasculptor/DescriptionGPT/tools/get_coco_zeroshot_oriorder.py
deleted file mode 100644
index ed6748be1f2ed92741ea78f5a187f9838185a80e..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/tools/get_coco_zeroshot_oriorder.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--data_path', default='datasets/coco/annotations/instances_val2017_unseen_2.json')
- parser.add_argument('--cat_path', default='datasets/coco/annotations/instances_val2017.json')
- args = parser.parse_args()
- print('Loading', args.cat_path)
- cat = json.load(open(args.cat_path, 'r'))['categories']
-
- print('Loading', args.data_path)
- data = json.load(open(args.data_path, 'r'))
- data['categories'] = cat
- out_path = args.data_path[:-5] + '_oriorder.json'
- print('Saving to', out_path)
- json.dump(data, open(out_path, 'w'))
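The script above is a thin CLI around a single JSON operation; for clarity, here is the same step as a reusable function with placeholder paths:

```python
import json

def copy_categories(data_path: str, cat_path: str) -> str:
    """Overwrite `categories` in one COCO annotation file with those of another."""
    with open(cat_path) as f:
        categories = json.load(f)["categories"]
    with open(data_path) as f:
        data = json.load(f)
    data["categories"] = categories
    out_path = data_path[:-5] + "_oriorder.json"  # same naming rule as the script
    with open(out_path, "w") as f:
        json.dump(data, f)
    return out_path

# copy_categories("instances_val2017_unseen_2.json", "instances_val2017.json")
```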
diff --git a/spaces/Datasculptor/MusicGen/audiocraft/data/audio_dataset.py b/spaces/Datasculptor/MusicGen/audiocraft/data/audio_dataset.py
deleted file mode 100644
index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/MusicGen/audiocraft/data/audio_dataset.py
+++ /dev/null
@@ -1,525 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import copy
-from concurrent.futures import ThreadPoolExecutor, Future
-from dataclasses import dataclass, fields
-from contextlib import ExitStack
-import gzip
-import json
-import logging
-import os
-from pathlib import Path
-import random
-import sys
-import typing as tp
-
-import torch
-import torch.nn.functional as F
-
-from .audio import audio_read, audio_info
-from .audio_utils import convert_audio
-from .zip import PathInZip
-
-try:
- import dora
-except ImportError:
- dora = None # type: ignore
-
-
-@dataclass(order=True)
-class BaseInfo:
-
- @classmethod
- def _dict2fields(cls, dictionary: dict):
- return {
- field.name: dictionary[field.name]
- for field in fields(cls) if field.name in dictionary
- }
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- _dictionary = cls._dict2fields(dictionary)
- return cls(**_dictionary)
-
- def to_dict(self):
- return {
- field.name: self.__getattribute__(field.name)
- for field in fields(self)
- }
-
-
-@dataclass(order=True)
-class AudioMeta(BaseInfo):
- path: str
- duration: float
- sample_rate: int
- amplitude: tp.Optional[float] = None
- weight: tp.Optional[float] = None
- # info_path is used to load additional information about the audio file that is stored in zip files.
- info_path: tp.Optional[PathInZip] = None
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- base = cls._dict2fields(dictionary)
- if 'info_path' in base and base['info_path'] is not None:
- base['info_path'] = PathInZip(base['info_path'])
- return cls(**base)
-
- def to_dict(self):
- d = super().to_dict()
- if d['info_path'] is not None:
- d['info_path'] = str(d['info_path'])
- return d
-
-
-@dataclass(order=True)
-class SegmentInfo(BaseInfo):
- meta: AudioMeta
- seek_time: float
- n_frames: int # actual number of frames without padding
- total_frames: int # total number of frames, padding included
- sample_rate: int # actual sample rate
-
-
-DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a']
-
-logger = logging.getLogger(__name__)
-
-
-def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta:
- """AudioMeta from a path to an audio file.
-
- Args:
- file_path (str): Resolved path of valid audio file.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- Returns:
- AudioMeta: Audio file path and its metadata.
- """
- info = audio_info(file_path)
- amplitude: tp.Optional[float] = None
- if not minimal:
- wav, sr = audio_read(file_path)
- amplitude = wav.abs().max().item()
- return AudioMeta(file_path, info.duration, info.sample_rate, amplitude)
-
-
-def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta:
- """If Dora is available as a dependency, try to resolve potential relative paths
- in list of AudioMeta. This method is expected to be used when loading meta from file.
-
- Args:
- m (AudioMeta): Audio meta to resolve.
- fast (bool): If True, uses a really fast check for determining if a file is already absolute or not.
- Only valid on Linux/Mac.
- Returns:
- AudioMeta: Audio meta with resolved path.
- """
- def is_abs(m):
- if fast:
- return str(m)[0] == '/'
- else:
-            return os.path.isabs(str(m))
-
- if not dora:
- return m
-
- if not is_abs(m.path):
- m.path = dora.git_save.to_absolute_path(m.path)
- if m.info_path is not None and not is_abs(m.info_path.zip_path):
- m.info_path.zip_path = dora.git_save.to_absolute_path(m.path)
- return m
-
-
-def find_audio_files(path: tp.Union[Path, str],
- exts: tp.List[str] = DEFAULT_EXTS,
- resolve: bool = True,
- minimal: bool = True,
- progress: bool = False,
- workers: int = 0) -> tp.List[AudioMeta]:
- """Build a list of AudioMeta from a given path,
- collecting relevant audio files and fetching meta info.
-
- Args:
- path (str or Path): Path to folder containing audio files.
- exts (list of str): List of file extensions to consider for audio files.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- progress (bool): Whether to log progress on audio files collection.
- workers (int): number of parallel workers, if 0, use only the current thread.
- Returns:
- List[AudioMeta]: List of audio file path and its metadata.
- """
- audio_files = []
- futures: tp.List[Future] = []
- pool: tp.Optional[ThreadPoolExecutor] = None
- with ExitStack() as stack:
- if workers > 0:
- pool = ThreadPoolExecutor(workers)
- stack.enter_context(pool)
-
- if progress:
- print("Finding audio files...")
- for root, folders, files in os.walk(path, followlinks=True):
- for file in files:
- full_path = Path(root) / file
- if full_path.suffix.lower() in exts:
- audio_files.append(full_path)
- if pool is not None:
- futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal))
- if progress:
- print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr)
-
- if progress:
- print("Getting audio metadata...")
- meta: tp.List[AudioMeta] = []
- for idx, file_path in enumerate(audio_files):
- try:
- if pool is None:
- m = _get_audio_meta(str(file_path), minimal)
- else:
- m = futures[idx].result()
- if resolve:
- m = _resolve_audio_meta(m)
- except Exception as err:
- print("Error with", str(file_path), err, file=sys.stderr)
- continue
- meta.append(m)
- if progress:
- print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr)
- meta.sort()
- return meta
-
-
-def load_audio_meta(path: tp.Union[str, Path],
- resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]:
- """Load list of AudioMeta from an optionally compressed json file.
-
- Args:
- path (str or Path): Path to JSON file.
- resolve (bool): Whether to resolve the path from AudioMeta (default=True).
- fast (bool): activates some tricks to make things faster.
- Returns:
- List[AudioMeta]: List of audio file path and its total duration.
- """
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'rb') as fp: # type: ignore
- lines = fp.readlines()
- meta = []
- for line in lines:
- d = json.loads(line)
- m = AudioMeta.from_dict(d)
- if resolve:
- m = _resolve_audio_meta(m, fast=fast)
- meta.append(m)
- return meta
-
-
-def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]):
- """Save the audio metadata to the file pointer as json.
-
- Args:
- path (str or Path): Path to JSON file.
- metadata (list of BaseAudioMeta): List of audio meta to save.
- """
- Path(path).parent.mkdir(exist_ok=True, parents=True)
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'wb') as fp: # type: ignore
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- json_bytes = json_str.encode('utf-8')
- fp.write(json_bytes)
-
-
-class AudioDataset:
- """Base audio dataset.
-
- The dataset takes a list of AudioMeta and create a dataset composed of segments of audio
- and potentially additional information, by creating random segments from the list of audio
- files referenced in the metadata and applying minimal data pre-processing such as resampling,
- mixing of channels, padding, etc.
-
- If no segment_duration value is provided, the AudioDataset will return the full wav for each
- audio file. Otherwise, it will randomly sample audio files and create a segment of the specified
- duration, applying padding if required.
-
- By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True
- allows to return a tuple containing the torch Tensor and additional metadata on the segment and the
- original audio meta.
-
- Args:
- meta (tp.List[AudioMeta]): List of audio files metadata.
- segment_duration (float): Optional segment duration of audio to load.
- If not specified, the dataset will load the full audio segment from the file.
- shuffle (bool): Set to `True` to have the data reshuffled at every epoch.
- sample_rate (int): Target sample rate of the loaded audio samples.
- channels (int): Target number of channels of the loaded audio samples.
- sample_on_duration (bool): Set to `True` to sample segments with probability
- dependent on audio file duration. This is only used if `segment_duration` is provided.
- sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of
- `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product
- of the file duration and file weight. This is only used if `segment_duration` is provided.
- min_segment_ratio (float): Minimum segment ratio to use when the audio file
- is shorter than the desired segment.
- max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset.
- return_info (bool): Whether to return the wav only or return wav along with segment info and metadata.
- min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided
- audio shorter than this will be filtered out.
- max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided
- audio longer than this will be filtered out.
- """
- def __init__(self,
- meta: tp.List[AudioMeta],
- segment_duration: tp.Optional[float] = None,
- shuffle: bool = True,
- num_samples: int = 10_000,
- sample_rate: int = 48_000,
- channels: int = 2,
- pad: bool = True,
- sample_on_duration: bool = True,
- sample_on_weight: bool = True,
- min_segment_ratio: float = 0.5,
- max_read_retry: int = 10,
- return_info: bool = False,
- min_audio_duration: tp.Optional[float] = None,
- max_audio_duration: tp.Optional[float] = None
- ):
- assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.'
- assert segment_duration is None or segment_duration > 0
- assert segment_duration is None or min_segment_ratio >= 0
- logging.debug(f'sample_on_duration: {sample_on_duration}')
- logging.debug(f'sample_on_weight: {sample_on_weight}')
- logging.debug(f'pad: {pad}')
- logging.debug(f'min_segment_ratio: {min_segment_ratio}')
-
- self.segment_duration = segment_duration
- self.min_segment_ratio = min_segment_ratio
- self.max_audio_duration = max_audio_duration
- self.min_audio_duration = min_audio_duration
- if self.min_audio_duration is not None and self.max_audio_duration is not None:
- assert self.min_audio_duration <= self.max_audio_duration
- self.meta: tp.List[AudioMeta] = self._filter_duration(meta)
- assert len(self.meta) # Fail fast if all data has been filtered.
- self.total_duration = sum(d.duration for d in self.meta)
-
- if segment_duration is None:
- num_samples = len(self.meta)
- self.num_samples = num_samples
- self.shuffle = shuffle
- self.sample_rate = sample_rate
- self.channels = channels
- self.pad = pad
- self.sample_on_weight = sample_on_weight
- self.sample_on_duration = sample_on_duration
- self.sampling_probabilities = self._get_sampling_probabilities()
- self.max_read_retry = max_read_retry
- self.return_info = return_info
-
- def __len__(self):
- return self.num_samples
-
- def _get_sampling_probabilities(self, normalized: bool = True):
- """Return the sampling probabilities for each file inside `self.meta`.
- """
- scores: tp.List[float] = []
- for file_meta in self.meta:
- score = 1.
- if self.sample_on_weight and file_meta.weight is not None:
- score *= file_meta.weight
- if self.sample_on_duration:
- score *= file_meta.duration
- scores.append(score)
- probabilities = torch.tensor(scores)
- if normalized:
- probabilities /= probabilities.sum()
- return probabilities
-
- def sample_file(self, rng: torch.Generator) -> AudioMeta:
-        """Sample a given file from `self.meta`. Can be overridden in subclasses.
- This is only called if `segment_duration` is not None.
-
- You must use the provided random number generator `rng` for reproducibility.
- """
- if not self.sample_on_weight and not self.sample_on_duration:
- file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item())
- else:
- file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item())
-
- return self.meta[file_index]
-
- def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]:
- if self.segment_duration is None:
- file_meta = self.meta[index]
- out, sr = audio_read(file_meta.path)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames,
- sample_rate=self.sample_rate)
- else:
- rng = torch.Generator()
- if self.shuffle:
- # We use index, plus extra randomness
- rng.manual_seed(index + self.num_samples * random.randint(0, 2**24))
- else:
- # We only use index
- rng.manual_seed(index)
-
- for retry in range(self.max_read_retry):
- file_meta = self.sample_file(rng)
-                # We add some variance in the file position even if the audio file is shorter than
-                # the segment, without ending up with empty segments.
- max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio)
- seek_time = torch.rand(1, generator=rng).item() * max_seek
- try:
- out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- target_frames = int(self.segment_duration * self.sample_rate)
- if self.pad:
- out = F.pad(out, (0, target_frames - n_frames))
- segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames,
- sample_rate=self.sample_rate)
- except Exception as exc:
- logger.warning("Error opening file %s: %r", file_meta.path, exc)
- if retry == self.max_read_retry - 1:
- raise
- else:
- break
-
- if self.return_info:
- # Returns the wav and additional information on the wave segment
- return out, segment_info
- else:
- return out
-
- def collater(self, samples):
- """The collater function has to be provided to the dataloader
- if AudioDataset has return_info=True in order to properly collate
- the samples of a batch.
- """
- if self.segment_duration is None and len(samples) > 1:
- assert self.pad, "Must allow padding when batching examples of different durations."
-
- # In this case the audio reaching the collater is of variable length as segment_duration=None.
- to_pad = self.segment_duration is None and self.pad
- if to_pad:
- max_len = max([wav.shape[-1] for wav, _ in samples])
-
- def _pad_wav(wav):
- return F.pad(wav, (0, max_len - wav.shape[-1]))
-
- if self.return_info:
- if len(samples) > 0:
- assert len(samples[0]) == 2
- assert isinstance(samples[0][0], torch.Tensor)
- assert isinstance(samples[0][1], SegmentInfo)
-
- wavs = [wav for wav, _ in samples]
- segment_infos = [copy.deepcopy(info) for _, info in samples]
-
- if to_pad:
- # Each wav could be of a different duration as they are not segmented.
- for i in range(len(samples)):
-                    # Determines the total length of the signal with padding, so we update it here as we pad.
- segment_infos[i].total_frames = max_len
- wavs[i] = _pad_wav(wavs[i])
-
- wav = torch.stack(wavs)
- return wav, segment_infos
- else:
- assert isinstance(samples[0], torch.Tensor)
- if to_pad:
- samples = [_pad_wav(s) for s in samples]
- return torch.stack(samples)
-
- def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
-        """Filters out audio files that are too short or too long.
-        Removes from meta the files whose durations do not allow sampling examples from them.
- """
- orig_len = len(meta)
-
- # Filter data that is too short.
- if self.min_audio_duration is not None:
- meta = [m for m in meta if m.duration >= self.min_audio_duration]
-
- # Filter data that is too long.
- if self.max_audio_duration is not None:
- meta = [m for m in meta if m.duration <= self.max_audio_duration]
-
- filtered_len = len(meta)
- removed_percentage = 100*(1-float(filtered_len)/orig_len)
- msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage
- if removed_percentage < 10:
- logging.debug(msg)
- else:
- logging.warning(msg)
- return meta
-
- @classmethod
- def from_meta(cls, root: tp.Union[str, Path], **kwargs):
- """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_dir():
- if (root / 'data.jsonl').exists():
- root = root / 'data.jsonl'
- elif (root / 'data.jsonl.gz').exists():
- root = root / 'data.jsonl.gz'
- else:
- raise ValueError("Don't know where to read metadata from in the dir. "
- "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
- meta = load_audio_meta(root)
- return cls(meta, **kwargs)
-
- @classmethod
- def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True,
- exts: tp.List[str] = DEFAULT_EXTS, **kwargs):
- """Instantiate AudioDataset from a path containing (possibly nested) audio files.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- minimal_meta (bool): Whether to only load minimal metadata or not.
- exts (list of str): Extensions for audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_file():
- meta = load_audio_meta(root, resolve=True)
- else:
- meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True)
- return cls(meta, **kwargs)
-
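-
-def _audio_dataset_demo():
-    """Hedged usage sketch (illustrative, not part of the original file; the path below is a
-    placeholder): build a dataset of 1-second segments and batch it with the provided collater."""
-    from torch.utils.data import DataLoader
-    dataset = AudioDataset.from_path('/path/to/audio', segment_duration=1.0,
-                                     sample_rate=48_000, channels=2, return_info=True)
-    loader = DataLoader(dataset, batch_size=4, collate_fn=dataset.collater)
-    wav, infos = next(iter(loader))      # wav: [4, 2, 48000], infos: list of SegmentInfo
-    return wav.shape, infos
-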
-
-def main():
- logging.basicConfig(stream=sys.stderr, level=logging.INFO)
- parser = argparse.ArgumentParser(
- prog='audio_dataset',
- description='Generate .jsonl files by scanning a folder.')
- parser.add_argument('root', help='Root folder with all the audio files')
- parser.add_argument('output_meta_file',
-                        help='Output file to store the metadata.')
- parser.add_argument('--complete',
- action='store_false', dest='minimal', default=True,
-                        help='Retrieve all metadata, even the ones that are expensive '
-                             'to compute (e.g. normalization).')
- parser.add_argument('--resolve',
- action='store_true', default=False,
- help='Resolve the paths to be absolute and with no symlinks.')
- parser.add_argument('--workers',
- default=10, type=int,
- help='Number of workers.')
- args = parser.parse_args()
- meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True,
- resolve=args.resolve, minimal=args.minimal, workers=args.workers)
- save_audio_meta(args.output_meta_file, meta)
-
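-# Hedged example invocation of the scanner above (the module path and folders are illustrative only):
-#   python -m audiocraft.data.audio_dataset /data/music egs/music/data.jsonl --resolve --workers 16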
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Docfile/open_llm_leaderboard/src/assets/css_html_js.py b/spaces/Docfile/open_llm_leaderboard/src/assets/css_html_js.py
deleted file mode 100644
index 8215be3b3547c57f4d75a1448a2407334e16fb6d..0000000000000000000000000000000000000000
--- a/spaces/Docfile/open_llm_leaderboard/src/assets/css_html_js.py
+++ /dev/null
@@ -1,111 +0,0 @@
-custom_css = """
-
-.markdown-text {
- font-size: 16px !important;
-}
-
-#models-to-add-text {
- font-size: 18px !important;
-}
-
-#citation-button span {
- font-size: 16px !important;
-}
-
-#citation-button textarea {
- font-size: 16px !important;
-}
-
-#citation-button > label > button {
- margin: 6px;
- transform: scale(1.3);
-}
-
-#leaderboard-table {
- margin-top: 15px
-}
-
-#leaderboard-table-lite {
- margin-top: 15px
-}
-
-#search-bar-table-box > div:first-child {
- background: none;
- border: none;
-}
-
-#search-bar {
- padding: 0px;
-}
-
-/* Hides the final AutoEvalColumn */
-#llm-benchmark-tab-table table td:last-child,
-#llm-benchmark-tab-table table th:last-child {
- display: none;
-}
-
-/* Limit the width of the first AutoEvalColumn so that names don't expand too much */
-table td:first-child,
-table th:first-child {
- max-width: 400px;
- overflow: auto;
- white-space: nowrap;
-}
-
-.tab-buttons button {
- font-size: 20px;
-}
-
-#scale-logo {
- border-style: none !important;
- box-shadow: none;
- display: block;
- margin-left: auto;
- margin-right: auto;
- max-width: 600px;
-}
-
-#scale-logo .download {
- display: none;
-}
-#filter_type{
- border: 0;
- padding-left: 0;
- padding-top: 0;
-}
-#filter_type label {
- display: flex;
-}
-#filter_type label > span{
- margin-top: var(--spacing-lg);
- margin-right: 0.5em;
-}
-#filter_type label > .wrap{
- width: 103px;
-}
-#filter_type label > .wrap .wrap-inner{
- padding: 2px;
-}
-#filter_type label > .wrap .wrap-inner input{
- width: 1px
-}
-#filter-columns-type{
- border:0;
- padding:0.5;
-}
-#filter-columns-size{
- border:0;
- padding:0.5;
-}
-#box-filter > .form{
- border: 0
-}
-"""
-
-get_window_url_params = """
- function(url_params) {
- const params = new URLSearchParams(window.location.search);
- url_params = Object.fromEntries(params);
- return url_params;
- }
- """
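-
-# Hedged usage sketch (kept as comments so importing this module stays side-effect free;
-# everything except `custom_css` and `get_window_url_params` is illustrative):
-#   import gradio as gr
-#   with gr.Blocks(css=custom_css) as demo:
-#       ...  # build the leaderboard UI here
-#   demo.launch()
-# `get_window_url_params` can be attached as the JavaScript callback of the Blocks load event
-# to read URL query parameters (the exact keyword, `js` or `_js`, depends on the Gradio version).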
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py
deleted file mode 100644
index 626a798a8024e8dced8200038f6d397508ecd7c1..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import random
-import torch
-
-
-class LatentCodesPool:
-    """This class implements a latent codes buffer that stores previously generated w latent codes.
- This buffer enables us to update discriminators using a history of generated w's
- rather than the ones produced by the latest encoder.
- """
-
- def __init__(self, pool_size):
-        """Initialize the LatentCodesPool class.
-        Parameters:
-            pool_size (int) -- the size of the latent codes buffer; if pool_size=0, no buffer will be created
- """
- self.pool_size = pool_size
- if self.pool_size > 0: # create an empty pool
- self.num_ws = 0
- self.ws = []
-
- def query(self, ws):
- """Return w's from the pool.
- Parameters:
- ws: the latest generated w's from the generator
- Returns w's from the buffer.
-        With probability 0.5, the buffer will return the input w's.
-        With probability 0.5, the buffer will return w's previously stored in the buffer,
-        and insert the current w's into the buffer.
- """
- if self.pool_size == 0: # if the buffer size is 0, do nothing
- return ws
- return_ws = []
- for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512)
- # w = torch.unsqueeze(image.data, 0)
- if w.ndim == 2:
- # apply a random latent index as a candidate
- i = random.randint(0, len(w) - 1)
- w = w[i]
- self.handle_w(w, return_ws)
-        # collect all the latent codes and return
- return_ws = torch.stack(return_ws, 0)
- return return_ws
-
- def handle_w(self, w, return_ws):
- if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer
- self.num_ws = self.num_ws + 1
- self.ws.append(w)
- return_ws.append(w)
- else:
- p = random.uniform(0, 1)
- if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer
- random_id = random.randint(
- 0, self.pool_size - 1) # randint is inclusive
- tmp = self.ws[random_id].clone()
- self.ws[random_id] = w
- return_ws.append(tmp)
-        else:  # by another 50% chance, the buffer will return the current latent code
- return_ws.append(w)
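-
-
-def _latent_codes_pool_demo():
-    """Hedged usage sketch (illustrative, not part of the original file): mix freshly
-    generated w codes with previously stored ones, as would be done before updating a
-    discriminator on a history of latents."""
-    pool = LatentCodesPool(pool_size=50)
-    fake_ws = torch.randn(4, 512)       # a batch of w codes from the encoder
-    mixed_ws = pool.query(fake_ws)      # some entries may come from the history buffer
-    return mixed_ws.shape               # torch.Size([4, 512])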
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/conv2d_resample.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/conv2d_resample.py
deleted file mode 100644
index 11c0d1c313bd400a76d4d8aed496c4f31d8c6724..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/conv2d_resample.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""2D convolution with optional up/downsampling."""
-
-import torch
-
-from .. import misc
-from . import conv2d_gradfix
-from . import upfirdn2d
-from .upfirdn2d import _parse_padding
-from .upfirdn2d import _get_filter_size
-
-# ----------------------------------------------------------------------------
-
-
-def _get_weight_shape(w):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- shape = [int(sz) for sz in w.shape]
- misc.assert_shape(w, shape)
- return shape
-
-# ----------------------------------------------------------------------------
-
-
-def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True):
- """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations.
- """
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
-
- # Flip weight if requested.
- # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False).
- if not flip_weight:
- w = w.flip([2, 3])
-
- # Workaround performance pitfall in cuDNN 8.0.5, triggered when using
- # 1x1 kernel + memory_format=channels_last + less than 64 channels.
- if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose:
- if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64:
- if out_channels <= 4 and groups == 1:
- in_shape = x.shape
- x = w.squeeze(3).squeeze(
- 2) @ x.reshape([in_shape[0], in_channels_per_group, -1])
- x = x.reshape([in_shape[0], out_channels,
- in_shape[2], in_shape[3]])
- else:
- x = x.to(memory_format=torch.contiguous_format)
- w = w.to(memory_format=torch.contiguous_format)
- x = conv2d_gradfix.conv2d(x, w, groups=groups)
- return x.to(memory_format=torch.channels_last)
-
- # Otherwise => execute using conv2d_gradfix.
- op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
- return op(x, w, stride=stride, padding=padding, groups=groups)
-
-# ----------------------------------------------------------------------------
-
-
-@misc.profiled_function
-def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False):
- r"""2D convolution with optional up/downsampling.
-
- Padding is performed only once at the beginning, not between the operations.
-
- Args:
- x: Input tensor of shape
- `[batch_size, in_channels, in_height, in_width]`.
- w: Weight tensor of shape
- `[out_channels, in_channels//groups, kernel_height, kernel_width]`.
- f: Low-pass filter for up/downsampling. Must be prepared beforehand by
- calling upfirdn2d.setup_filter(). None = identity (default).
- up: Integer upsampling factor (default: 1).
- down: Integer downsampling factor (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- groups: Split input channels into N groups (default: 1).
- flip_weight: False = convolution, True = correlation (default: True).
- flip_filter: False = convolution, True = correlation (default: False).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and (x.ndim == 4)
- assert isinstance(w, torch.Tensor) and (
- w.ndim == 4) and (w.dtype == x.dtype)
- assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [
- 1, 2] and f.dtype == torch.float32)
- assert isinstance(up, int) and (up >= 1)
- assert isinstance(down, int) and (down >= 1)
- assert isinstance(groups, int) and (groups >= 1)
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
- fw, fh = _get_filter_size(f)
- px0, px1, py0, py1 = _parse_padding(padding)
-
- # Adjust padding to account for up/downsampling.
- if up > 1:
- px0 += (fw + up - 1) // 2
- px1 += (fw - up) // 2
- py0 += (fh + up - 1) // 2
- py1 += (fh - up) // 2
- if down > 1:
- px0 += (fw - down + 1) // 2
- px1 += (fw - down) // 2
- py0 += (fh - down + 1) // 2
- py1 += (fh - down) // 2
-
- # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
- if kw == 1 and kh == 1 and (down > 1 and up == 1):
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[
- px0, px1, py0, py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
- if kw == 1 and kh == 1 and (up > 1 and down == 1):
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[
- px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
- return x
-
- # Fast path: downsampling only => use strided convolution.
- if down > 1 and up == 1:
- x = upfirdn2d.upfirdn2d(
- x=x, f=f, padding=[px0, px1, py0, py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, stride=down,
- groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: upsampling with optional downsampling => use transpose strided convolution.
- if up > 1:
- if groups == 1:
- w = w.transpose(0, 1)
- else:
- w = w.reshape(groups, out_channels // groups,
- in_channels_per_group, kh, kw)
- w = w.transpose(1, 2)
- w = w.reshape(groups * in_channels_per_group,
- out_channels // groups, kh, kw)
- px0 -= kw - 1
- px1 -= kw - up
- py0 -= kh - 1
- py1 -= kh - up
- pxt = max(min(-px0, -px1), 0)
- pyt = max(min(-py0, -py1), 0)
- x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[
- pyt, pxt], groups=groups, transpose=True, flip_weight=(not flip_weight))
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[
- px0+pxt, px1+pxt, py0+pyt, py1+pyt], gain=up**2, flip_filter=flip_filter)
- if down > 1:
- x = upfirdn2d.upfirdn2d(
- x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
- # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
- if up == 1 and down == 1:
- if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
- return _conv2d_wrapper(x=x, w=w, padding=[py0, px0], groups=groups, flip_weight=flip_weight)
-
- # Fallback: Generic reference implementation.
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[
- px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
-# ----------------------------------------------------------------------------
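-
-
-def _conv2d_resample_demo():
-    """Hedged usage sketch (illustrative, not part of the original file): a 3x3 convolution
-    combined with 2x upsampling and a separable low-pass filter."""
-    x = torch.randn(1, 64, 32, 32)              # [batch, in_channels, height, width]
-    w = torch.randn(128, 64, 3, 3)              # [out_channels, in_channels, kh, kw]
-    f = upfirdn2d.setup_filter([1, 3, 3, 1])    # filter prepared beforehand, as the docstring requires
-    y = conv2d_resample(x=x, w=w, f=f, up=2, padding=1)
-    return y.shape                              # roughly [1, 128, 64, 64]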
diff --git a/spaces/Dusan/clickbaitonator/fudge/main.py b/spaces/Dusan/clickbaitonator/fudge/main.py
deleted file mode 100644
index e8c2299b2449b6dd07d26c7ae678732b1dabca88..0000000000000000000000000000000000000000
--- a/spaces/Dusan/clickbaitonator/fudge/main.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import os
-import random
-import time
-import pickle
-import math
-from argparse import ArgumentParser
-
-from tqdm import tqdm
-import numpy as np
-import torch
-import torch.nn as nn
-
-from data import Dataset
-from model import Model
-from util import save_checkpoint, ProgressMeter, AverageMeter, num_params, pad_mask
-from constants import *
-
-
-def train(model, dataset, optimizer, criterion, epoch, args, data_start_index):
- model.train()
- if data_start_index == 0:
- dataset.shuffle('train', seed=epoch + args.seed)
- if args.epoch_max_len is not None:
- data_end_index = min(data_start_index + args.epoch_max_len, len(dataset.splits['train']))
- loader = dataset.loader('train', num_workers=args.num_workers, indices=list(range(data_start_index, data_end_index)))
- data_start_index = data_end_index if data_end_index < len(dataset.splits['train']) else 0
- else:
- loader = dataset.loader('train', num_workers=args.num_workers)
- loss_meter = AverageMeter('loss', ':6.4f')
- total_length = len(loader)
- progress = ProgressMeter(total_length, [loss_meter], prefix='Training: ')
- for batch_num, batch in enumerate(tqdm(loader, total=len(loader))):
- batch = [tensor.to(args.device) for tensor in batch]
- inputs, lengths, future_words, log_probs, labels, classification_targets, syllables_to_go, future_word_num_syllables, rhyme_group_index = batch
- if args.task not in ['formality', 'iambic']:
- if not args.debug and len(inputs) != args.batch_size: # it'll screw up the bias...?
- continue
- scores = model(inputs, lengths, future_words, log_probs, syllables_to_go, future_word_num_syllables, rhyme_group_index, run_classifier=True)
- if args.task == 'formality': # we're learning for all positions at once. scores are batch x seq
- expanded_labels = classification_targets.unsqueeze(1).expand(-1, scores.shape[1]) # batch x seq
- length_mask = pad_mask(lengths).permute(1, 0) # batch x seq
- loss = criterion(scores.flatten()[length_mask.flatten()==1], expanded_labels.flatten().float()[length_mask.flatten()==1])
- elif args.task in ['iambic', 'newline']:
- use_indices = classification_targets.flatten() != -1
- loss = criterion(scores.flatten()[use_indices], classification_targets.flatten().float()[use_indices])
- else: # topic, rhyme
- loss = criterion(scores.flatten(), labels.flatten().float())
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
- loss_meter.update(loss.detach(), len(labels))
- if batch_num % args.train_print_freq == 0:
- progress.display(batch_num)
- progress.display(total_length)
- return data_start_index
-
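-
-def _masked_formality_loss_demo():
-    """Hedged sketch (illustrative only, not part of the original training code): the
-    formality branch above scores every position, expands the sequence-level label across
-    the time axis, and masks out padded positions before BCE-with-logits."""
-    scores = torch.randn(2, 5)                     # batch x seq logits from the model
-    labels = torch.tensor([1.0, 0.0])              # one label per sequence
-    lengths = torch.tensor([5, 3])                 # true (unpadded) lengths
-    length_mask = (torch.arange(5).unsqueeze(0) < lengths.unsqueeze(1)).float()  # batch x seq
-    expanded_labels = labels.unsqueeze(1).expand(-1, 5)                          # batch x seq
-    keep = length_mask.flatten() == 1
-    return nn.BCEWithLogitsLoss()(scores.flatten()[keep], expanded_labels.flatten()[keep])
-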
-
-def validate(model, dataset, criterion, epoch, args):
- model.eval()
- random.seed(0)
- loader = dataset.loader('val', num_workers=args.num_workers)
- loss_meter = AverageMeter('loss', ':6.4f')
- total_length = len(loader)
- progress = ProgressMeter(total_length, [loss_meter], prefix='Validation: ')
- with torch.no_grad():
- for batch_num, batch in enumerate(tqdm(loader, total=len(loader))):
- batch = [tensor.to(args.device) for tensor in batch]
- inputs, lengths, future_words, log_probs, labels, classification_targets, syllables_to_go, future_word_num_syllables, rhyme_group_index = batch
- if args.task not in ['formality', 'iambic']: # topic predictor
- if not args.debug and len(inputs) != args.batch_size:
- continue
- scores = model(inputs, lengths, future_words, log_probs, syllables_to_go, future_word_num_syllables, rhyme_group_index, run_classifier=True)
- if args.task == 'formality': # we're learning for all positions at once. scores are batch x seq
- expanded_labels = classification_targets.unsqueeze(1).expand(-1, scores.shape[1]) # batch x seq
- length_mask = pad_mask(lengths).permute(1, 0) # batch x seq
- loss = criterion(scores.flatten()[length_mask.flatten()==1], expanded_labels.flatten().float()[length_mask.flatten()==1])
- elif args.task in ['iambic', 'newline']:
- use_indices = classification_targets.flatten() != -1
- loss = criterion(scores.flatten()[use_indices], classification_targets.flatten().float()[use_indices])
- else: # topic, rhyme
- loss = criterion(scores.flatten(), labels.flatten().float())
- loss_meter.update(loss.detach(), len(labels))
- if batch_num % args.train_print_freq == 0:
- progress.display(batch_num)
- progress.display(total_length)
- return loss_meter.avg
-
-
-def main(args):
- dataset = Dataset(args)
- os.makedirs(args.save_dir, exist_ok=True)
- with open(os.path.join(args.save_dir, 'dataset_info'), 'wb') as wf:
- pickle.dump(dataset.dataset_info, wf)
- if args.task == 'rhyme':
- with open(os.path.join(args.save_dir, 'rhyme_info'), 'wb') as wf:
- pickle.dump(dataset.rhyme_info, wf)
- if args.ckpt:
- checkpoint = torch.load(args.ckpt, map_location=args.device)
- start_epoch = checkpoint['epoch'] + 1
- best_val_metric = checkpoint['best_metric']
- model_args = checkpoint['args']
- model = Model(model_args, dataset.gpt_pad_id, len(dataset.index2word), rhyme_group_size=len(dataset.index2rhyme_group) if args.task == 'rhyme' else None) # no need to get the glove embeddings when reloading since they're saved in model ckpt anyway
- model.load_state_dict(checkpoint['state_dict'])
- model = model.to(args.device)
- optimizer = torch.optim.Adam(model.parameters(), lr=model_args.lr)
- optimizer.load_state_dict(checkpoint['optimizer'])
- data_start_index = checkpoint['data_start_index']
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.ckpt, checkpoint['epoch']))
- # NOTE: just import pdb after loading the model here if you want to play with it, it's easy
- # model.eval()
- # import pdb; pdb.set_trace()
- else:
- model = Model(args, dataset.gpt_pad_id, len(dataset.index2word), rhyme_group_size=len(dataset.index2rhyme_group) if args.task == 'rhyme' else None, glove_embeddings=dataset.glove_embeddings)
- model = model.to(args.device)
- optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
- best_val_metric = 1e8 # lower is better for BCE
- data_start_index = 0
- print('num params', num_params(model))
- criterion = nn.BCEWithLogitsLoss().to(args.device)
-
- if args.evaluate:
- epoch = 0
- validate(model, dataset, criterion, epoch, args)
- return
- for epoch in range(args.epochs):
- print("TRAINING: Epoch {} at {}".format(epoch, time.ctime()))
- data_start_index = train(model, dataset, optimizer, criterion, epoch, args, data_start_index)
- if epoch % args.validation_freq == 0:
- print("VALIDATION: Epoch {} at {}".format(epoch, time.ctime()))
- metric = validate(model, dataset, criterion, epoch, args)
-
- if not args.debug:
- if metric < best_val_metric:
- print('new best val metric', metric)
- best_val_metric = metric
- save_checkpoint({
- 'epoch': epoch,
- 'state_dict': model.state_dict(),
- 'best_metric': best_val_metric,
- 'optimizer': optimizer.state_dict(),
- 'data_start_index': data_start_index,
- 'args': args
- }, os.path.join(args.save_dir, 'model_best.pth.tar'))
- save_checkpoint({
- 'epoch': epoch,
- 'state_dict': model.state_dict(),
- 'best_metric': metric,
- 'optimizer': optimizer.state_dict(),
- 'data_start_index': data_start_index,
- 'args': args
- }, os.path.join(args.save_dir, 'model_epoch' + str(epoch) + '.pth.tar'))
-
-
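-# Hedged example invocation (paths and values below are illustrative only):
-#   python main.py --task formality --data_dir data/formality --save_dir ckpt/formality \
-#       --batch_size 128 --epochs 20 --device cuda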
-if __name__=='__main__':
- parser = ArgumentParser()
-
- # DATA
- parser.add_argument('--task', type=str, required=True, choices=['iambic', 'rhyme', 'newline', 'topic', 'formality', 'clickbait'])
- parser.add_argument('--data_dir', type=str, required=True)
- parser.add_argument('--glove_file', type=str, help='glove embedding init, for topic task')
-
- # SAVE/LOAD
- parser.add_argument('--save_dir', type=str, required=True, help='where to save ckpts')
- parser.add_argument('--ckpt', type=str, default=None, help='load ckpt from file if given')
- parser.add_argument('--dataset_info', type=str, help='saved dataset info')
- parser.add_argument('--rhyme_info', type=str, help='saved dataset rhyme info, for a ckpt with task==rhyme')
-
- # TRAINING
- parser.add_argument('--batch_size', type=int, default=128)
- parser.add_argument('--epochs', type=int, default=100)
- parser.add_argument('--epoch_max_len', type=int, default=None, help='max batches per epoch if set, for more frequent validation')
- parser.add_argument('--validation_freq', type=int, default=1, help='validate every X epochs')
- parser.add_argument('--lr', type=float, default=1e-3, help='Adam learning rate')
- parser.add_argument('--seed', type=int, default=1, help='random seed')
- parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda'])
- parser.add_argument('--num_workers', type=int, default=20, help='num workers for data loader')
- parser.add_argument('--evaluate', action='store_true', default=False)
- parser.add_argument('--debug', action='store_true', default=False)
-
- # PRINTING
- parser.add_argument('--train_print_freq', type=int, default=100, help='how often to print metrics (every X batches)')
-
- args = parser.parse_args()
-
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
- if args.evaluate:
- assert args.ckpt is not None
-
- main(args)
\ No newline at end of file
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/train/utils.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/train/utils.py
deleted file mode 100644
index dd965fc4dd2af09e445a7f625f2681460874da7a..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/train/utils.py
+++ /dev/null
@@ -1,478 +0,0 @@
-import argparse
-import glob
-import json
-import logging
-import os
-import subprocess
-import sys
-import shutil
-
-import numpy as np
-import torch
-from scipy.io.wavfile import read
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- ##################
- def go(model, bkey):
- saved_state_dict = checkpoint_dict[bkey]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-        for k, v in state_dict.items():  # the shapes the model expects
- try:
- new_state_dict[k] = saved_state_dict[k]
- if saved_state_dict[k].shape != state_dict[k].shape:
- logger.warn(
- "shape-%s-mismatch. need: %s, get: %s",
- k,
- state_dict[k].shape,
- saved_state_dict[k].shape,
- ) #
- raise KeyError
- except:
- # logger.info(traceback.format_exc())
-                logger.info("%s is not in the checkpoint", k)  # missing from the pretrained checkpoint
-                new_state_dict[k] = v  # keep the model's own randomly initialized value
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
- return model
-
- go(combd, "combd")
- model = go(sbd, "sbd")
- #############
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if (
- optimizer is not None and load_opt == 1
-    ):  ### if the optimizer state cannot be loaded (e.g. it is empty), it is re-initialized; this may also break the lr schedule update, so the caller catches this at the outermost level of the train script
- # try:
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- # except:
- # traceback.print_exc()
- logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
-
-# def load_checkpoint(checkpoint_path, model, optimizer=None):
-# assert os.path.isfile(checkpoint_path)
-# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
-# iteration = checkpoint_dict['iteration']
-# learning_rate = checkpoint_dict['learning_rate']
-# if optimizer is not None:
-# optimizer.load_state_dict(checkpoint_dict['optimizer'])
-# # print(1111)
-# saved_state_dict = checkpoint_dict['model']
-# # print(1111)
-#
-# if hasattr(model, 'module'):
-# state_dict = model.module.state_dict()
-# else:
-# state_dict = model.state_dict()
-# new_state_dict= {}
-# for k, v in state_dict.items():
-# try:
-# new_state_dict[k] = saved_state_dict[k]
-# except:
-# logger.info("%s is not in the checkpoint" % k)
-# new_state_dict[k] = v
-# if hasattr(model, 'module'):
-# model.module.load_state_dict(new_state_dict)
-# else:
-# model.load_state_dict(new_state_dict)
-# logger.info("Loaded checkpoint '{}' (epoch {})" .format(
-# checkpoint_path, iteration))
-# return model, optimizer, learning_rate, iteration
-def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
-    for k, v in state_dict.items():  # the shapes the model expects
- try:
- new_state_dict[k] = saved_state_dict[k]
- if saved_state_dict[k].shape != state_dict[k].shape:
- logger.warn(
- "shape-%s-mismatch|need-%s|get-%s",
- k,
- state_dict[k].shape,
- saved_state_dict[k].shape,
- ) #
- raise KeyError
- except:
- # logger.info(traceback.format_exc())
-            logger.info("%s is not in the checkpoint", k)  # missing from the pretrained checkpoint
-            new_state_dict[k] = v  # keep the model's own randomly initialized value
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict, strict=False)
- else:
- model.load_state_dict(new_state_dict, strict=False)
- logger.info("Loaded model weights")
-
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if (
- optimizer is not None and load_opt == 1
-    ):  ### if the optimizer state cannot be loaded (e.g. it is empty), it is re-initialized; this may also break the lr schedule update, so the caller catches this at the outermost level of the train script
- # try:
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- # except:
- # traceback.print_exc()
- logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
- return model, optimizer, learning_rate, iteration
-
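-
-def _partial_load_demo(checkpoint_path):
-    """Hedged sketch (illustrative, not part of the original file): load_checkpoint keeps the
-    model's own randomly initialized tensors for any key that is missing from, or shape-mismatched
-    in, the saved state dict instead of failing."""
-    model = torch.nn.Linear(4, 2)
-    torch.save({"model": {"weight": torch.zeros(2, 4)},      # 'bias' intentionally absent
-                "iteration": 0, "learning_rate": 1e-4, "optimizer": {}}, checkpoint_path)
-    model, _, learning_rate, iteration = load_checkpoint(checkpoint_path, model, optimizer=None)
-    return model.bias, learning_rate, iteration               # bias keeps its random initialization
-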
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at epoch {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(combd, "module"):
- state_dict_combd = combd.module.state_dict()
- else:
- state_dict_combd = combd.state_dict()
- if hasattr(sbd, "module"):
- state_dict_sbd = sbd.module.state_dict()
- else:
- state_dict_sbd = sbd.state_dict()
- torch.save(
- {
- "combd": state_dict_combd,
- "sbd": state_dict_sbd,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def summarize(
- writer,
- global_step,
- scalars={},
- histograms={},
- images={},
- audios={},
- audio_sampling_rate=22050,
-):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
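-
-# Hedged usage sketch of summarize() (names and values below are illustrative only):
-#   from torch.utils.tensorboard import SummaryWriter
-#   writer = SummaryWriter(log_dir=hps.model_dir)
-#   summarize(writer, global_step=1000,
-#             scalars={"loss/g/total": 2.3, "learning_rate": 1e-4},
-#             images={"slice/mel_org": plot_spectrogram_to_numpy(mel[0].numpy())})
-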
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- logger.debug(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(
- alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
- )
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)  # np.fromstring is deprecated
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding="utf-8") as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- """
-    todo:
-      the last group of seven CLI args:
-        save frequency, total epochs                         done
-        batch size                                           done
-        pretrainG, pretrainD                                 done
-        GPU ids: os.en["CUDA_VISIBLE_DEVICES"]               done
-        if_latest                                            done
-        model: if_f0                                         done
-        sample rate: pick the matching config automatically  done
-        whether to cache the dataset in GPU memory: if_cache_data_in_gpu   done
-
-      -m:
-        determine the training_files path automatically; replace hps.data.training_files in train_nsf_load_pretrain.py   done
-        -c is no longer needed
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-se",
- "--save_every_epoch",
- type=int,
- required=True,
- help="checkpoint save frequency (epoch)",
- )
- parser.add_argument(
- "-te", "--total_epoch", type=int, required=True, help="total_epoch"
- )
- parser.add_argument(
-        "-pg", "--pretrainG", type=str, default="", help="Pretrained Generator path"
- )
- parser.add_argument(
-        "-pd", "--pretrainD", type=str, default="", help="Pretrained Discriminator path"
- )
- parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -")
- parser.add_argument(
- "-bs", "--batch_size", type=int, required=True, help="batch size"
- )
- parser.add_argument(
- "-e", "--experiment_dir", type=str, required=True, help="experiment dir"
- ) # -m
- parser.add_argument(
- "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
- )
- parser.add_argument(
- "-sw",
- "--save_every_weights",
- type=str,
- default="0",
- help="save the extracted model in weights directory when saving checkpoints",
- )
- parser.add_argument(
- "-v", "--version", type=str, required=True, help="model version"
- )
- parser.add_argument(
- "-f0",
- "--if_f0",
- type=int,
- required=True,
- help="use f0 as one of the inputs of the model, 1 or 0",
- )
- parser.add_argument(
- "-l",
- "--if_latest",
- type=int,
- required=True,
- help="if only save the latest G/D pth file, 1 or 0",
- )
- parser.add_argument(
- "-c",
- "--if_cache_data_in_gpu",
- type=int,
- required=True,
- help="if caching the dataset in GPU memory, 1 or 0",
- )
-
- args = parser.parse_args()
- name = args.experiment_dir
- experiment_dir = os.path.join("./logs", args.experiment_dir)
-
- config_save_path = os.path.join(experiment_dir, "config.json")
- with open(config_save_path, "r") as f:
- config = json.load(f)
-
- hparams = HParams(**config)
- hparams.model_dir = hparams.experiment_dir = experiment_dir
- hparams.save_every_epoch = args.save_every_epoch
- hparams.name = name
- hparams.total_epoch = args.total_epoch
- hparams.pretrainG = args.pretrainG
- hparams.pretrainD = args.pretrainD
- hparams.version = args.version
- hparams.gpus = args.gpus
- hparams.train.batch_size = args.batch_size
- hparams.sample_rate = args.sample_rate
- hparams.if_f0 = args.if_f0
- hparams.if_latest = args.if_latest
- hparams.save_every_weights = args.save_every_weights
- hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
- hparams.data.training_files = "%s/filelist.txt" % experiment_dir
- return hparams
-
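-
-# Hedged example invocation of a training entry point that calls get_hparams()
-# (the script name, paths and values below are illustrative only):
-#   python train.py -e my-exp -sr 40k -f0 1 -bs 8 -te 200 -se 10 \
-#       -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 0 -c 0 -sw 1 -v v2 -g 0
-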
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warn(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warn(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
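-
-
-def _hparams_demo():
-    """Hedged sketch (illustrative, not part of the original file): HParams exposes nested
-    config values through both attribute-style and dict-style access."""
-    hps = HParams(train={"batch_size": 8}, data={"sampling_rate": 40000})
-    assert hps.train.batch_size == 8
-    assert hps["data"].sampling_rate == 40000
-    assert "train" in hps
-    return hps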
diff --git a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models_onnx.py b/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models_onnx.py
deleted file mode 100644
index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000
--- a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,819 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  ### the % 1 here means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  ##### applying % 1 here would prevent the cumsum below from being optimized further
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
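-
-def _sine_gen_demo():
-    """Hedged sketch (illustrative, not part of the original file): build a harmonic source
-    from a constant 220 Hz f0 track at 40 kHz with an upsampling factor of 400."""
-    gen = SineGen(samp_rate=40000, harmonic_num=2)
-    f0 = torch.full((1, 100), 220.0)        # (batch, frames); zeros would mark unvoiced frames
-    sine, uv, noise = gen(f0, upp=400)      # sine: (1, 100 * 400, harmonic_num + 1)
-    return sine.shape, uv.shape
-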
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that the amplitude of noise in unvoiced segments is decided
-        by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMsNSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- version,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- if version == "v1":
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- else:
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- self.speaker_map = None
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def construct_spkmixmap(self, n_speaker):
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
- for i in range(n_speaker):
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
- self.speaker_map = self.speaker_map.unsqueeze(0)
-
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- else:
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
-
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
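construct_spkmixmap and the speaker_map branch of forward blend several speaker embeddings with per-speaker weights instead of conditioning on a single hard id. Stripped of the broadcasting above, the operation reduces to roughly the following (the sizes and mix weights are made up):

import torch
import torch.nn as nn

n_speaker, gin_channels = 4, 256
emb_g = nn.Embedding(n_speaker, gin_channels)

speaker_map = emb_g(torch.arange(n_speaker))     # (n_speaker, gin_channels), one row per voice
mix = torch.tensor([0.7, 0.3, 0.0, 0.0])         # hypothetical blend weights over the speakers
g = (mix.unsqueeze(1) * speaker_map).sum(dim=0)  # weighted sum of speaker embeddings
g = g.view(1, gin_channels, 1)                   # (B, H, 1), the layout the decoder is conditioned on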
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
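The core of DiscriminatorP is the 1d-to-2d fold in forward: pad the waveform to a multiple of the period, then reshape it so each row holds samples exactly one period apart, which lets the (kernel_size, 1) convolutions look for periodic structure. A self-contained illustration with an arbitrary period:

import torch
import torch.nn.functional as F

period = 5
x = torch.randn(2, 1, 16003)                    # (batch, channels, samples)
n_pad = (period - x.shape[-1] % period) % period
x = F.pad(x, (0, n_pad), "reflect")
x2d = x.view(2, 1, -1, period)                  # (batch, channels, samples // period, period)
print(x2d.shape)                                # torch.Size([2, 1, 3201, 5])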
diff --git a/spaces/Epitech/Scarecrow/original_app/backend.py b/spaces/Epitech/Scarecrow/original_app/backend.py
deleted file mode 100644
index 6eed9e76fa1c65bbbedf5fb73d1948306a649031..0000000000000000000000000000000000000000
--- a/spaces/Epitech/Scarecrow/original_app/backend.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import cv2
-import numpy as np
-import socket
-import pickle
-import struct
-
-# Load YOLO model
-net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")
-classes = []
-with open("coco.names", "r") as f:
- classes = [line.strip() for line in f.readlines()]
-
-resolved_label = ''
-
-# Set up socket
-HOST = ''
-PORT = 8089
-s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-print('Socket created')
-s.bind((HOST, PORT))
-print('Socket bind complete')
-s.listen(10)
-print('Socket now listening')
-
-# Accept connections
-conn, addr = s.accept()
-
-# Receive and process frames
-data = b''
-payload_size = struct.calcsize("L")
-while True:
- # Retrieve message size
- while len(data) < payload_size:
- data += conn.recv(4096)
- packed_msg_size = data[:payload_size]
- data = data[payload_size:]
- msg_size = struct.unpack("L", packed_msg_size)[0]
-
- # Retrieve all data based on message size
- while len(data) < msg_size:
- data += conn.recv(4096)
- frame_data = data[:msg_size]
- data = data[msg_size:]
-
- # Extract frame
- frame = pickle.loads(frame_data)
-
- # Run YOLO on frame
- blob = cv2.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
- net.setInput(blob)
- outputs = net.forward(net.getUnconnectedOutLayersNames())
- boxes = []
- confidences = []
- class_ids = []
- for output in outputs:
- for detection in output:
- scores = detection[5:]
- class_id = np.argmax(scores)
- confidence = scores[class_id]
- if confidence > 0.5:
- center_x = int(detection[0] * frame.shape[1])
- center_y = int(detection[1] * frame.shape[0])
- w = int(detection[2] * frame.shape[1])
- h = int(detection[3] * frame.shape[0])
- x = int(center_x - w/2)
- y = int(center_y - h/2)
- boxes.append([x, y, w, h])
- confidences.append(float(confidence))
- class_ids.append(class_id)
- indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
- if len(indexes) > 0:
- for i in indexes.flatten():
- resolved_label = classes[class_ids[i]]
- print(resolved_label)
-
- # Display frame
- cv2.imshow('frame', frame)
- cv2.waitKey(1)
-
- # Send response to client
- try:
- if len(indexes) > 0:
- response = "[Scarecrow]: " + resolved_label
- else:
- response = "[Scarecrow]: NONE"
- except IndexError:
- response = "[Scarecrow]: ERROR"
- conn.sendall(response.encode())
\ No newline at end of file
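For context, a matching client would pickle each frame and prefix it with its size packed in the same native "L" format the server unpacks, so both ends agree on the framing. A hedged sketch of such a client (the host, port and dummy frame are placeholders, and the native "L" size must match across machines):

import pickle
import socket
import struct

import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured camera frame
payload = pickle.dumps(frame)

with socket.create_connection(("127.0.0.1", 8089)) as sock:
    sock.sendall(struct.pack("L", len(payload)) + payload)   # length prefix, then pickled frame
    print(sock.recv(1024).decode())                          # e.g. "[Scarecrow]: NONE"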
diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/pipeline_loader.py b/spaces/EsoCode/text-generation-webui/extensions/multimodal/pipeline_loader.py
deleted file mode 100644
index 8fcd0a9b410fbc44a51941e0a87b294de871ef8b..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/pipeline_loader.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import traceback
-from importlib import import_module
-from pathlib import Path
-from typing import Tuple
-
-from extensions.multimodal.abstract_pipeline import AbstractMultimodalPipeline
-from modules import shared
-from modules.logging_colors import logger
-
-
-def _get_available_pipeline_modules():
- pipeline_path = Path(__file__).parent / 'pipelines'
- modules = [p for p in pipeline_path.iterdir() if p.is_dir()]
- return [m.name for m in modules if (m / 'pipelines.py').exists()]
-
-
-def load_pipeline(params: dict) -> Tuple[AbstractMultimodalPipeline, str]:
- pipeline_modules = {}
- available_pipeline_modules = _get_available_pipeline_modules()
- for name in available_pipeline_modules:
- try:
- pipeline_modules[name] = import_module(f'extensions.multimodal.pipelines.{name}.pipelines')
-        except Exception:
- logger.warning(f'Failed to get multimodal pipelines from {name}')
- logger.warning(traceback.format_exc())
-
- if shared.args.multimodal_pipeline is not None:
- for k in pipeline_modules:
- if hasattr(pipeline_modules[k], 'get_pipeline'):
- pipeline = getattr(pipeline_modules[k], 'get_pipeline')(shared.args.multimodal_pipeline, params)
- if pipeline is not None:
- return (pipeline, k)
- else:
- model_name = shared.args.model.lower()
- for k in pipeline_modules:
- if hasattr(pipeline_modules[k], 'get_pipeline_from_model_name'):
- pipeline = getattr(pipeline_modules[k], 'get_pipeline_from_model_name')(model_name, params)
- if pipeline is not None:
- return (pipeline, k)
-
- available = []
- for k in pipeline_modules:
- if hasattr(pipeline_modules[k], 'available_pipelines'):
- pipelines = getattr(pipeline_modules[k], 'available_pipelines')
- available += pipelines
-
- if shared.args.multimodal_pipeline is not None:
- log = f'Multimodal - ERROR: Failed to load multimodal pipeline "{shared.args.multimodal_pipeline}", available pipelines are: {available}.'
- else:
- log = f'Multimodal - ERROR: Failed to determine multimodal pipeline for model {shared.args.model}, please select one manually using --multimodal-pipeline [PIPELINE]. Available pipelines are: {available}.'
- logger.critical(f'{log} Please specify a correct pipeline, or disable the extension')
- raise RuntimeError(f'{log} Please specify a correct pipeline, or disable the extension')
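The loader only relies on three module-level attributes inside each pipelines.py: available_pipelines, get_pipeline, and get_pipeline_from_model_name. A hypothetical skeleton that would satisfy it (all names below are illustrative; a real pipeline subclasses AbstractMultimodalPipeline):

# extensions/multimodal/pipelines/<name>/pipelines.py (hypothetical)
available_pipelines = ['example-pipeline']

class ExamplePipeline:  # stand-in; a real implementation subclasses AbstractMultimodalPipeline
    def __init__(self, params):
        self.params = params

def get_pipeline(name, params):
    return ExamplePipeline(params) if name == 'example-pipeline' else None

def get_pipeline_from_model_name(model_name, params):
    return ExamplePipeline(params) if 'example' in model_name.lower() else None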
diff --git a/spaces/EuroPython2022/clickbaitonator/fudge/predict_formality.py b/spaces/EuroPython2022/clickbaitonator/fudge/predict_formality.py
deleted file mode 100644
index 5cd409262ce2880724ab7d8c736fa985a1eefc28..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/clickbaitonator/fudge/predict_formality.py
+++ /dev/null
@@ -1,404 +0,0 @@
-import os
-import random
-import time
-import pickle
-import math
-from argparse import ArgumentParser
-
-from typing import Iterable, List, Optional, Tuple
-
-from tqdm import tqdm
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed, GPT2Tokenizer, GPT2Model, MarianTokenizer, MarianMTModel
-from torch import Tensor
-
-from data import Dataset
-from model import Model
-from util import save_checkpoint, ProgressMeter, AverageMeter, num_params
-from constants import *
-
-def main(args):
- with open(args.dataset_info, 'rb') as rf:
- dataset_info = pickle.load(rf)
- tokenizer = MarianTokenizer.from_pretrained(args.model_string)
- tokenizer.add_special_tokens({'pad_token': PAD_TOKEN})
- pad_id = tokenizer.encode(PAD_TOKEN)[0]
- model = MarianMTModel.from_pretrained(args.model_string, return_dict=True).to(args.device)
- model.eval()
-
- checkpoint = torch.load(args.ckpt, map_location=args.device)
- model_args = checkpoint['args']
- conditioning_model = Model(model_args, pad_id, len(dataset_info.index2word)) # no need to get the glove embeddings when reloading since they're saved in model ckpt anyway
- conditioning_model.load_state_dict(checkpoint['state_dict'])
- conditioning_model = conditioning_model.to(args.device)
- conditioning_model.eval()
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.ckpt, checkpoint['epoch']))
- print('num params', num_params(conditioning_model))
-
- while True:
- results = predict_formality(model,
- tokenizer,
- conditioning_model,
- [args.input_text],
- dataset_info,
- precondition_topk=args.precondition_topk,
- do_sample=args.do_sample,
- length_cutoff=args.length_cutoff,
- condition_lambda=args.condition_lambda,
- device=args.device)
- print(results)
- import pdb; pdb.set_trace()
-
-
-def predict_formality(model, tokenizer, conditioning_model, input_text, dataset_info, precondition_topk=200, do_sample=False, length_cutoff=512, condition_lambda=1.0, device='cuda'):
- with torch.no_grad():
- batch_size = len(input_text)
-
- # assumes initially all same length.
-        # encode each input word x_i (i in [seq]) to its respective embedding
- encoded_input = [tokenizer.encode(it, return_tensors='pt').to(device) for it in input_text] # batch x seq
- encoded_input = torch.cat(encoded_input, dim=0)
-
- input_ids = torch.LongTensor([[58100]]).to(device)
- cur_len = 1
- max_length = length_cutoff
- min_length = 0
- temperature = 1.0
- top_k = 50
- top_p = 1.0
- repetition_penalty = 1.0
- no_repeat_ngram_size = 0
- bad_words_ids = [[58100]]
- pad_token_id = 58100
- eos_token_id = 0
- effective_batch_size = batch_size
- attention_mask = encoded_input.new_ones(encoded_input.shape)
- use_cache = True
- model_specific_kwargs = {'encoder_outputs': model.get_encoder()(encoded_input, attention_mask=attention_mask)}
-
- output = _generate_no_beam_search(model,
- conditioning_model,
- condition_lambda,
- precondition_topk,
- input_ids,
- cur_len,
- max_length,
- min_length,
- do_sample,
- temperature,
- top_k,
- top_p,
- repetition_penalty,
- no_repeat_ngram_size,
- bad_words_ids,
- pad_token_id,
- eos_token_id,
- batch_size,
- attention_mask,
- use_cache,
- model_specific_kwargs)
-
- return [tokenizer.decode(s[1:]) for s in output] # 1: to delete the pad token
-
-
-# hack of code from transformers/generation_utils.py
-# to get our conditioning
-def postprocess_next_token_scores(
- model,
- scores,
- input_ids,
- no_repeat_ngram_size,
- bad_words_ids,
- cur_len,
- min_length,
- max_length,
- eos_token_id,
- repetition_penalty,
- batch_size,
- num_beams,
-):
- # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
- if repetition_penalty != 1.0:
- model.enforce_repetition_penalty_(
- scores,
- batch_size,
- num_beams,
- input_ids,
- repetition_penalty,
- )
-
- # set eos token prob to zero if min_length is not reached
- if eos_token_id is not None and cur_len < min_length:
- scores[:, eos_token_id] = -float("inf")
-
- if no_repeat_ngram_size > 0:
- # calculate a list of banned tokens to prevent repetitively generating the same ngrams
- num_batch_hypotheses = batch_size * num_beams
- # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
- banned_batch_tokens = calc_banned_ngram_tokens(
- input_ids, num_batch_hypotheses, no_repeat_ngram_size, cur_len
- )
- for i, banned_tokens in enumerate(banned_batch_tokens):
- scores[i, banned_tokens] = -float("inf")
-
- if bad_words_ids is not None:
- # Exclude EOS token (already processed)
- bad_words_ids = list(filter(lambda bad_token_seq: bad_token_seq != [eos_token_id], bad_words_ids))
- # calculate a list of banned tokens according to bad words
- banned_tokens = calc_banned_bad_words_ids(input_ids.tolist(), bad_words_ids)
- # Modify the scores in place by setting the banned tokens logits to `-inf`
- set_scores_to_inf_for_banned_tokens(scores, banned_tokens)
-
- return scores
-
-def calc_banned_ngram_tokens(prev_input_ids: Tensor, num_hypos: int, no_repeat_ngram_size: int, cur_len: int) -> List[List[int]]:
- """Copied from fairseq for no_repeat_ngram in beam_search"""
- if cur_len + 1 < no_repeat_ngram_size:
- # return no banned tokens if we haven't generated no_repeat_ngram_size tokens yet
- return [[] for _ in range(num_hypos)]
- generated_ngrams = [{} for _ in range(num_hypos)]
- for idx in range(num_hypos):
- gen_tokens = prev_input_ids[idx].tolist()
- generated_ngram = generated_ngrams[idx]
- for ngram in zip(*[gen_tokens[i:] for i in range(no_repeat_ngram_size)]):
- prev_ngram_tuple = tuple(ngram[:-1])
- generated_ngram[prev_ngram_tuple] = generated_ngram.get(prev_ngram_tuple, []) + [ngram[-1]]
-
- def _get_generated_ngrams(hypo_idx):
- # Before decoding the next token, prevent decoding of ngrams that have already appeared
- start_idx = cur_len + 1 - no_repeat_ngram_size
- ngram_idx = tuple(prev_input_ids[hypo_idx, start_idx:cur_len].tolist())
- return generated_ngrams[hypo_idx].get(ngram_idx, [])
-
- banned_tokens = [_get_generated_ngrams(hypo_idx) for hypo_idx in range(num_hypos)]
- return banned_tokens
-
-
-def calc_banned_bad_words_ids(prev_input_ids: Iterable[int], bad_words_ids: Iterable[int]) -> Iterable[int]:
- banned_tokens = []
-
- def _tokens_match(prev_tokens, tokens):
- if len(tokens) == 0:
- # if bad word tokens is just one token always ban it
- return True
- if len(tokens) > len(prev_tokens):
- # if bad word tokens are longer than prev tokens they can't be equal
- return False
-
- if prev_tokens[-len(tokens) :] == tokens:
- # if tokens match
- return True
- else:
- return False
-
- for prev_input_ids_slice in prev_input_ids:
- banned_tokens_slice = []
-
- for banned_token_seq in bad_words_ids:
- assert len(banned_token_seq) > 0, "Banned words token sequences {} cannot have an empty list".format(
- bad_words_ids
- )
-
- if _tokens_match(prev_input_ids_slice, banned_token_seq[:-1]) is False:
- # if tokens do not match continue
- continue
-
- banned_tokens_slice.append(banned_token_seq[-1])
-
- banned_tokens.append(banned_tokens_slice)
-
- return banned_tokens
-
-def set_scores_to_inf_for_banned_tokens(scores: torch.Tensor, banned_tokens: List[List[int]]) -> None:
-    """Modifies the scores in place by setting the banned token positions to `-inf`. `banned_tokens`
-    holds one list of token ids per batch element; internally these are converted to
-    [[batch index, vocabulary position], ...] coordinates before building the mask.
- Args:
- scores: logits distribution of shape (batch size, vocabulary size)
- banned_tokens: list of list of tokens to ban of length (batch_size)
- """
- banned_mask_list = []
- for idx, batch_banned_tokens in enumerate(banned_tokens):
- for token in batch_banned_tokens:
- banned_mask_list.append([idx, token])
- if not banned_mask_list:
- return
- banned_mask = torch.LongTensor(banned_mask_list)
- indices = torch.ones(len(banned_mask))
- # A sparse tensor is generated from a list of coordinates: [[0, 1], [0, 2], [2, 0]]. A conversion to dense tensor generates:
- # [ 0 1 1 ]
- # [ 0 0 0 ]
- # [ 1 0 0 ]
-
- banned_mask = torch.sparse.LongTensor(banned_mask.t(), indices, scores.size()).to(scores.device).to_dense().bool()
- scores.masked_fill_(banned_mask, -float("inf"))
-
-def _generate_no_beam_search(
- model,
- conditioning_model,
- condition_lambda,
- precondition_topk,
- input_ids,
- cur_len,
- max_length,
- min_length,
- do_sample,
- temperature,
- top_k,
- top_p,
- repetition_penalty,
- no_repeat_ngram_size,
- bad_words_ids,
- pad_token_id,
- eos_token_id,
- batch_size,
- attention_mask,
- use_cache,
- model_kwargs,
- ):
- """Generate sequences for each example without beam search (num_beams == 1).
-    All returned sequences are generated independently.
- """
- # length of generated sentences / unfinished sentences
- unfinished_sents = input_ids.new(batch_size).fill_(1)
- sent_lengths = input_ids.new(batch_size).fill_(max_length)
- past = None
- while cur_len < max_length:
- model_inputs = model.prepare_inputs_for_generation(
- input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs
- )
-
- outputs = model(**model_inputs, return_dict=True)
- next_token_logits = outputs.logits[:, -1, :]
-
- # scores = model.postprocess_next_token_scores(
- # scores=next_token_logits,
- # input_ids=input_ids,
- # no_repeat_ngram_size=no_repeat_ngram_size,
- # bad_words_ids=bad_words_ids,
- # cur_len=cur_len,
- # min_length=min_length,
- # max_length=max_length,
- # eos_token_id=eos_token_id,
- # repetition_penalty=repetition_penalty,
- # batch_size=batch_size,
- # num_beams=1,
- # )
-
- scores = postprocess_next_token_scores(
- model=model,
- scores=next_token_logits,
- input_ids=input_ids,
- no_repeat_ngram_size=no_repeat_ngram_size,
- bad_words_ids=bad_words_ids,
- cur_len=cur_len,
- min_length=min_length,
- max_length=max_length,
- eos_token_id=eos_token_id,
- repetition_penalty=repetition_penalty,
- batch_size=batch_size,
- num_beams=1,
- )
-
- # if model has past, then set the past variable to speed up decoding
- if "past_key_values" in outputs:
- past = outputs.past_key_values
- elif "mems" in outputs:
- past = outputs.mems
-
- top_logits, top_indices = scores.topk(precondition_topk, dim=1) # batch x topk
- tplus1_candidates = torch.cat([input_ids.unsqueeze(1).expand(-1, precondition_topk, -1), top_indices.unsqueeze(2)], dim=2)[:, :, 1:] # batch x topk x seq+1, with pad dropped
- expanded_lengths = torch.LongTensor([[cur_len for _ in range(precondition_topk)] for _ in range(batch_size)]).to(scores.device)
- if condition_lambda == 0:
- condition_logits = torch.zeros_like(top_logits).float()
- else:
- condition_logits = conditioning_model(tplus1_candidates.flatten(0, 1), # batch*topk x seq+1
- expanded_lengths.flatten(0, 1), # batch*topk
- None,
- None,
- None)
- condition_logits = condition_logits.view(batch_size, precondition_topk, -1)[:, :, -1] # batch x topk of last formality pred
- condition_logits = condition_logits - torch.log(1 + torch.exp(condition_logits)) # get correct log probs
- # condition_logits = - torch.log(1 + torch.exp(condition_logits)) # for informal
- full_logits = top_logits + condition_lambda * condition_logits
- if do_sample:
- raise NotImplementedError
- else:
- # Greedy decoding
- next_token = top_indices[torch.arange(batch_size).to(top_indices.device), torch.argmax(full_logits, dim=-1)]
-
- # if do_sample:
- # # Temperature (higher temperature => more likely to sample low probability tokens)
- # if temperature != 1.0:
- # scores = scores / temperature
- # # Top-p/top-k filtering
- # next_token_logscores = top_k_top_p_filtering(scores, top_k=top_k, top_p=top_p)
- # # Sample
- # probs = F.softmax(next_token_logscores, dim=-1)
- # next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
- # else:
- # # Greedy decoding
- # next_token = torch.argmax(next_token_logits, dim=-1)
-
- # update generations and finished sentences
- if eos_token_id is not None:
- # pad finished sentences if eos_token_id exist
- tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
- else:
- tokens_to_add = next_token
-
- # add token and increase length by one
- input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
- cur_len = cur_len + 1
-
- if eos_token_id is not None:
- eos_in_sents = tokens_to_add == eos_token_id
- # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
- is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
- sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
- # unfinished_sents is set to zero if eos in sentence
- unfinished_sents.mul_((~eos_in_sents).long())
-
-        # stop when there is a </s> in each sentence, or if we exceed the maximum length
- if unfinished_sents.max() == 0:
- break
-
- # extend attention_mask for new generated input if only decoder
-        # extend attention_mask for the newly generated token if the model is decoder-only
- attention_mask = torch.cat(
- [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
- )
-
- return input_ids
-
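The decisive step in the loop above is the re-ranking of the LM's top-k candidates: the conditioning model's logits are turned into log-probabilities with a log-sigmoid and added, scaled by condition_lambda, to the LM logits. A toy version of just that step, with random stand-in tensors in place of the real models:

import torch

lm_logits = torch.randn(1, 50000)                     # stand-in next-token logits
top_logits, top_indices = lm_logits.topk(200, dim=1)  # precondition_topk candidates
cond_logits = torch.randn(1, 200)                     # stand-in conditioning-model outputs
cond_logprobs = cond_logits - torch.log1p(torch.exp(cond_logits))  # log sigmoid, as above
full_logits = top_logits + 1.0 * cond_logprobs        # condition_lambda = 1.0
next_token = top_indices[0, full_logits.argmax(dim=-1)]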
-if __name__=='__main__':
- parser = ArgumentParser()
-
- # DATA
- parser.add_argument('--ckpt', type=str, required=True)
- parser.add_argument('--dataset_info', type=str, required=True, help='saved dataset info')
- parser.add_argument('--model_string', type=str, default='Helsinki-NLP/opus-mt-es-en')
-
- parser.add_argument('--input_text', type=str, default=None, required=True, help='text to run pred on')
-
- parser.add_argument('--precondition_topk', type=int, default=200, help='consider top k outputs from gpt at each step before conditioning and re-pruning')
- parser.add_argument('--do_sample', action='store_true', default=False, help='sample instead of greedy')
- parser.add_argument('--condition_lambda', type=float, default=1.0, help='lambda weight on conditioning model')
- parser.add_argument('--length_cutoff', type=int, default=512, help='max length')
-
- parser.add_argument('--seed', type=int, default=1, help='random seed')
- parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda'])
- parser.add_argument('--debug', action='store_true', default=False)
-
- args = parser.parse_args()
-
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
-
- main(args)
-
-
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py
deleted file mode 100644
index 6078bb98cacc04da23dcb7a661047902e0adefb3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = './vfnet_r50_fpn_1x_coco.py'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(
- type='Resize',
- img_scale=[(1333, 480), (1333, 960)],
- multiscale_mode='range',
- keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# learning policy
-lr_config = dict(step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/builder.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/builder.py
deleted file mode 100644
index d79b448ebca9f2b21d455046623172c48c5c3ef0..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/anchor/builder.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from mmcv.utils import Registry, build_from_cfg
-
-ANCHOR_GENERATORS = Registry('Anchor generator')
-
-
-def build_anchor_generator(cfg, default_args=None):
- return build_from_cfg(cfg, ANCHOR_GENERATORS, default_args)
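This registry is what lets detector configs pick an anchor generator by name. A hedged usage sketch, assuming the stock AnchorGenerator that upstream mmdet registers here:

from mmdet.core.anchor import build_anchor_generator

anchor_generator = build_anchor_generator(dict(
    type='AnchorGenerator',
    scales=[8],
    ratios=[0.5, 1.0, 2.0],
    strides=[4, 8, 16, 32, 64]))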
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/accuracy.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/accuracy.py
deleted file mode 100644
index 789a2240a491289c5801b6690116e8ca657d004f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/accuracy.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import mmcv
-import torch.nn as nn
-
-
-@mmcv.jit(coderize=True)
-def accuracy(pred, target, topk=1, thresh=None):
- """Calculate accuracy according to the prediction and target.
-
- Args:
- pred (torch.Tensor): The model prediction, shape (N, num_class)
- target (torch.Tensor): The target of each prediction, shape (N, )
-        topk (int | tuple[int], optional): If the predictions in ``topk``
-            match the target, the predictions will be regarded as
-            correct ones. Defaults to 1.
-        thresh (float, optional): If not None, predictions with scores under
-            this threshold are considered incorrect. Defaults to None.
-
- Returns:
- float | tuple[float]: If the input ``topk`` is a single integer,
- the function will return a single float as accuracy. If
- ``topk`` is a tuple containing multiple integers, the
- function will return a tuple containing accuracies of
- each ``topk`` number.
- """
- assert isinstance(topk, (int, tuple))
- if isinstance(topk, int):
- topk = (topk, )
- return_single = True
- else:
- return_single = False
-
- maxk = max(topk)
- if pred.size(0) == 0:
- accu = [pred.new_tensor(0.) for i in range(len(topk))]
- return accu[0] if return_single else accu
- assert pred.ndim == 2 and target.ndim == 1
- assert pred.size(0) == target.size(0)
- assert maxk <= pred.size(1), \
- f'maxk {maxk} exceeds pred dimension {pred.size(1)}'
- pred_value, pred_label = pred.topk(maxk, dim=1)
- pred_label = pred_label.t() # transpose to shape (maxk, N)
- correct = pred_label.eq(target.view(1, -1).expand_as(pred_label))
- if thresh is not None:
- # Only prediction values larger than thresh are counted as correct
- correct = correct & (pred_value > thresh).t()
- res = []
- for k in topk:
- correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
- res.append(correct_k.mul_(100.0 / pred.size(0)))
- return res[0] if return_single else res
-
-
-class Accuracy(nn.Module):
-
- def __init__(self, topk=(1, ), thresh=None):
- """Module to calculate the accuracy.
-
- Args:
- topk (tuple, optional): The criterion used to calculate the
- accuracy. Defaults to (1,).
- thresh (float, optional): If not None, predictions with scores
-                under this threshold are considered incorrect. Defaults to None.
- """
- super().__init__()
- self.topk = topk
- self.thresh = thresh
-
- def forward(self, pred, target):
- """Forward function to calculate accuracy.
-
- Args:
- pred (torch.Tensor): Prediction of models.
- target (torch.Tensor): Target for each prediction.
-
- Returns:
- tuple[float]: The accuracies under different topk criterions.
- """
- return accuracy(pred, target, self.topk, self.thresh)
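A quick sanity check of the function above (assuming this module is importable): two samples and three classes, where only the first sample is ranked correctly, so both top-1 and top-2 accuracy come out to 50%:

import torch

pred = torch.tensor([[0.1, 0.7, 0.2],
                     [0.6, 0.3, 0.1]])
target = torch.tensor([1, 2])
print(accuracy(pred, target, topk=(1, 2)))   # [tensor([50.]), tensor([50.])]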
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py
deleted file mode 100644
index 9d3b8833dc50c76f6741db5341dbf8da3402d07b..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/gfocal_loss.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import mmcv
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weighted_loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def quality_focal_loss(pred, target, beta=2.0):
- r"""Quality Focal Loss (QFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
-    <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of classification
- and quality (IoU) estimation with shape (N, C), C is the number of
- classes.
- target (tuple([torch.Tensor])): Target category label with shape (N,)
- and target quality label with shape (N,).
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- assert len(target) == 2, """target for QFL must be a tuple of two elements,
- including category label and quality label, respectively"""
- # label denotes the category id, score denotes the quality score
- label, score = target
-
- # negatives are supervised by 0 quality score
- pred_sigmoid = pred.sigmoid()
- scale_factor = pred_sigmoid
- zerolabel = scale_factor.new_zeros(pred.shape)
- loss = F.binary_cross_entropy_with_logits(
- pred, zerolabel, reduction='none') * scale_factor.pow(beta)
-
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- bg_class_ind = pred.size(1)
- pos = ((label >= 0) & (label < bg_class_ind)).nonzero().squeeze(1)
- pos_label = label[pos].long()
- # positives are supervised by bbox quality (IoU) score
- scale_factor = score[pos] - pred_sigmoid[pos, pos_label]
- loss[pos, pos_label] = F.binary_cross_entropy_with_logits(
- pred[pos, pos_label], score[pos],
- reduction='none') * scale_factor.abs().pow(beta)
-
- loss = loss.sum(dim=1, keepdim=False)
- return loss
-
-
-@mmcv.jit(derivate=True, coderize=True)
-@weighted_loss
-def distribution_focal_loss(pred, label):
- r"""Distribution Focal Loss (DFL) is from `Generalized Focal Loss: Learning
- Qualified and Distributed Bounding Boxes for Dense Object Detection
-    <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding boxes
- (before softmax) with shape (N, n+1), n is the max value of the
- integral set `{0, ..., n}` in paper.
- label (torch.Tensor): Target distance label for bounding boxes with
- shape (N,).
-
- Returns:
- torch.Tensor: Loss tensor with shape (N,).
- """
- dis_left = label.long()
- dis_right = dis_left + 1
- weight_left = dis_right.float() - label
- weight_right = label - dis_left.float()
- loss = F.cross_entropy(pred, dis_left, reduction='none') * weight_left \
- + F.cross_entropy(pred, dis_right, reduction='none') * weight_right
- return loss
-
-
-@LOSSES.register_module()
-class QualityFocalLoss(nn.Module):
- r"""Quality Focal Loss (QFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
-    Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- use_sigmoid (bool): Whether sigmoid operation is conducted in QFL.
- Defaults to True.
- beta (float): The beta parameter for calculating the modulating factor.
- Defaults to 2.0.
- reduction (str): Options are "none", "mean" and "sum".
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self,
- use_sigmoid=True,
- beta=2.0,
- reduction='mean',
- loss_weight=1.0):
- super(QualityFocalLoss, self).__init__()
- assert use_sigmoid is True, 'Only sigmoid in QFL supported now.'
- self.use_sigmoid = use_sigmoid
- self.beta = beta
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted joint representation of
- classification and quality (IoU) estimation with shape (N, C),
- C is the number of classes.
- target (tuple([torch.Tensor])): Target category label with shape
- (N,) and target quality label with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.use_sigmoid:
- loss_cls = self.loss_weight * quality_focal_loss(
- pred,
- target,
- weight,
- beta=self.beta,
- reduction=reduction,
- avg_factor=avg_factor)
- else:
- raise NotImplementedError
- return loss_cls
-
-
-@LOSSES.register_module()
-class DistributionFocalLoss(nn.Module):
- r"""Distribution Focal Loss (DFL) is a variant of `Generalized Focal Loss:
- Learning Qualified and Distributed Bounding Boxes for Dense Object
-    Detection <https://arxiv.org/abs/2006.04388>`_.
-
- Args:
- reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
- loss_weight (float): Loss weight of current loss.
- """
-
- def __init__(self, reduction='mean', loss_weight=1.0):
- super(DistributionFocalLoss, self).__init__()
- self.reduction = reduction
- self.loss_weight = loss_weight
-
- def forward(self,
- pred,
- target,
- weight=None,
- avg_factor=None,
- reduction_override=None):
- """Forward function.
-
- Args:
- pred (torch.Tensor): Predicted general distribution of bounding
- boxes (before softmax) with shape (N, n+1), n is the max value
- of the integral set `{0, ..., n}` in paper.
- target (torch.Tensor): Target distance label for bounding boxes
- with shape (N,).
- weight (torch.Tensor, optional): The weight of loss for each
- prediction. Defaults to None.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- reduction_override (str, optional): The reduction method used to
- override the original reduction method of the loss.
- Defaults to None.
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- loss_cls = self.loss_weight * distribution_focal_loss(
- pred, target, weight, reduction=reduction, avg_factor=avg_factor)
- return loss_cls
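For reference, a hedged end-to-end call of the QFL class above; the tensors are random, and the label value 3 stands in for the background index when there are three foreground classes:

import torch

pred = torch.randn(4, 3)                      # joint cls-quality logits for 4 anchors, 3 classes
label = torch.tensor([0, 2, 3, 3])            # category ids; 3 == num_classes marks background
score = torch.tensor([0.9, 0.4, 0.0, 0.0])    # IoU quality targets for the positive anchors
loss = QualityFocalLoss(use_sigmoid=True, beta=2.0)(pred, (label, score))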
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k.py
deleted file mode 100644
index df9c2aca9c7c1999d74a08a58aca5d220f7df54a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './nonlocal_r50-d8_512x512_160k_ade20k.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/losses/__init__.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/losses/__init__.py
deleted file mode 100644
index beca72045694273d63465bac2f27dbc6672271db..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/losses/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from .accuracy import Accuracy, accuracy
-from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy,
- cross_entropy, mask_cross_entropy)
-from .dice_loss import DiceLoss
-from .lovasz_loss import LovaszLoss
-from .utils import reduce_loss, weight_reduce_loss, weighted_loss
-
-__all__ = [
- 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy',
- 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss',
- 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss'
-]
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/evaluate.py b/spaces/HarryLee/eCommerceImageCaptioning/evaluate.py
deleted file mode 100644
index 2ba9aaecb23051a08fa8a98bde623b7971552c88..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/evaluate.py
+++ /dev/null
@@ -1,152 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import sys
-import json
-from itertools import chain
-
-import numpy as np
-import torch
-import torch.distributed as dist
-from fairseq import distributed_utils, options, tasks, utils
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.logging import progress_bar
-from fairseq.utils import reset_logging
-from omegaconf import DictConfig
-
-from utils import checkpoint_utils
-from utils.eval_utils import eval_step
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("ofa.evaluate")
-
-
-def apply_half(t):
- if t.dtype is torch.float32:
- return t.to(dtype=torch.half)
- return t
-
-
-def main(cfg: DictConfig):
- utils.import_user_module(cfg.common)
-
- reset_logging()
- logger.info(cfg)
-
- assert (
- cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None
- ), "Must specify batch size either with --max-tokens or --batch-size"
-
- # Fix seed for stochastic decoding
- if cfg.common.seed is not None and not cfg.generation.no_seed_provided:
- np.random.seed(cfg.common.seed)
- utils.set_torch_seed(cfg.common.seed)
-
- use_fp16 = cfg.common.fp16
- use_cuda = torch.cuda.is_available() and not cfg.common.cpu
-
- if use_cuda:
- torch.cuda.set_device(cfg.distributed_training.device_id)
-
- # Load ensemble
- overrides = eval(cfg.common_eval.model_overrides)
- logger.info("loading model(s) from {}".format(cfg.common_eval.path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- utils.split_paths(cfg.common_eval.path),
- arg_overrides=overrides,
- suffix=cfg.checkpoint.checkpoint_suffix,
- strict=(cfg.checkpoint.checkpoint_shard_count == 1),
- num_shards=cfg.checkpoint.checkpoint_shard_count,
- )
-
- # loading the dataset should happen after the checkpoint has been loaded so we can give it the saved task config
- task.load_dataset(cfg.dataset.gen_subset, task_cfg=saved_cfg.task)
-
- # Move models to GPU
- for model in models:
- model.eval()
- if use_fp16:
- model.half()
- if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
- model.cuda()
- model.prepare_for_inference_(cfg)
-
- # Load dataset (possibly sharded)
- itr = task.get_batch_iterator(
- dataset=task.dataset(cfg.dataset.gen_subset),
- max_tokens=cfg.dataset.max_tokens,
- max_sentences=cfg.dataset.batch_size,
- max_positions=utils.resolve_max_positions(
- task.max_positions(), *[m.max_positions() for m in models]
- ),
- ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test,
- required_batch_size_multiple=cfg.dataset.required_batch_size_multiple,
- seed=cfg.common.seed,
- num_shards=cfg.distributed_training.distributed_world_size,
- shard_id=cfg.distributed_training.distributed_rank,
- num_workers=cfg.dataset.num_workers,
- data_buffer_size=cfg.dataset.data_buffer_size,
- ).next_epoch_itr(shuffle=False)
- progress = progress_bar.progress_bar(
- itr,
- log_format=cfg.common.log_format,
- log_interval=cfg.common.log_interval,
- default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"),
- )
-
- # Initialize generator
- generator = task.build_generator(models, cfg.generation)
-
- results = []
- score_sum = torch.FloatTensor([0]).cuda()
- score_cnt = torch.FloatTensor([0]).cuda()
- for sample in progress:
- if "net_input" not in sample:
- continue
- sample = utils.move_to_cuda(sample) if use_cuda else sample
- sample = utils.apply_to_sample(apply_half, sample) if cfg.common.fp16 else sample
- with torch.no_grad():
- result, scores = eval_step(task, generator, models, sample)
- results += result
- score_sum += sum(scores) if scores is not None else 0
- score_cnt += len(scores) if scores is not None else 0
- progress.log({"sentences": sample["nsentences"]})
-
- gather_results = None
- if cfg.distributed_training.distributed_world_size > 1:
- gather_results = [None for _ in range(dist.get_world_size())]
- dist.all_gather_object(gather_results, results)
- dist.all_reduce(score_sum.data)
- dist.all_reduce(score_cnt.data)
- if score_cnt.item() > 0:
- logger.info("score_sum: {}, score_cnt: {}, score: {}".format(
- score_sum, score_cnt, round(score_sum.item() / score_cnt.item(), 4)
- ))
-
- if cfg.distributed_training.distributed_world_size == 1 or dist.get_rank() == 0:
- os.makedirs(cfg.common_eval.results_path, exist_ok=True)
- output_path = os.path.join(cfg.common_eval.results_path, "{}_predict.json".format(cfg.dataset.gen_subset))
- gather_results = list(chain(*gather_results)) if gather_results is not None else results
- with open(output_path, 'w') as fw:
- json.dump(gather_results, fw)
-
-
-def cli_main():
- parser = options.get_generation_parser()
- args = options.parse_args_and_arch(parser)
- cfg = convert_namespace_to_omegaconf(args)
- distributed_utils.call_main(cfg, main)
-
-
-if __name__ == "__main__":
- cli_main()
\ No newline at end of file
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/npmi.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/npmi.py
deleted file mode 100644
index 93b706d7cb07db76417a56a1348e7dd24cca0f36..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/widgets/npmi.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import gradio as gr
-import pandas as pd
-
-from widgets.widget_base import Widget
-from data_measurements.dataset_statistics import DatasetStatisticsCacheClass as dmt_cls
-import utils
-
-logs = utils.prepare_logging(__file__)
-
-
-class Npmi(Widget):
- def __init__(self):
- self.npmi_first_word = gr.Dropdown(
- render=False, label="What is the first word you want to select?"
- )
- self.npmi_second_word = gr.Dropdown(
- render=False, label="What is the second word you want to select?"
- )
- self.npmi_error_text = gr.Markdown(render=False)
- self.npmi_df = gr.HTML(render=False)
- self.sort = gr.Dropdown(label="Sort By Column", render=False)
- self.npmi_empty_text = gr.Markdown(render=False)
- self.npmi_description = gr.Markdown(render=False)
-
- @property
- def output_components(self):
- return [
- self.npmi_first_word,
- self.npmi_second_word,
- self.sort,
- self.npmi_error_text,
- self.npmi_df,
- self.npmi_description,
- self.npmi_empty_text,
- ]
-
- def render(self):
- with gr.TabItem("Word Association: nPMI"):
- self.npmi_description.render()
- self.npmi_first_word.render()
- self.npmi_second_word.render()
- self.sort.render()
- self.npmi_df.render()
- self.npmi_empty_text.render()
- self.npmi_error_text.render()
-
- def update(self, dstats: dmt_cls):
- min_vocab = dstats.min_vocab_count
- npmi_stats = dstats.npmi_obj
- available_terms = npmi_stats.avail_identity_terms
- output = {comp: gr.update(visible=False) for comp in self.output_components}
- if npmi_stats and len(available_terms) > 0:
- output[self.npmi_description] = gr.Markdown.update(
- value=self.expander_npmi_description(min_vocab), visible=True
- )
- output[self.npmi_first_word] = gr.Dropdown.update(
- choices=available_terms, value=available_terms[0], visible=True
- )
- output[self.npmi_second_word] = gr.Dropdown.update(
- choices=available_terms[::-1], value=available_terms[-1], visible=True
- )
- output[self.sort] = gr.Dropdown.update(choices=['bias', available_terms[0], available_terms[-1]],
- value='bias')
- output.update(
- self.npmi_show(available_terms[0], available_terms[-1], 'bias', dstats)
- )
- else:
- output[self.npmi_error_text] = gr.Markdown.update(
- visible=True,
- value="No words found co-occurring with both of the selected identity terms.",
- )
- return output
-
- def npmi_show(self, term1, term2, sort_col, dstats):
- npmi_stats = dstats.npmi_obj
- paired_results = npmi_stats.get_display(term1, term2)
- output = {}
- if paired_results.empty:
- output[self.npmi_empty_text] = gr.Markdown.update(
- value="""No words that co-occur enough times for results! Or there's a 🐛.
- Or we're still computing this one. 🤷""",
- visible=True,
- )
- output[self.npmi_df] = gr.DataFrame.update(visible=False)
- else:
- output[self.npmi_empty_text] = gr.Markdown.update(visible=False)
- logs.debug("Results to be shown in streamlit are")
- logs.debug(paired_results)
- s = pd.DataFrame(
- paired_results.sort_values(sort_col, ascending=False)
- )
- s.index.name = "word"
- s = s.reset_index().round(4)
- bias_col = [col for col in s.columns if col != "word"]
- # Keep the dataframe from being crazy big.
- if s.shape[0] > 10000:
-                bias_thres = max(abs(s["bias"].iloc[5000]), abs(s["bias"].iloc[-5000]))
-                logs.info(f"filtering with bias threshold: {bias_thres}")
-                s_filtered = s[s["bias"].abs() > bias_thres]
- else:
- s_filtered = s
- out_df = (
- s_filtered.style.background_gradient(subset=bias_col)
- .format(formatter="{:,.3f}", subset=bias_col)
- .set_properties(**{"text-align": "center", "width": "100em"})
- .set_caption(
- "nPMI scores between the selected identity terms and the words they both co-occur with"
- )
- )
- output[self.npmi_df] = out_df.to_html()
- return output
-
- @staticmethod
- def expander_npmi_description(min_vocab):
- return f"""
- Use this widget to identify problematic biases and stereotypes in
- your data.
-
- nPMI scores for a word help to identify potentially
- problematic associations, ranked by how close the association is.
-
- nPMI bias scores for paired words help to identify how word
-        associations are skewed between the two selected words
- ([Aka et al., 2021](https://arxiv.org/abs/2103.03417)).
-
- You can select from gender and sexual orientation
- identity terms that appear in the dataset at least {min_vocab} times.
-
- The resulting ranked words are those that co-occur with both identity terms.
-
- The more *positive* the score, the more associated the word is with
- the first identity term.
- The more *negative* the score, the more associated the word is with
- the second identity term.
-
- -----
- """
-
- def update_sort_and_npmi(self, first_word, second_word, sort_col, dstats):
- output = {self.sort: gr.Dropdown.update(choices=['bias', first_word, second_word],
- value='bias')}
- new_df = self.npmi_show(first_word, second_word, sort_col, dstats)
- output.update(new_df)
- return output
-
- def add_events(self, state: gr.State):
- self.npmi_first_word.change(
- self.update_sort_and_npmi,
- inputs=[self.npmi_first_word, self.npmi_second_word, self.sort, state],
- outputs=[self.npmi_df, self.npmi_empty_text, self.sort],
- )
- self.npmi_second_word.change(
- self.update_sort_and_npmi,
- inputs=[self.npmi_first_word, self.npmi_second_word, self.sort, state],
- outputs=[self.npmi_df, self.npmi_empty_text, self.sort],
- )
- self.sort.change(
- self.npmi_show,
- inputs=[self.npmi_first_word, self.npmi_second_word, self.sort, state],
- outputs=[self.npmi_df, self.npmi_empty_text],
- )
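The widget only displays scores that DatasetStatisticsCacheClass has already computed, but the quantity itself is compact: nPMI is PMI normalized by -log of the joint probability, and the bias score is the difference of a word's nPMI with each identity term. A rough sketch from hypothetical co-occurrence counts:

import math

def npmi(count_xy, count_x, count_y, total):
    p_xy, p_x, p_y = count_xy / total, count_x / total, count_y / total
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / -math.log(p_xy)              # normalized to roughly [-1, 1]

# bias for one co-occurring word: nPMI with term 1 minus nPMI with term 2
bias = npmi(40, 200, 500, 10_000) - npmi(25, 300, 500, 10_000)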
diff --git a/spaces/ICML2022/OFA/data/mm_data/caption_dataset.py b/spaces/ICML2022/OFA/data/mm_data/caption_dataset.py
deleted file mode 100644
index 2109b19ec0958b5a84429b412d4f62052324147c..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/data/mm_data/caption_dataset.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-from io import BytesIO
-
-import logging
-import warnings
-import string
-
-import numpy as np
-import torch
-import base64
-from torchvision import transforms
-
-from PIL import Image, ImageFile
-
-from data import data_utils
-from data.ofa_dataset import OFADataset
-
-ImageFile.LOAD_TRUNCATED_IMAGES = True
-ImageFile.MAX_IMAGE_PIXELS = None
-Image.MAX_IMAGE_PIXELS = None
-
-logger = logging.getLogger(__name__)
-warnings.filterwarnings("ignore", "(Possibly )?corrupt EXIF data", UserWarning)
-
-IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
-IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
-
-
-def collate(samples, pad_idx, eos_idx):
- if len(samples) == 0:
- return {}
-
- def merge(key):
- return data_utils.collate_tokens(
- [s[key] for s in samples],
- pad_idx,
- eos_idx=eos_idx,
- )
-
- id = np.array([s["id"] for s in samples])
- src_tokens = merge("source")
- src_lengths = torch.LongTensor([s["source"].ne(pad_idx).long().sum() for s in samples])
-
- patch_images = torch.stack([sample['patch_image'] for sample in samples], dim=0)
- patch_masks = torch.cat([sample['patch_mask'] for sample in samples])
-
- prev_output_tokens = None
- target = None
- if samples[0].get("target", None) is not None:
- target = merge("target")
- tgt_lengths = torch.LongTensor([s["target"].ne(pad_idx).long().sum() for s in samples])
- ntokens = tgt_lengths.sum().item()
-
- if samples[0].get("prev_output_tokens", None) is not None:
- prev_output_tokens = merge("prev_output_tokens")
- else:
- ntokens = src_lengths.sum().item()
-
- batch = {
- "id": id,
- "nsentences": len(samples),
- "ntokens": ntokens,
- "net_input": {
- "src_tokens": src_tokens,
- "src_lengths": src_lengths,
- "patch_images": patch_images,
- "patch_masks": patch_masks,
- "prev_output_tokens": prev_output_tokens
- },
- "target": target,
- }
-
- return batch
-
-
-class CaptionDataset(OFADataset):
- def __init__(
- self,
- split,
- dataset,
- bpe,
- src_dict,
- tgt_dict=None,
- max_src_length=128,
- max_tgt_length=30,
- patch_image_size=224,
- imagenet_default_mean_and_std=False,
- scst=False
- ):
- super().__init__(split, dataset, bpe, src_dict, tgt_dict)
- self.max_src_length = max_src_length
- self.max_tgt_length = max_tgt_length
- self.patch_image_size = patch_image_size
- self.scst = scst
-
- self.transtab = str.maketrans({key: None for key in string.punctuation})
-
- if imagenet_default_mean_and_std:
- mean = IMAGENET_DEFAULT_MEAN
- std = IMAGENET_DEFAULT_STD
- else:
- mean = [0.5, 0.5, 0.5]
- std = [0.5, 0.5, 0.5]
-
- self.patch_resize_transform = transforms.Compose([
- lambda image: image.convert("RGB"),
- transforms.Resize((patch_image_size, patch_image_size), interpolation=Image.BICUBIC),
- transforms.ToTensor(),
- transforms.Normalize(mean=mean, std=std),
- ])
-
- def __getitem__(self, index):
- uniq_id, image, caption = self.dataset[index]
-
- image = Image.open(BytesIO(base64.urlsafe_b64decode(image)))
- patch_image = self.patch_resize_transform(image)
- patch_mask = torch.tensor([True])
-
- if self.split == 'train' and not self.scst:
- caption = caption.translate(self.transtab).strip()
- caption_token_list = caption.strip().split()
- tgt_caption = ' '.join(caption_token_list[:self.max_tgt_length])
- else:
- caption = ' '.join(caption.strip().split())
- caption_list = [cap.translate(self.transtab).strip() for cap in caption.strip().split('&&')]
- tgt_caption = '&&'.join(caption_list)
- src_item = self.encode_text(" what does the image describe?")
- tgt_item = self.encode_text(" {}".format(tgt_caption))
-
- src_item = torch.cat([self.bos_item, src_item, self.eos_item])
- target_item = torch.cat([tgt_item, self.eos_item])
- prev_output_item = torch.cat([self.bos_item, tgt_item])
-
- example = {
- "id": uniq_id,
- "source": src_item,
- "patch_image": patch_image,
- "patch_mask": patch_mask,
- "target": target_item,
- "prev_output_tokens": prev_output_item
- }
- return example
-
- def collater(self, samples, pad_to_length=None):
- """Merge a list of samples to form a mini-batch.
- Args:
- samples (List[dict]): samples to collate
- Returns:
-            dict: a mini-batch with "id", "nsentences", "ntokens", "net_input" and "target" keys
- """
- return collate(samples, pad_idx=self.pad, eos_idx=self.eos)
\ No newline at end of file
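Isolated from the dataset class, the per-image preprocessing is just the Compose below, shown with the non-ImageNet mean/std branch used when imagenet_default_mean_and_std is False; a blank PIL image stands in for the base64-decoded input:

from PIL import Image
from torchvision import transforms

patch_image_size = 224
patch_resize_transform = transforms.Compose([
    lambda image: image.convert("RGB"),
    transforms.Resize((patch_image_size, patch_image_size), interpolation=Image.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

patch_image = patch_resize_transform(Image.new("RGB", (640, 480)))
print(patch_image.shape)                      # torch.Size([3, 224, 224])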
diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh
deleted file mode 100644
index 811cb63c88bb7cdd03b0a250ef2db32b5eaa50df..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/bin/bash
-
-set -u
-
-val_sets="dev_other"
-graph_name=graph
-decode_suffix=""
-decode_script="steps/decode_fmllr.sh"
-decode_args=""
-nj=60
-
-. ./cmd.sh
-. ./path.sh
-. parse_options.sh
-
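-# Usage sketch (added comment, not in the original script); options are parsed by
-# parse_options.sh before the three positional arguments are read:
-#   local/decode.sh [--nj N] [--val_sets "dev_other"] <exp_dir> <data_root> <lang_test>
-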
-set -x
-exp_dir=$1
-data_root=$2
-lang_test=$3
-
-graph=$exp_dir/$graph_name
-
-if [ ! -d $graph ]; then
- utils/mkgraph.sh $lang_test $exp_dir $graph
-fi
-
-for part in $val_sets; do
- dec_dir=$exp_dir/decode${decode_suffix}_${part}
- if [ ! -d $dec_dir ]; then
- echo "decoding $part for $exp_dir"
- $decode_script --nj $nj --cmd "$decode_cmd" $decode_args \
- $graph $data_root/$part $dec_dir &
- else
- echo "$dec_dir exists. skip"
- fi
-done
-
-wait
diff --git a/spaces/IDEA-CCNL/Ziya-v1/utils.py b/spaces/IDEA-CCNL/Ziya-v1/utils.py
deleted file mode 100644
index 7bc51115bba855faa5bb0e6a205b7f56bcbe634c..0000000000000000000000000000000000000000
--- a/spaces/IDEA-CCNL/Ziya-v1/utils.py
+++ /dev/null
@@ -1,654 +0,0 @@
-import torch
-from typing import Optional, Tuple, Union, List, Callable
-from transformers.generation.logits_process import LogitsProcessor
-from transformers.generation.beam_search import BeamSearchScorer
-from transformers.deepspeed import is_deepspeed_zero3_enabled
-from transformers.generation.utils import (
- LogitsProcessorList,
- StoppingCriteriaList,
- GenerationConfig,
- GenerationMixin,
-)
-from transformers import LlamaForCausalLM
-import warnings
-import torch.distributed as dist
-from torch import nn
-import copy
-
-
-class SteamGenerationMixin(LlamaForCausalLM):
-    # support for streaming generation
- # TODO: group_beam_search
- @torch.no_grad()
- def stream_generate(
- self,
- input_ids: Optional[torch.Tensor] = None,
- generation_config: Optional[GenerationConfig] = None,
- logits_processor: Optional[LogitsProcessorList] = None,
- stopping_criteria: Optional[StoppingCriteriaList] = None,
- prefix_allowed_tokens_fn: Optional[
- Callable[[int, torch.Tensor], List[int]]
- ] = None,
- **kwargs,
- ):
- self._reorder_cache = self.base_model._reorder_cache
-        if is_deepspeed_zero3_enabled() and dist.get_world_size() > 1:
- synced_gpus = True
- else:
- synced_gpus = False
-
- if kwargs.get("attention_mask", None) is not None:
- # concat prompt attention mask
- prefix_attention_mask = torch.ones(
- kwargs["input_ids"].shape[0], self.peft_config.num_virtual_tokens
- ).to(kwargs["input_ids"].device)
- kwargs["attention_mask"] = torch.cat(
- (prefix_attention_mask, kwargs["attention_mask"]), dim=1
- )
- if kwargs.get("position_ids", None) is not None:
- warnings.warn(
- "Position ids are not supported for parameter efficient tuning. Ignoring position ids."
- )
- kwargs["position_ids"] = None
- if kwargs.get("token_type_ids", None) is not None:
- warnings.warn(
- "Token type ids are not supported for parameter efficient tuning. Ignoring token type ids"
- )
- kwargs["token_type_ids"] = None
-
- batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
-
- if generation_config is None:
- generation_config = self.generation_config
- generation_config = copy.deepcopy(generation_config)
- model_kwargs = generation_config.update(**kwargs)
-
- bos_token_id, eos_token_id, pad_token_id = (
- generation_config.bos_token_id,
- generation_config.eos_token_id,
- generation_config.pad_token_id,
- )
-
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
-
- has_default_max_length = (
- kwargs.get("max_length") is None
- and generation_config.max_length is not None
- )
- if has_default_max_length and generation_config.max_new_tokens is None:
- warnings.warn(
- f"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. "
- "This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we"
- " recommend using `max_new_tokens` to control the maximum length of the generation.",
- UserWarning,
- )
- elif generation_config.max_new_tokens is not None:
- generation_config.max_length = (
- generation_config.max_new_tokens + input_ids_seq_length
- )
- if generation_config.min_new_tokens is not None:
- generation_config.min_length = (
- generation_config.min_new_tokens + input_ids_seq_length
- )
-
- if input_ids_seq_length >= generation_config.max_length:
- input_ids_string = (
- "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
- )
-
- # 2. Set generation parameters if not already defined
- logits_processor = (
- logits_processor if logits_processor is not None else LogitsProcessorList()
- )
- stopping_criteria = (
- stopping_criteria
- if stopping_criteria is not None
- else StoppingCriteriaList()
- )
- # 7. determine generation mode
- is_constraint_gen_mode = (
- generation_config.constraints is not None or generation_config.force_words_ids is not None
- )
-
- is_contrastive_search_gen_mode = (
- generation_config.top_k is not None
- and generation_config.top_k > 1
- and generation_config.do_sample is False
- and generation_config.penalty_alpha is not None
- and generation_config.penalty_alpha > 0
- )
-
- is_greedy_gen_mode = (
- (generation_config.num_beams == 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is False
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- # beam=1 and do_sample=True
- is_sample_gen_mode = (
- (generation_config.num_beams == 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is True
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_beam_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is False
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_beam_sample_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups == 1)
- and generation_config.do_sample is True
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- is_group_beam_gen_mode = (
- (generation_config.num_beams > 1)
- and (generation_config.num_beam_groups > 1)
- and not is_constraint_gen_mode
- and not is_contrastive_search_gen_mode
- )
- # 8. prepare distribution pre_processing samplers
- logits_processor = self._get_logits_processor(
- generation_config=generation_config,
- input_ids_seq_length=input_ids_seq_length,
- encoder_input_ids=input_ids,
- prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
- logits_processor=logits_processor,
- )
- # 9. prepare stopping criteria
- stopping_criteria = self._get_stopping_criteria(
- generation_config=generation_config, stopping_criteria=stopping_criteria
- )
- logits_warper = self._get_logits_warper(generation_config)
-
- if is_greedy_gen_mode:
- # 11. run greedy search
- return self.greedy_search(
- input_ids,
- logits_processor,
- stopping_criteria,
- generation_config,
- synced_gpus,
- **model_kwargs,
- )
- elif is_sample_gen_mode:
- # 12. expand input_ids with `num_return_sequences` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- return self.stream_sample(
- generation_config,
- input_ids,
- logits_processor,
- logits_warper,
- stopping_criteria,
- synced_gpus,
- **model_kwargs,
- )
- elif is_beam_gen_mode:
- return self.beam_search(
- generation_config,
- input_ids,
- logits_processor,
- stopping_criteria,
- synced_gpus,
- **model_kwargs,
- )
- elif is_beam_sample_gen_mode:
- # interleave input_ids with `num_beams` additional sequences per batch
- return self.beam_sample(
- input_ids,
- logits_processor,
- logits_warper,
- stopping_criteria,
- generation_config,
- synced_gpus,
- **model_kwargs,
- )
- else:
-            raise NotImplementedError('generation mode not implemented')
-
- def stream_sample(
- self,
- generation_config,
- input_ids,
- logits_processor,
- logits_warper,
- stopping_criteria,
- synced_gpus,
- **model_kwargs,
- ):
- bos_token_id, eos_token_id, pad_token_id = (
- generation_config.bos_token_id,
- generation_config.eos_token_id,
- generation_config.pad_token_id,
- )
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- eos_token_id_tensor = torch.tensor(eos_token_id).to(input_ids.device) if eos_token_id is not None else None
- # keep track of which sequences are already finished
- unfinished_sequences = torch.ones(input_ids.shape[0], dtype=torch.long, device=input_ids.device)
- this_peer_finished = False # used by synced_gpus only
-        scores = ()
- # auto-regressive generation
- while True:
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- )
- if synced_gpus and this_peer_finished:
- continue # don't waste resources running the code we don't need
- next_token_logits = outputs.logits[:, -1, :]
- # pre-process distribution
- next_token_scores = logits_processor(input_ids, next_token_logits)
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # sample
- probs = nn.functional.softmax(next_token_scores, dim=-1)
- next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
-
- # finished sentences should have their next token be a padding token
- if eos_token_id is not None:
- if pad_token_id is None:
- raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.")
- next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
-
- # update generated ids, model inputs, and length for next step
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- yield input_ids
- # torch.cuda.empty_cache()
- # if eos_token was found in one sentence, set sentence to finished
- if eos_token_id_tensor is not None:
- unfinished_sequences = unfinished_sequences.mul(
- next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0)
- )
-
- # stop when each sentence is finished, or if we exceed the maximum length
- if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
- return input_ids
-
- def empty_cache(self):
- torch.cuda.empty_cache()
-
- def beam_sample(
- self,
- input_ids,
- logits_processor,
- logits_warper,
- stopping_criteria,
- generation_config,
- synced_gpus,
- **model_kwargs,
- ):
- bos_token_id, eos_token_id, pad_token_id = (
- generation_config.bos_token_id,
- generation_config.eos_token_id,
- generation_config.pad_token_id,
- )
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- eos_token_id_tensor = torch.tensor(eos_token_id).to(input_ids.device) if eos_token_id is not None else None
- num_beams = generation_config.num_beams
- batch_size, cur_len = input_ids.shape[0], input_ids.shape[-1]
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=generation_config.num_beams,
- device=input_ids.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- num_beam_hyps_to_keep=generation_config.num_return_sequences,
- max_length=generation_config.max_length,
- )
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams * generation_config.num_return_sequences,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- scores = ()
- beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)
- beam_scores = beam_scores.view((batch_size * num_beams,))
-
- this_peer_finished = False # used by synced_gpus only
- while True:
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
- outputs = self(
- **model_inputs,
- return_dict=True,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-
- # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id`
- # cannot be generated both before and after the `nn.functional.log_softmax` operation.
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
-
- next_token_scores_processed = logits_processor(input_ids, next_token_scores)
- next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
- # Note: logits warpers are intentionally applied after adding running beam scores. On some logits warpers
-            # (like top_p) this is indifferent, but on others (like temperature) it is not. For reference, see
- # https://github.com/huggingface/transformers/pull/5420#discussion_r449779867
- next_token_scores = logits_warper(input_ids, next_token_scores)
-
- # reshape for beam search
- vocab_size = next_token_scores.shape[-1]
- next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size)
-
- probs = nn.functional.softmax(next_token_scores, dim=-1)
-
- next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
- next_token_scores = torch.gather(next_token_scores, -1, next_tokens)
-
- next_token_scores, _indices = torch.sort(next_token_scores, descending=True, dim=1)
- next_tokens = torch.gather(next_tokens, -1, _indices)
-
- next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
- next_tokens = next_tokens % vocab_size
-
- # stateless
- beam_outputs = beam_scorer.process(
- input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- beam_indices=None,
- )
- beam_scores = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
- input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
- yield input_ids
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- if model_kwargs["past_key_values"] is not None:
- model_kwargs["past_key_values"] = self._reorder_cache(model_kwargs["past_key_values"], beam_idx)
-
- # increase cur_len
- cur_len = cur_len + 1
-
- if beam_scorer.is_done or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- sequence_outputs = beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- beam_indices=None,
- )
- yield sequence_outputs["sequences"]
-
- def greedy_search(
- self,
- input_ids,
- logits_processor,
- stopping_criteria,
- generation_config,
- synced_gpus,
- **model_kwargs,
- ):
- # init values
- bos_token_id, eos_token_id, pad_token_id = (
- generation_config.bos_token_id,
- generation_config.eos_token_id,
- generation_config.pad_token_id,
- )
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- eos_token_id_tensor = torch.tensor(eos_token_id).to(input_ids.device) if eos_token_id is not None else None
- # init attention / hidden states / scores tuples
- scores = ()
- # keep track of which sequences are already finished
- unfinished_sequences = torch.ones(input_ids.shape[0], dtype=torch.long, device=input_ids.device)
- this_peer_finished = False # used by synced_gpus only
- while True:
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- # prepare model inputs
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
- # forward pass to get next token
- outputs = self(
- **model_inputs,
- return_dict=True,
- )
-
- if synced_gpus and this_peer_finished:
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
- # pre-process distribution
- next_tokens_scores = logits_processor(input_ids, next_token_logits)
- # argmax
- next_tokens = torch.argmax(next_tokens_scores, dim=-1)
- # finished sentences should have their next token be a padding token
- if eos_token_id is not None:
- if pad_token_id is None:
- raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.")
- next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences)
- # update generated ids, model inputs, and length for next step
- input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1)
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- yield input_ids
- # if eos_token was found in one sentence, set sentence to finished
- if eos_token_id_tensor is not None:
- unfinished_sequences = unfinished_sequences.mul(
- next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0)
- )
-
- # stop when each sentence is finished, or if we exceed the maximum length
- if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
- yield input_ids
-
- def beam_search(
- self,
- generation_config,
- input_ids,
- logits_processor,
- stopping_criteria,
- synced_gpus,
- **model_kwargs,
- ):
- # 10. go into beam search generation modes
- # 11. prepare beam search scorer
- bos_token_id, eos_token_id, pad_token_id = (
- generation_config.bos_token_id,
- generation_config.eos_token_id,
- generation_config.pad_token_id,
- )
- if isinstance(eos_token_id, int):
- eos_token_id = [eos_token_id]
- num_beams = generation_config.num_beams
- batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
- beam_scorer = BeamSearchScorer(
- batch_size=batch_size,
- num_beams=generation_config.num_beams,
- device=input_ids.device,
- length_penalty=generation_config.length_penalty,
- do_early_stopping=generation_config.early_stopping,
- num_beam_hyps_to_keep=generation_config.num_return_sequences,
- max_length=generation_config.max_length,
- )
- # 12. interleave input_ids with `num_beams` additional sequences per batch
- input_ids, model_kwargs = self._expand_inputs_for_generation(
- input_ids=input_ids,
- expand_size=generation_config.num_beams,
- is_encoder_decoder=self.config.is_encoder_decoder,
- **model_kwargs,
- )
- # beam_search logits
- batch_beam_size, cur_len = input_ids.shape
- if num_beams * batch_size != batch_beam_size:
- raise ValueError(
- f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}."
- )
- beam_scores = torch.zeros(
- (batch_size, num_beams), dtype=torch.float, device=input_ids.device
- )
- beam_scores[:, 1:] = -1e9
- beam_scores = beam_scores.view((batch_size * num_beams,))
- this_peer_finished = False # used by synced_gpus only
- while True:
- if synced_gpus:
- # Under synced_gpus the `forward` call must continue until all gpus complete their sequence.
- # The following logic allows an early break if all peers finished generating their sequence
- this_peer_finished_flag = torch.tensor(
- 0.0 if this_peer_finished else 1.0
- ).to(input_ids.device)
- # send 0.0 if we finished, 1.0 otherwise
- dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM)
- # did all peers finish? the reduced sum will be 0.0 then
- if this_peer_finished_flag.item() == 0.0:
- break
-
- model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
- outputs = self(
- **model_inputs,
- return_dict=True,
- output_attentions=False,
- output_hidden_states=False,
- )
-
- if synced_gpus and this_peer_finished:
- cur_len = cur_len + 1
- continue # don't waste resources running the code we don't need
-
- next_token_logits = outputs.logits[:, -1, :]
-            # hack (disabled): adjust tokens for Marian.
-            # next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len)
- next_token_scores = nn.functional.log_softmax(
- next_token_logits, dim=-1
- ) # (batch_size * num_beams, vocab_size)
- next_token_scores_processed = logits_processor(input_ids, next_token_scores)
- next_token_scores = next_token_scores_processed + beam_scores[
- :, None
- ].expand_as(next_token_scores)
-
- # reshape for beam search
- vocab_size = next_token_scores.shape[-1]
- next_token_scores = next_token_scores.view(
- batch_size, num_beams * vocab_size
- )
-
- # Sample 2 next tokens for each beam (so we have some spare tokens and match output of beam search)
- next_token_scores, next_tokens = torch.topk(
- next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True
- )
- next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
- next_tokens = next_tokens % vocab_size
- # stateless
- beam_outputs = beam_scorer.process(
- input_ids,
- next_token_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- beam_indices=None,
- )
- beam_scores = beam_outputs["next_beam_scores"]
- beam_next_tokens = beam_outputs["next_beam_tokens"]
- beam_idx = beam_outputs["next_beam_indices"]
-
- input_ids = torch.cat(
- [input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1
- )
- model_kwargs = self._update_model_kwargs_for_generation(
- outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder
- )
- if model_kwargs["past_key_values"] is not None:
- model_kwargs["past_key_values"] = self._reorder_cache(
- model_kwargs["past_key_values"], beam_idx
- )
-
- # increase cur_len
- cur_len = cur_len + 1
-
- yield input_ids
-
- if beam_scorer.is_done or stopping_criteria(input_ids, None):
- if not synced_gpus:
- break
- else:
- this_peer_finished = True
-
- final_result = beam_scorer.finalize(
- input_ids,
- beam_scores,
- next_tokens,
- next_indices,
- pad_token_id=pad_token_id,
- eos_token_id=eos_token_id,
- max_length=stopping_criteria.max_length,
- beam_indices=None,
- )
- yield final_result["sequences"]
-
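-# Illustrative usage sketch (added comment, not in the original file). Assumes a
-# tokenizer and a SteamGenerationMixin-based model are already loaded; each yielded
-# tensor is the full sequence generated so far, so the decoded text can be streamed.
-#
-#   config = GenerationConfig(max_new_tokens=64, do_sample=True, top_p=0.9)
-#   for output_ids in model.stream_generate(input_ids, generation_config=config):
-#       print(tokenizer.decode(output_ids[0], skip_special_tokens=True), end="\r")
-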
diff --git a/spaces/JPMadsen/JP_Audio/README.md b/spaces/JPMadsen/JP_Audio/README.md
deleted file mode 100644
index fe4617299df9dcf4d1e00629d2c2a9c0164daca9..0000000000000000000000000000000000000000
--- a/spaces/JPMadsen/JP_Audio/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: JP Audio
-emoji: 🔥
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/value_guided_sampling.py b/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/value_guided_sampling.py
deleted file mode 100644
index 4dd935f54d608f45c8ae69eda5a571f1bf65084b..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/experimental/rl/value_guided_sampling.py
+++ /dev/null
@@ -1,130 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import numpy as np
-import torch
-
-import tqdm
-
-from ...models.unet_1d import UNet1DModel
-from ...pipeline_utils import DiffusionPipeline
-from ...utils.dummy_pt_objects import DDPMScheduler
-
-
-class ValueGuidedRLPipeline(DiffusionPipeline):
- def __init__(
- self,
- value_function: UNet1DModel,
- unet: UNet1DModel,
- scheduler: DDPMScheduler,
- env,
- ):
- super().__init__()
- self.value_function = value_function
- self.unet = unet
- self.scheduler = scheduler
- self.env = env
- self.data = env.get_dataset()
- self.means = dict()
- for key in self.data.keys():
- try:
- self.means[key] = self.data[key].mean()
- except:
- pass
- self.stds = dict()
- for key in self.data.keys():
- try:
- self.stds[key] = self.data[key].std()
- except:
- pass
- self.state_dim = env.observation_space.shape[0]
- self.action_dim = env.action_space.shape[0]
-
- def normalize(self, x_in, key):
- return (x_in - self.means[key]) / self.stds[key]
-
- def de_normalize(self, x_in, key):
- return x_in * self.stds[key] + self.means[key]
-
- def to_torch(self, x_in):
- if type(x_in) is dict:
- return {k: self.to_torch(v) for k, v in x_in.items()}
- elif torch.is_tensor(x_in):
- return x_in.to(self.unet.device)
- return torch.tensor(x_in, device=self.unet.device)
-
- def reset_x0(self, x_in, cond, act_dim):
- for key, val in cond.items():
- x_in[:, key, act_dim:] = val.clone()
- return x_in
-
- def run_diffusion(self, x, conditions, n_guide_steps, scale):
- batch_size = x.shape[0]
- y = None
- for i in tqdm.tqdm(self.scheduler.timesteps):
- # create batch of timesteps to pass into model
- timesteps = torch.full((batch_size,), i, device=self.unet.device, dtype=torch.long)
- for _ in range(n_guide_steps):
- with torch.enable_grad():
- x.requires_grad_()
- y = self.value_function(x.permute(0, 2, 1), timesteps).sample
- grad = torch.autograd.grad([y.sum()], [x])[0]
-
- posterior_variance = self.scheduler._get_variance(i)
- model_std = torch.exp(0.5 * posterior_variance)
- grad = model_std * grad
- grad[timesteps < 2] = 0
- x = x.detach()
- x = x + scale * grad
- x = self.reset_x0(x, conditions, self.action_dim)
- prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
- # TODO: set prediction_type when instantiating the model
- x = self.scheduler.step(prev_x, i, x, predict_epsilon=False)["prev_sample"]
-
- # apply conditions to the trajectory
- x = self.reset_x0(x, conditions, self.action_dim)
- x = self.to_torch(x)
- return x, y
-
- def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1):
- # normalize the observations and create batch dimension
- obs = self.normalize(obs, "observations")
- obs = obs[None].repeat(batch_size, axis=0)
-
- conditions = {0: self.to_torch(obs)}
- shape = (batch_size, planning_horizon, self.state_dim + self.action_dim)
-
- # generate initial noise and apply our conditions (to make the trajectories start at current state)
- x1 = torch.randn(shape, device=self.unet.device)
- x = self.reset_x0(x1, conditions, self.action_dim)
- x = self.to_torch(x)
-
- # run the diffusion process
- x, y = self.run_diffusion(x, conditions, n_guide_steps, scale)
-
- # sort output trajectories by value
- sorted_idx = y.argsort(0, descending=True).squeeze()
- sorted_values = x[sorted_idx]
- actions = sorted_values[:, :, : self.action_dim]
- actions = actions.detach().cpu().numpy()
- denorm_actions = self.de_normalize(actions, key="actions")
-
- # select the action with the highest value
- if y is not None:
- selected_index = 0
- else:
- # if we didn't run value guiding, select a random action
- selected_index = np.random.randint(0, batch_size)
- denorm_actions = denorm_actions[selected_index, 0]
- return denorm_actions
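-
-# Illustrative usage sketch (added comment, not in the original file). Assumes a
-# D4RL-style `env` (providing `get_dataset()`, `reset()` and the classic 4-tuple
-# `step()`) plus pretrained `value_function`, `unet` and `scheduler` objects
-# compatible with the constructor above.
-#
-#   pipeline = ValueGuidedRLPipeline(value_function, unet, scheduler, env)
-#   obs = env.reset()
-#   action = pipeline(obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1)
-#   obs, reward, done, info = env.step(action)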
diff --git a/spaces/KOFTRFU204/AICoverGen/src/mdx.py b/spaces/KOFTRFU204/AICoverGen/src/mdx.py
deleted file mode 100644
index 448e65d45cb1272c06f3ffa015cef8abd1257d9a..0000000000000000000000000000000000000000
--- a/spaces/KOFTRFU204/AICoverGen/src/mdx.py
+++ /dev/null
@@ -1,292 +0,0 @@
-import gc
-import hashlib
-import os
-import queue
-import threading
-import warnings
-
-import librosa
-import numpy as np
-import onnxruntime as ort
-import soundfile as sf
-import torch
-from tqdm import tqdm
-
-warnings.filterwarnings("ignore")
-stem_naming = {'Vocals': 'Instrumental', 'Other': 'Instruments', 'Instrumental': 'Vocals', 'Drums': 'Drumless', 'Bass': 'Bassless'}
-
-
-class MDXModel:
- def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000):
- self.dim_f = dim_f
- self.dim_t = dim_t
- self.dim_c = 4
- self.n_fft = n_fft
- self.hop = hop
- self.stem_name = stem_name
- self.compensation = compensation
-
- self.n_bins = self.n_fft // 2 + 1
- self.chunk_size = hop * (self.dim_t - 1)
- self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device)
-
- out_c = self.dim_c
-
- self.freq_pad = torch.zeros([1, out_c, self.n_bins - self.dim_f, self.dim_t]).to(device)
-
- def stft(self, x):
- x = x.reshape([-1, self.chunk_size])
- x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True)
- x = torch.view_as_real(x)
- x = x.permute([0, 3, 1, 2])
- x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 4, self.n_bins, self.dim_t])
- return x[:, :, :self.dim_f]
-
- def istft(self, x, freq_pad=None):
- freq_pad = self.freq_pad.repeat([x.shape[0], 1, 1, 1]) if freq_pad is None else freq_pad
- x = torch.cat([x, freq_pad], -2)
- # c = 4*2 if self.target_name=='*' else 2
- x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 2, self.n_bins, self.dim_t])
- x = x.permute([0, 2, 3, 1])
- x = x.contiguous()
- x = torch.view_as_complex(x)
- x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True)
- return x.reshape([-1, 2, self.chunk_size])
-
-
-class MDX:
- DEFAULT_SR = 44100
- # Unit: seconds
- DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR
- DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR
-
- DEFAULT_PROCESSOR = 0
-
- def __init__(self, model_path: str, params: MDXModel, processor=DEFAULT_PROCESSOR):
-
- # Set the device and the provider (CPU or CUDA)
- #self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu')
- self.device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
- #self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider']
- self.provider = ['CPUExecutionProvider']
-
- self.model = params
-
- # Load the ONNX model using ONNX Runtime
- self.ort = ort.InferenceSession(model_path, providers=self.provider)
- # Preload the model for faster performance
- self.ort.run(None, {'input': torch.rand(1, 4, params.dim_f, params.dim_t).numpy()})
- self.process = lambda spec: self.ort.run(None, {'input': spec.cpu().numpy()})[0]
-
- self.prog = None
-
- @staticmethod
- def get_hash(model_path):
- try:
- with open(model_path, 'rb') as f:
- f.seek(- 10000 * 1024, 2)
- model_hash = hashlib.md5(f.read()).hexdigest()
- except:
- model_hash = hashlib.md5(open(model_path, 'rb').read()).hexdigest()
-
- return model_hash
-
- @staticmethod
- def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE):
- """
- Segment or join segmented wave array
-
- Args:
- wave: (np.array) Wave array to be segmented or joined
- combine: (bool) If True, combines segmented wave array. If False, segments wave array.
- chunk_size: (int) Size of each segment (in samples)
- margin_size: (int) Size of margin between segments (in samples)
-
- Returns:
- numpy array: Segmented or joined wave array
- """
-
- if combine:
- processed_wave = None # Initializing as None instead of [] for later numpy array concatenation
- for segment_count, segment in enumerate(wave):
- start = 0 if segment_count == 0 else margin_size
- end = None if segment_count == len(wave) - 1 else -margin_size
- if margin_size == 0:
- end = None
- if processed_wave is None: # Create array for first segment
- processed_wave = segment[:, start:end]
- else: # Concatenate to existing array for subsequent segments
- processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1)
-
- else:
- processed_wave = []
- sample_count = wave.shape[-1]
-
- if chunk_size <= 0 or chunk_size > sample_count:
- chunk_size = sample_count
-
- if margin_size > chunk_size:
- margin_size = chunk_size
-
- for segment_count, skip in enumerate(range(0, sample_count, chunk_size)):
-
- margin = 0 if segment_count == 0 else margin_size
- end = min(skip + chunk_size + margin_size, sample_count)
- start = skip - margin
-
- cut = wave[:, start:end].copy()
- processed_wave.append(cut)
-
- if end == sample_count:
- break
-
- return processed_wave
-
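-    # Illustrative sketch (added comment, not in the original file): splitting a
-    # stereo (2, n_samples) numpy wave at 44.1 kHz into 1-second chunks with a
-    # 0.1 s margin, then joining the chunks back into the original signal.
-    #
-    #   chunks = MDX.segment(wave, combine=False, chunk_size=44100, margin_size=4410)
-    #   restored = MDX.segment(chunks, combine=True, margin_size=4410)
-    #   assert restored.shape == wave.shape
-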
- def pad_wave(self, wave):
- """
- Pad the wave array to match the required chunk size
-
- Args:
- wave: (np.array) Wave array to be padded
-
- Returns:
- tuple: (padded_wave, pad, trim)
- - padded_wave: Padded wave array
- - pad: Number of samples that were padded
- - trim: Number of samples that were trimmed
- """
- n_sample = wave.shape[1]
- trim = self.model.n_fft // 2
- gen_size = self.model.chunk_size - 2 * trim
- pad = gen_size - n_sample % gen_size
-
- # Padded wave
- wave_p = np.concatenate((np.zeros((2, trim)), wave, np.zeros((2, pad)), np.zeros((2, trim))), 1)
-
- mix_waves = []
- for i in range(0, n_sample + pad, gen_size):
- waves = np.array(wave_p[:, i:i + self.model.chunk_size])
- mix_waves.append(waves)
-
- print(self.device)
-
- mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device)
-
- return mix_waves, pad, trim
-
- def _process_wave(self, mix_waves, trim, pad, q: queue.Queue, _id: int):
- """
- Process each wave segment in a multi-threaded environment
-
- Args:
- mix_waves: (torch.Tensor) Wave segments to be processed
- trim: (int) Number of samples trimmed during padding
- pad: (int) Number of samples padded during padding
- q: (queue.Queue) Queue to hold the processed wave segments
- _id: (int) Identifier of the processed wave segment
-
- Returns:
- numpy array: Processed wave segment
- """
- mix_waves = mix_waves.split(1)
- with torch.no_grad():
- pw = []
- for mix_wave in mix_waves:
- self.prog.update()
- spec = self.model.stft(mix_wave)
- processed_spec = torch.tensor(self.process(spec))
- processed_wav = self.model.istft(processed_spec.to(self.device))
- processed_wav = processed_wav[:, :, trim:-trim].transpose(0, 1).reshape(2, -1).cpu().numpy()
- pw.append(processed_wav)
- processed_signal = np.concatenate(pw, axis=-1)[:, :-pad]
- q.put({_id: processed_signal})
- return processed_signal
-
- def process_wave(self, wave: np.array, mt_threads=1):
- """
- Process the wave array in a multi-threaded environment
-
- Args:
- wave: (np.array) Wave array to be processed
- mt_threads: (int) Number of threads to be used for processing
-
- Returns:
- numpy array: Processed wave array
- """
- self.prog = tqdm(total=0)
- chunk = wave.shape[-1] // mt_threads
- waves = self.segment(wave, False, chunk)
-
- # Create a queue to hold the processed wave segments
- q = queue.Queue()
- threads = []
- for c, batch in enumerate(waves):
- mix_waves, pad, trim = self.pad_wave(batch)
- self.prog.total = len(mix_waves) * mt_threads
- thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c))
- thread.start()
- threads.append(thread)
- for thread in threads:
- thread.join()
- self.prog.close()
-
- processed_batches = []
- while not q.empty():
- processed_batches.append(q.get())
- processed_batches = [list(wave.values())[0] for wave in
- sorted(processed_batches, key=lambda d: list(d.keys())[0])]
- assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!'
- return self.segment(processed_batches, True, chunk)
-
-
-def run_mdx(model_params, output_dir, model_path, filename, exclude_main=False, exclude_inversion=False, suffix=None, invert_suffix=None, denoise=False, keep_orig=True, m_threads=2):
- device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
-
- #device_properties = torch.cuda.get_device_properties(device)
- print("Device", device)
- vram_gb = 12 #device_properties.total_memory / 1024**3
- m_threads = 1 if vram_gb < 8 else 2
-
- model_hash = MDX.get_hash(model_path)
- mp = model_params.get(model_hash)
- model = MDXModel(
- device,
- dim_f=mp["mdx_dim_f_set"],
- dim_t=2 ** mp["mdx_dim_t_set"],
- n_fft=mp["mdx_n_fft_scale_set"],
- stem_name=mp["primary_stem"],
- compensation=mp["compensate"]
- )
-
- mdx_sess = MDX(model_path, model)
- wave, sr = librosa.load(filename, mono=False, sr=44100)
- # normalizing input wave gives better output
- peak = max(np.max(wave), abs(np.min(wave)))
- wave /= peak
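-    # Added comment (not in the original file): when `denoise` is set, the model is
-    # run on both the wave and its phase-inverted copy, and the sign-corrected
-    # outputs are averaged, which tends to cancel processing noise that does not
-    # follow the input's sign.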
- if denoise:
- wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads))
- wave_processed *= 0.5
- else:
- wave_processed = mdx_sess.process_wave(wave, m_threads)
- # return to previous peak
- wave_processed *= peak
- stem_name = model.stem_name if suffix is None else suffix
-
- main_filepath = None
- if not exclude_main:
- main_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav")
- sf.write(main_filepath, wave_processed.T, sr)
-
- invert_filepath = None
- if not exclude_inversion:
- diff_stem_name = stem_naming.get(stem_name) if invert_suffix is None else invert_suffix
- stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name
- invert_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav")
- sf.write(invert_filepath, (-wave_processed.T * model.compensation) + wave.T, sr)
-
- if not keep_orig:
- os.remove(filename)
-
- del mdx_sess, wave_processed, wave
- gc.collect()
- return main_filepath, invert_filepath
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/diffq/uniform.py b/spaces/Kangarroar/ApplioRVC-Inference/diffq/uniform.py
deleted file mode 100644
index f61e9129c04caaa33c66f726bf2433d51689cfa5..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/diffq/uniform.py
+++ /dev/null
@@ -1,121 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Classic uniform quantization over n bits.
-"""
-from typing import Tuple
-import torch
-
-from .base import BaseQuantizer
-from .utils import simple_repr
-
-
-def uniform_quantize(p: torch.Tensor, bits: torch.Tensor = torch.tensor(8.)):
- """
- Quantize the given weights over `bits` bits.
-
- Returns:
- - quantized levels
- - (min, max) range.
-
- """
- assert (bits >= 1).all() and (bits <= 15).all()
- num_levels = (2 ** bits.float()).long()
- mn = p.min().item()
- mx = p.max().item()
- p = (p - mn) / (mx - mn) # put p in [0, 1]
- unit = 1 / (num_levels - 1) # quantization unit
- levels = (p / unit).round()
- if (bits <= 8).all():
- levels = levels.byte()
- else:
- levels = levels.short()
- return levels, (mn, mx)
-
-
-def uniform_unquantize(levels: torch.Tensor, scales: Tuple[float, float],
- bits: torch.Tensor = torch.tensor(8.)):
- """
- Unquantize the weights from the levels and scale. Return a float32 tensor.
- """
- mn, mx = scales
- num_levels = 2 ** bits.float()
- unit = 1 / (num_levels - 1)
- levels = levels.float()
- p = levels * unit # in [0, 1]
- return p * (mx - mn) + mn
-
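-# Illustrative round-trip sketch (added comment, not in the original file): with
-# `bits = 8` the reconstruction error of a weight is at most half a quantization
-# unit, i.e. (mx - mn) / (2 * (2**8 - 1)).
-#
-#   w = torch.randn(1024)
-#   levels, scales = uniform_quantize(w, torch.tensor(8.))
-#   w_hat = uniform_unquantize(levels, scales, torch.tensor(8.))
-#   assert (w - w_hat).abs().max() <= (scales[1] - scales[0]) / (2 * (2 ** 8 - 1)) + 1e-6
-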
-
-class UniformQuantizer(BaseQuantizer):
- def __init__(self, model: torch.nn.Module, bits: float = 8., min_size: float = 0.01,
- float16: bool = False, qat: bool = False, exclude=[], detect_bound=True):
- """
- Args:
- model (torch.nn.Module): model to quantize
- bits (float): number of bits to quantize over.
- min_size (float): minimum size in MB of a parameter to be quantized.
- float16 (bool): if a layer is smaller than min_size, should we still do float16?
- qat (bool): perform quantized aware training.
- exclude (list[str]): list of patterns used to match parameters to exclude.
- For instance `['bias']` to exclude all bias terms.
- detect_bound (bool): if True, will detect bound parameters and reuse
- the same quantized tensor for both.
- """
- self.bits = float(bits)
- self.qat = qat
-
- super().__init__(model, min_size, float16, exclude, detect_bound)
-
- def __repr__(self):
- return simple_repr(self, )
-
- def _pre_forward_train(self):
- if self.qat:
- for qparam in self._qparams:
- if qparam.other is not None:
- new_param = qparam.other.module._parameters[qparam.other.name]
- else:
- quantized = self._quantize_param(qparam)
- qvalue = self._unquantize_param(qparam, quantized)
- new_param = qparam.param + (qvalue - qparam.param).detach()
- qparam.module._parameters[qparam.name] = new_param
- return True
- return False
-
- def _post_forward_train(self):
- if self.qat:
- for qparam in self._qparams:
- qparam.module._parameters[qparam.name] = qparam.param
- return True
- return False
-
- def _quantize_param(self, qparam):
- levels, scales = uniform_quantize(qparam.param.data, torch.tensor(self.bits))
- return (levels, scales)
-
- def _unquantize_param(self, qparam, quantized):
- levels, scales = quantized
- return uniform_unquantize(levels, scales, torch.tensor(self.bits))
-
- def model_size(self):
- """
- Non differentiable model size in MB.
- """
- total = super().model_size()
- subtotal = 0
- for qparam in self._qparams:
- if qparam.other is None: # if parameter is bound, count only one copy.
- subtotal += self.bits * qparam.param.numel() + 64 # 2 float for the overall scales
- subtotal /= 2**20 * 8 # bits to MegaBytes
- return total + subtotal
-
- def true_model_size(self):
- """
- Return the true quantized model size, in MB, without extra
- compression.
- """
- return self.model_size().item()
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/mdx.py b/spaces/Kangarroar/ApplioRVC-Inference/mdx.py
deleted file mode 100644
index 4cc7c08b37bc371294f2f82b3382424a5455b7c2..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/mdx.py
+++ /dev/null
@@ -1,228 +0,0 @@
-import torch
-import onnxruntime as ort
-from tqdm import tqdm
-import warnings
-import numpy as np
-import hashlib
-import queue
-import threading
-
-warnings.filterwarnings("ignore")
-
-class MDX_Model:
- def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000):
- self.dim_f = dim_f
- self.dim_t = dim_t
- self.dim_c = 4
- self.n_fft = n_fft
- self.hop = hop
- self.stem_name = stem_name
- self.compensation = compensation
-
- self.n_bins = self.n_fft//2+1
- self.chunk_size = hop * (self.dim_t-1)
- self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device)
-
- out_c = self.dim_c
-
- self.freq_pad = torch.zeros([1, out_c, self.n_bins-self.dim_f, self.dim_t]).to(device)
-
- def stft(self, x):
- x = x.reshape([-1, self.chunk_size])
- x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True)
- x = torch.view_as_real(x)
- x = x.permute([0,3,1,2])
- x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,4,self.n_bins,self.dim_t])
- return x[:,:,:self.dim_f]
-
- def istft(self, x, freq_pad=None):
- freq_pad = self.freq_pad.repeat([x.shape[0],1,1,1]) if freq_pad is None else freq_pad
- x = torch.cat([x, freq_pad], -2)
- # c = 4*2 if self.target_name=='*' else 2
- x = x.reshape([-1,2,2,self.n_bins,self.dim_t]).reshape([-1,2,self.n_bins,self.dim_t])
- x = x.permute([0,2,3,1])
- x = x.contiguous()
- x = torch.view_as_complex(x)
- x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True)
- return x.reshape([-1,2,self.chunk_size])
-
-
-class MDX:
-
- DEFAULT_SR = 44100
- # Unit: seconds
- DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR
- DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR
-
- DEFAULT_PROCESSOR = 0
-
- def __init__(self, model_path:str, params:MDX_Model, processor=DEFAULT_PROCESSOR):
-
- # Set the device and the provider (CPU or CUDA)
- self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu')
- self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider']
-
- self.model = params
-
- # Load the ONNX model using ONNX Runtime
- self.ort = ort.InferenceSession(model_path, providers=self.provider)
- # Preload the model for faster performance
- self.ort.run(None, {'input':torch.rand(1, 4, params.dim_f, params.dim_t).numpy()})
- self.process = lambda spec:self.ort.run(None, {'input': spec.cpu().numpy()})[0]
-
- self.prog = None
-
- @staticmethod
- def get_hash(model_path):
- try:
- with open(model_path, 'rb') as f:
- f.seek(- 10000 * 1024, 2)
- model_hash = hashlib.md5(f.read()).hexdigest()
- except:
- model_hash = hashlib.md5(open(model_path,'rb').read()).hexdigest()
-
- return model_hash
-
- @staticmethod
- def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE):
- """
- Segment or join segmented wave array
-
- Args:
- wave: (np.array) Wave array to be segmented or joined
- combine: (bool) If True, combines segmented wave array. If False, segments wave array.
- chunk_size: (int) Size of each segment (in samples)
- margin_size: (int) Size of margin between segments (in samples)
-
- Returns:
- numpy array: Segmented or joined wave array
- """
-
- if combine:
- processed_wave = None # Initializing as None instead of [] for later numpy array concatenation
- for segment_count, segment in enumerate(wave):
- start = 0 if segment_count == 0 else margin_size
- end = None if segment_count == len(wave)-1 else -margin_size
- if margin_size == 0:
- end = None
- if processed_wave is None: # Create array for first segment
- processed_wave = segment[:, start:end]
- else: # Concatenate to existing array for subsequent segments
- processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1)
-
- else:
- processed_wave = []
- sample_count = wave.shape[-1]
-
- if chunk_size <= 0 or chunk_size > sample_count:
- chunk_size = sample_count
-
- if margin_size > chunk_size:
- margin_size = chunk_size
-
- for segment_count, skip in enumerate(range(0, sample_count, chunk_size)):
-
- margin = 0 if segment_count == 0 else margin_size
- end = min(skip+chunk_size+margin_size, sample_count)
- start = skip-margin
-
- cut = wave[:,start:end].copy()
- processed_wave.append(cut)
-
- if end == sample_count:
- break
-
- return processed_wave
-
- def pad_wave(self, wave):
- """
- Pad the wave array to match the required chunk size
-
- Args:
- wave: (np.array) Wave array to be padded
-
- Returns:
- tuple: (padded_wave, pad, trim)
- - padded_wave: Padded wave array
- - pad: Number of samples that were padded
- - trim: Number of samples that were trimmed
- """
- n_sample = wave.shape[1]
- trim = self.model.n_fft//2
- gen_size = self.model.chunk_size-2*trim
- pad = gen_size - n_sample%gen_size
-
- # Padded wave
- wave_p = np.concatenate((np.zeros((2,trim)), wave, np.zeros((2,pad)), np.zeros((2,trim))), 1)
-
- mix_waves = []
- for i in range(0, n_sample+pad, gen_size):
- waves = np.array(wave_p[:, i:i+self.model.chunk_size])
- mix_waves.append(waves)
-
- mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device)
-
- return mix_waves, pad, trim
-
- def _process_wave(self, mix_waves, trim, pad, q:queue.Queue, _id:int):
- """
- Process each wave segment in a multi-threaded environment
-
- Args:
- mix_waves: (torch.Tensor) Wave segments to be processed
- trim: (int) Number of samples trimmed during padding
- pad: (int) Number of samples padded during padding
- q: (queue.Queue) Queue to hold the processed wave segments
- _id: (int) Identifier of the processed wave segment
-
- Returns:
- numpy array: Processed wave segment
- """
- mix_waves = mix_waves.split(1)
- with torch.no_grad():
- pw = []
- for mix_wave in mix_waves:
- self.prog.update()
- spec = self.model.stft(mix_wave)
- processed_spec = torch.tensor(self.process(spec))
- processed_wav = self.model.istft(processed_spec.to(self.device))
- processed_wav = processed_wav[:,:,trim:-trim].transpose(0,1).reshape(2, -1).cpu().numpy()
- pw.append(processed_wav)
- processed_signal = np.concatenate(pw, axis=-1)[:, :-pad]
- q.put({_id:processed_signal})
- return processed_signal
-
- def process_wave(self, wave:np.array, mt_threads=1):
- """
- Process the wave array in a multi-threaded environment
-
- Args:
- wave: (np.array) Wave array to be processed
- mt_threads: (int) Number of threads to be used for processing
-
- Returns:
- numpy array: Processed wave array
- """
- self.prog = tqdm(total=0)
- chunk = wave.shape[-1]//mt_threads
- waves = self.segment(wave, False, chunk)
-
- # Create a queue to hold the processed wave segments
- q = queue.Queue()
- threads = []
- for c, batch in enumerate(waves):
- mix_waves, pad, trim = self.pad_wave(batch)
- self.prog.total = len(mix_waves)*mt_threads
- thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c))
- thread.start()
- threads.append(thread)
- for thread in threads:
- thread.join()
- self.prog.close()
-
- processed_batches = []
- while not q.empty():
- processed_batches.append(q.get())
- processed_batches = [list(wave.values())[0] for wave in sorted(processed_batches, key=lambda d: list(d.keys())[0])]
- assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!'
- return self.segment(processed_batches, True, chunk)
\ No newline at end of file
diff --git a/spaces/KaygNas/cut-it/src/Robot.ts b/spaces/KaygNas/cut-it/src/Robot.ts
deleted file mode 100644
index 059298e775efd832c2375cfc74c4328c66db040e..0000000000000000000000000000000000000000
--- a/spaces/KaygNas/cut-it/src/Robot.ts
+++ /dev/null
@@ -1,131 +0,0 @@
-import '@babylonjs/loaders/glTF'
-import { Animation, CreateBox, CubicEase, SceneLoader, Vector3 } from '@babylonjs/core'
-import type { Animatable, ISceneLoaderAsyncResult, Mesh, Scene } from '@babylonjs/core'
-import { MODEL_ASSETS_ROOT_URL } from './constants'
-import { assert } from './utils'
-import { LaserCutter } from './LaserCutter'
-
-enum Pose {
- TakeOff,
- Land,
- SpinLeft,
- SpinRight,
- Hover,
- Forward,
- Backward,
-}
-export class Robot {
- static Pose = Pose
- assets?: ISceneLoaderAsyncResult
- scene: Scene
- mesh: Mesh
- pose: Pose = Pose.Land
- laserCutter: LaserCutter
-
-  private _movingAnimatable = new Set<Animatable>()
-
- constructor(scene: Scene) {
- this.mesh = CreateBox('Root', { size: 2 }, scene)
- this.mesh.isVisible = false
- this.scene = scene
- this.laserCutter = new LaserCutter(scene)
- this.mesh.addChild(this.laserCutter.pivot)
- this.loadAssets(scene)
- }
-
- private async loadAssets(scene: Scene) {
- const result = await SceneLoader.ImportMeshAsync(null, `${MODEL_ASSETS_ROOT_URL}/buster_drone/`, 'buster_drone.gltf', scene)
- const root = result.meshes[0]
- const bbox = root.getHierarchyBoundingVectors()
-
- this.assets = result
- this.assets.animationGroups.forEach(anim => anim.pause())
- this.mesh.addChild(root)
- this.laserCutter.pivot.translate(Vector3.Up(), 1)
- root.translate(Vector3.Up(), -bbox.min.y)
- }
-
- async takeOff() {
- await this._playAnimation(Pose.TakeOff, false)
- this._playAnimation(Pose.Hover, true)
- if (this._movingAnimatable.size === 0)
- await this.moveTo(new Vector3(0.0, 12.0, 0.0))
- }
-
- async land() {
- await this._playAnimation(Pose.Land, false)
- }
-
- async moveStop() {
- await this._playAnimation(Pose.Backward, false)
- await this._playAnimation(Pose.Forward, false)
- this._playAnimation(Pose.Hover, true)
- }
-
- async moveTo(destination: Vector3) {
- const tn = this.mesh
-    assert(!!tn, 'Root mesh must exist.')
-
- this._movingAnimatable.forEach((anim) => {
- anim.stop()
- this._movingAnimatable.delete(anim)
- })
-
- const SPEED = 6.0
- const position = tn.position.clone()
- const frameRate = 1 / (destination.subtract(position).length() / SPEED)
- const anim = new Animation('Move', 'position', frameRate, Animation.ANIMATIONTYPE_VECTOR3, Animation.ANIMATIONLOOPMODE_CONSTANT)
- anim.setKeys([
- { frame: 0, value: position },
- { frame: 1, value: destination },
- ])
- anim.setEasingFunction(new CubicEase())
- tn.animations.push(anim)
- const animatable = this.scene.beginAnimation(tn, 0, 1)
- this._movingAnimatable.add(animatable)
- await animatable.waitAsync()
- tn.animations.splice(tn.animations.findIndex(a => a === anim), 1)
- this._movingAnimatable.delete(animatable)
-
- await this.moveStop()
- }
-
- private async _playAnimation(pose: Pose, loop: boolean, percentage: number = 1) {
- this.pose = pose
-    const anims: Record<Pose, [number, number, number?]> = {
- [Pose.TakeOff]: [0, 200, 2],
- [Pose.Land]: [200, 0, 2],
- [Pose.Forward]: [230, 201, 1.5],
- [Pose.Backward]: [201, 230, 1.5],
- [Pose.SpinLeft]: [231, 293],
- [Pose.SpinRight]: [293, 231],
- [Pose.Hover]: [400, 600],
- }
- if (anims[pose]) {
- let [startFrame, endFrame, speedRatio = 1] = anims[pose]
- if (startFrame > endFrame)
- startFrame = (startFrame - endFrame) * percentage + endFrame
- else
- endFrame = startFrame + (endFrame - startFrame) * percentage
-
- await this._playFrames(startFrame, endFrame, loop, speedRatio)
- }
- }
-
- private _playFrames(from: number, to: number, loop: boolean, speedRatio: number) {
-    // The frame count inspected in Blender is 600, but here it is 1500, so scale to align with Blender.
- const SCALE = 1500 / 600
- const { scene } = this
- const anims = this.assets?.animationGroups.flatMap((g) => {
- return g.targetedAnimations.flatMap((target) => {
- target.animation.enableBlending = true
- return scene.beginAnimation(target.target, from * SCALE, to * SCALE, loop, speedRatio)
- })
- })
- return Promise.any((anims ?? []).flatMap(anim => anim.waitAsync()))
- }
-
- static create(scene: Scene) {
- return new Robot(scene)
- }
-}
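-
-// Illustrative usage sketch (added comment, not in the original file). Assumes an
-// existing Babylon.js `scene`; the robot takes off, flies to a point, then lands.
-//
-//   const robot = Robot.create(scene)
-//   await robot.takeOff()
-//   await robot.moveTo(new Vector3(4, 12, 0))
-//   await robot.land()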
diff --git a/spaces/Kevin676/Demucs_v4/README.md b/spaces/Kevin676/Demucs_v4/README.md
deleted file mode 100644
index b4036935c62ef668c5dcc6fa326f91d89702a39c..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/Demucs_v4/README.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Demucs Music Source Separation (v4)
-emoji: ⚡
-colorFrom: red
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: Thafx/Demucs_v4_2s_HT
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
\ No newline at end of file
diff --git a/spaces/Korakoe/convert-sd-ckpt-cpu/README.md b/spaces/Korakoe/convert-sd-ckpt-cpu/README.md
deleted file mode 100644
index d1573bfa3ad52f643dfe1810e35fdf8c111df9af..0000000000000000000000000000000000000000
--- a/spaces/Korakoe/convert-sd-ckpt-cpu/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Convert to Diffusers
-emoji: 🤖
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: diffusers/convert-sd-ckpt
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/reppoints_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/reppoints_head.py
deleted file mode 100644
index 22f3e3401a4abd9cc35b41d24efe23e5655a905e..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/reppoints_head.py
+++ /dev/null
@@ -1,885 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Sequence, Tuple
-
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.ops import DeformConv2d
-from mmengine.config import ConfigDict
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.registry import MODELS, TASK_UTILS
-from mmdet.utils import ConfigType, InstanceList, MultiConfig, OptInstanceList
-from ..task_modules.prior_generators import MlvlPointGenerator
-from ..task_modules.samplers import PseudoSampler
-from ..utils import (filter_scores_and_topk, images_to_levels, multi_apply,
- unmap)
-from .anchor_free_head import AnchorFreeHead
-
-
-@MODELS.register_module()
-class RepPointsHead(AnchorFreeHead):
- """RepPoint head.
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- point_feat_channels (int): Number of channels of points features.
- num_points (int): Number of points.
- gradient_mul (float): The multiplier to gradients from
- points refinement and recognition.
- point_strides (Sequence[int]): points strides.
- point_base_scale (int): bbox scale for assigning labels.
- loss_cls (:obj:`ConfigDict` or dict): Config of classification loss.
- loss_bbox_init (:obj:`ConfigDict` or dict): Config of initial points
- loss.
- loss_bbox_refine (:obj:`ConfigDict` or dict): Config of points loss in
- refinement.
-        use_grid_points (bool): If we use the bounding box representation, the
-            reppoints are represented as grid points on the bounding box.
- center_init (bool): Whether to use center point assignment.
- transform_method (str): The methods to transform RepPoints to bbox.
- init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \
- dict]): Initialization config dict.
- """ # noqa: W605
-
- def __init__(self,
- num_classes: int,
- in_channels: int,
- point_feat_channels: int = 256,
- num_points: int = 9,
- gradient_mul: float = 0.1,
- point_strides: Sequence[int] = [8, 16, 32, 64, 128],
- point_base_scale: int = 4,
- loss_cls: ConfigType = dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox_init: ConfigType = dict(
- type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5),
- loss_bbox_refine: ConfigType = dict(
- type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0),
- use_grid_points: bool = False,
- center_init: bool = True,
- transform_method: str = 'moment',
- moment_mul: float = 0.01,
- init_cfg: MultiConfig = dict(
- type='Normal',
- layer='Conv2d',
- std=0.01,
- override=dict(
- type='Normal',
- name='reppoints_cls_out',
- std=0.01,
- bias_prob=0.01)),
- **kwargs) -> None:
- self.num_points = num_points
- self.point_feat_channels = point_feat_channels
- self.use_grid_points = use_grid_points
- self.center_init = center_init
-
- # we use deform conv to extract points features
- self.dcn_kernel = int(np.sqrt(num_points))
- self.dcn_pad = int((self.dcn_kernel - 1) / 2)
- assert self.dcn_kernel * self.dcn_kernel == num_points, \
- 'The points number should be a square number.'
- assert self.dcn_kernel % 2 == 1, \
- 'The points number should be an odd square number.'
- dcn_base = np.arange(-self.dcn_pad,
- self.dcn_pad + 1).astype(np.float64)
- dcn_base_y = np.repeat(dcn_base, self.dcn_kernel)
- dcn_base_x = np.tile(dcn_base, self.dcn_kernel)
- dcn_base_offset = np.stack([dcn_base_y, dcn_base_x], axis=1).reshape(
- (-1))
- self.dcn_base_offset = torch.tensor(dcn_base_offset).view(1, -1, 1, 1)
-
- super().__init__(
- num_classes=num_classes,
- in_channels=in_channels,
- loss_cls=loss_cls,
- init_cfg=init_cfg,
- **kwargs)
-
- self.gradient_mul = gradient_mul
- self.point_base_scale = point_base_scale
- self.point_strides = point_strides
- self.prior_generator = MlvlPointGenerator(
- self.point_strides, offset=0.)
-
- if self.train_cfg:
- self.init_assigner = TASK_UTILS.build(
- self.train_cfg['init']['assigner'])
- self.refine_assigner = TASK_UTILS.build(
- self.train_cfg['refine']['assigner'])
-
- if self.train_cfg.get('sampler', None) is not None:
- self.sampler = TASK_UTILS.build(
- self.train_cfg['sampler'], default_args=dict(context=self))
- else:
- self.sampler = PseudoSampler(context=self)
-
- self.transform_method = transform_method
- if self.transform_method == 'moment':
- self.moment_transfer = nn.Parameter(
- data=torch.zeros(2), requires_grad=True)
- self.moment_mul = moment_mul
-
- self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
- if self.use_sigmoid_cls:
- self.cls_out_channels = self.num_classes
- else:
- self.cls_out_channels = self.num_classes + 1
- self.loss_bbox_init = MODELS.build(loss_bbox_init)
- self.loss_bbox_refine = MODELS.build(loss_bbox_refine)
-
- def _init_layers(self) -> None:
- """Initialize layers of the head."""
- self.relu = nn.ReLU(inplace=True)
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg))
- pts_out_dim = 4 if self.use_grid_points else 2 * self.num_points
- self.reppoints_cls_conv = DeformConv2d(self.feat_channels,
- self.point_feat_channels,
- self.dcn_kernel, 1,
- self.dcn_pad)
- self.reppoints_cls_out = nn.Conv2d(self.point_feat_channels,
- self.cls_out_channels, 1, 1, 0)
- self.reppoints_pts_init_conv = nn.Conv2d(self.feat_channels,
- self.point_feat_channels, 3,
- 1, 1)
- self.reppoints_pts_init_out = nn.Conv2d(self.point_feat_channels,
- pts_out_dim, 1, 1, 0)
- self.reppoints_pts_refine_conv = DeformConv2d(self.feat_channels,
- self.point_feat_channels,
- self.dcn_kernel, 1,
- self.dcn_pad)
- self.reppoints_pts_refine_out = nn.Conv2d(self.point_feat_channels,
- pts_out_dim, 1, 1, 0)
-
- def points2bbox(self, pts: Tensor, y_first: bool = True) -> Tensor:
-        """Convert a set of points into a bounding box.
-
-        Args:
-            pts (Tensor): The input point sets (fields); each point set
-                is represented by 2n scalars.
-            y_first (bool): If y_first=True, the point set is
-                represented as [y1, x1, y2, x2 ... yn, xn], otherwise
-                the point set is represented as
-                [x1, y1, x2, y2 ... xn, yn]. Defaults to True.
-
-        Returns:
-            Tensor: Each point set converted to a bbox [x1, y1, x2, y2].
- """
- pts_reshape = pts.view(pts.shape[0], -1, 2, *pts.shape[2:])
- pts_y = pts_reshape[:, :, 0, ...] if y_first else pts_reshape[:, :, 1,
- ...]
- pts_x = pts_reshape[:, :, 1, ...] if y_first else pts_reshape[:, :, 0,
- ...]
- if self.transform_method == 'minmax':
- bbox_left = pts_x.min(dim=1, keepdim=True)[0]
- bbox_right = pts_x.max(dim=1, keepdim=True)[0]
- bbox_up = pts_y.min(dim=1, keepdim=True)[0]
- bbox_bottom = pts_y.max(dim=1, keepdim=True)[0]
- bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom],
- dim=1)
- elif self.transform_method == 'partial_minmax':
- pts_y = pts_y[:, :4, ...]
- pts_x = pts_x[:, :4, ...]
- bbox_left = pts_x.min(dim=1, keepdim=True)[0]
- bbox_right = pts_x.max(dim=1, keepdim=True)[0]
- bbox_up = pts_y.min(dim=1, keepdim=True)[0]
- bbox_bottom = pts_y.max(dim=1, keepdim=True)[0]
- bbox = torch.cat([bbox_left, bbox_up, bbox_right, bbox_bottom],
- dim=1)
- elif self.transform_method == 'moment':
- pts_y_mean = pts_y.mean(dim=1, keepdim=True)
- pts_x_mean = pts_x.mean(dim=1, keepdim=True)
- pts_y_std = torch.std(pts_y - pts_y_mean, dim=1, keepdim=True)
- pts_x_std = torch.std(pts_x - pts_x_mean, dim=1, keepdim=True)
- moment_transfer = (self.moment_transfer * self.moment_mul) + (
- self.moment_transfer.detach() * (1 - self.moment_mul))
- moment_width_transfer = moment_transfer[0]
- moment_height_transfer = moment_transfer[1]
- half_width = pts_x_std * torch.exp(moment_width_transfer)
- half_height = pts_y_std * torch.exp(moment_height_transfer)
- bbox = torch.cat([
- pts_x_mean - half_width, pts_y_mean - half_height,
- pts_x_mean + half_width, pts_y_mean + half_height
- ],
- dim=1)
- else:
- raise NotImplementedError
- return bbox
-
- def gen_grid_from_reg(self, reg: Tensor,
- previous_boxes: Tensor) -> Tuple[Tensor]:
-        """Based on the previous bboxes and regression values, compute the
-        regressed bboxes and generate the grids on those bboxes.
-
- Args:
- reg (Tensor): the regression value to previous bboxes.
- previous_boxes (Tensor): previous bboxes.
-
- Returns:
-            Tuple[Tensor]: The generated grids on the regressed bboxes.
- """
- b, _, h, w = reg.shape
- bxy = (previous_boxes[:, :2, ...] + previous_boxes[:, 2:, ...]) / 2.
- bwh = (previous_boxes[:, 2:, ...] -
- previous_boxes[:, :2, ...]).clamp(min=1e-6)
- grid_topleft = bxy + bwh * reg[:, :2, ...] - 0.5 * bwh * torch.exp(
- reg[:, 2:, ...])
- grid_wh = bwh * torch.exp(reg[:, 2:, ...])
- grid_left = grid_topleft[:, [0], ...]
- grid_top = grid_topleft[:, [1], ...]
- grid_width = grid_wh[:, [0], ...]
- grid_height = grid_wh[:, [1], ...]
-        interval = torch.linspace(0., 1., self.dcn_kernel).view(
-            1, self.dcn_kernel, 1, 1).type_as(reg)
-        grid_x = grid_left + grid_width * interval
-        grid_x = grid_x.unsqueeze(1).repeat(1, self.dcn_kernel, 1, 1, 1)
-        grid_x = grid_x.view(b, -1, h, w)
-        grid_y = grid_top + grid_height * interval
-        grid_y = grid_y.unsqueeze(2).repeat(1, 1, self.dcn_kernel, 1, 1)
-        grid_y = grid_y.view(b, -1, h, w)
- grid_yx = torch.stack([grid_y, grid_x], dim=2)
- grid_yx = grid_yx.view(b, -1, h, w)
- regressed_bbox = torch.cat([
- grid_left, grid_top, grid_left + grid_width, grid_top + grid_height
- ], 1)
- return grid_yx, regressed_bbox
-
- def forward(self, feats: Tuple[Tensor]) -> Tuple[Tensor]:
- return multi_apply(self.forward_single, feats)
-
- def forward_single(self, x: Tensor) -> Tuple[Tensor]:
- """Forward feature map of a single FPN level."""
- dcn_base_offset = self.dcn_base_offset.type_as(x)
-        # If we use center_init, the initial reppoints are from center points.
-        # If we use the bounding box representation, the initial reppoints are
-        # from a regular grid placed on a pre-defined bbox.
- if self.use_grid_points or not self.center_init:
- scale = self.point_base_scale / 2
- points_init = dcn_base_offset / dcn_base_offset.max() * scale
- bbox_init = x.new_tensor([-scale, -scale, scale,
- scale]).view(1, 4, 1, 1)
- else:
- points_init = 0
- cls_feat = x
- pts_feat = x
- for cls_conv in self.cls_convs:
- cls_feat = cls_conv(cls_feat)
- for reg_conv in self.reg_convs:
- pts_feat = reg_conv(pts_feat)
- # initialize reppoints
- pts_out_init = self.reppoints_pts_init_out(
- self.relu(self.reppoints_pts_init_conv(pts_feat)))
- if self.use_grid_points:
- pts_out_init, bbox_out_init = self.gen_grid_from_reg(
- pts_out_init, bbox_init.detach())
- else:
- pts_out_init = pts_out_init + points_init
- # refine and classify reppoints
- pts_out_init_grad_mul = (1 - self.gradient_mul) * pts_out_init.detach(
- ) + self.gradient_mul * pts_out_init
- dcn_offset = pts_out_init_grad_mul - dcn_base_offset
- cls_out = self.reppoints_cls_out(
- self.relu(self.reppoints_cls_conv(cls_feat, dcn_offset)))
- pts_out_refine = self.reppoints_pts_refine_out(
- self.relu(self.reppoints_pts_refine_conv(pts_feat, dcn_offset)))
- if self.use_grid_points:
- pts_out_refine, bbox_out_refine = self.gen_grid_from_reg(
- pts_out_refine, bbox_out_init.detach())
- else:
- pts_out_refine = pts_out_refine + pts_out_init.detach()
-
- if self.training:
- return cls_out, pts_out_init, pts_out_refine
- else:
- return cls_out, self.points2bbox(pts_out_refine)
-
- def get_points(self, featmap_sizes: List[Tuple[int]],
- batch_img_metas: List[dict], device: str) -> tuple:
- """Get points according to feature map sizes.
-
- Args:
- featmap_sizes (list[tuple]): Multi-level feature map sizes.
- batch_img_metas (list[dict]): Image meta info.
-
- Returns:
- tuple: points of each image, valid flags of each image
- """
- num_imgs = len(batch_img_metas)
-
-        # since feature map sizes of all images are the same, we only compute
-        # the points' centers once
- multi_level_points = self.prior_generator.grid_priors(
- featmap_sizes, device=device, with_stride=True)
- points_list = [[point.clone() for point in multi_level_points]
- for _ in range(num_imgs)]
-
- # for each image, we compute valid flags of multi level grids
- valid_flag_list = []
- for img_id, img_meta in enumerate(batch_img_metas):
- multi_level_flags = self.prior_generator.valid_flags(
- featmap_sizes, img_meta['pad_shape'], device=device)
- valid_flag_list.append(multi_level_flags)
-
- return points_list, valid_flag_list
-
- def centers_to_bboxes(self, point_list: List[Tensor]) -> List[Tensor]:
- """Get bboxes according to center points.
-
- Only used in :class:`MaxIoUAssigner`.
- """
- bbox_list = []
- for i_img, point in enumerate(point_list):
- bbox = []
- for i_lvl in range(len(self.point_strides)):
- scale = self.point_base_scale * self.point_strides[i_lvl] * 0.5
- bbox_shift = torch.Tensor([-scale, -scale, scale,
- scale]).view(1, 4).type_as(point[0])
- bbox_center = torch.cat(
- [point[i_lvl][:, :2], point[i_lvl][:, :2]], dim=1)
- bbox.append(bbox_center + bbox_shift)
- bbox_list.append(bbox)
- return bbox_list
-
- def offset_to_pts(self, center_list: List[Tensor],
- pred_list: List[Tensor]) -> List[Tensor]:
- """Change from point offset to point coordinate."""
- pts_list = []
- for i_lvl in range(len(self.point_strides)):
- pts_lvl = []
- for i_img in range(len(center_list)):
- pts_center = center_list[i_img][i_lvl][:, :2].repeat(
- 1, self.num_points)
- pts_shift = pred_list[i_lvl][i_img]
- yx_pts_shift = pts_shift.permute(1, 2, 0).view(
- -1, 2 * self.num_points)
- y_pts_shift = yx_pts_shift[..., 0::2]
- x_pts_shift = yx_pts_shift[..., 1::2]
- xy_pts_shift = torch.stack([x_pts_shift, y_pts_shift], -1)
- xy_pts_shift = xy_pts_shift.view(*yx_pts_shift.shape[:-1], -1)
- pts = xy_pts_shift * self.point_strides[i_lvl] + pts_center
- pts_lvl.append(pts)
- pts_lvl = torch.stack(pts_lvl, 0)
- pts_list.append(pts_lvl)
- return pts_list
-
- def _get_targets_single(self,
- flat_proposals: Tensor,
- valid_flags: Tensor,
- gt_instances: InstanceData,
- gt_instances_ignore: InstanceData,
- stage: str = 'init',
- unmap_outputs: bool = True) -> tuple:
- """Compute corresponding GT box and classification targets for
- proposals.
-
- Args:
-            flat_proposals (Tensor): Multi level points of an image.
-            valid_flags (Tensor): Multi level valid flags of an image.
- gt_instances (InstanceData): It usually includes ``bboxes`` and
- ``labels`` attributes.
- gt_instances_ignore (InstanceData): It includes ``bboxes``
- attribute data that is ignored during training and testing.
- stage (str): 'init' or 'refine'. Generate target for
- init stage or refine stage. Defaults to 'init'.
- unmap_outputs (bool): Whether to map outputs back to
- the original set of anchors. Defaults to True.
-
- Returns:
- tuple:
-
- - labels (Tensor): Labels of each level.
- - label_weights (Tensor): Label weights of each level.
- - bbox_targets (Tensor): BBox targets of each level.
- - bbox_weights (Tensor): BBox weights of each level.
- - pos_inds (Tensor): positive samples indexes.
- - neg_inds (Tensor): negative samples indexes.
- - sampling_result (:obj:`SamplingResult`): Sampling results.
- """
- inside_flags = valid_flags
- if not inside_flags.any():
- raise ValueError(
- 'There is no valid proposal inside the image boundary. Please '
- 'check the image size.')
- # assign gt and sample proposals
- proposals = flat_proposals[inside_flags, :]
- pred_instances = InstanceData(priors=proposals)
-
- if stage == 'init':
- assigner = self.init_assigner
- pos_weight = self.train_cfg['init']['pos_weight']
- else:
- assigner = self.refine_assigner
- pos_weight = self.train_cfg['refine']['pos_weight']
-
- assign_result = assigner.assign(pred_instances, gt_instances,
- gt_instances_ignore)
- sampling_result = self.sampler.sample(assign_result, pred_instances,
- gt_instances)
-
- num_valid_proposals = proposals.shape[0]
- bbox_gt = proposals.new_zeros([num_valid_proposals, 4])
- pos_proposals = torch.zeros_like(proposals)
- proposals_weights = proposals.new_zeros([num_valid_proposals, 4])
- labels = proposals.new_full((num_valid_proposals, ),
- self.num_classes,
- dtype=torch.long)
- label_weights = proposals.new_zeros(
- num_valid_proposals, dtype=torch.float)
-
- pos_inds = sampling_result.pos_inds
- neg_inds = sampling_result.neg_inds
- if len(pos_inds) > 0:
- bbox_gt[pos_inds, :] = sampling_result.pos_gt_bboxes
- pos_proposals[pos_inds, :] = proposals[pos_inds, :]
- proposals_weights[pos_inds, :] = 1.0
-
- labels[pos_inds] = sampling_result.pos_gt_labels
- if pos_weight <= 0:
- label_weights[pos_inds] = 1.0
- else:
- label_weights[pos_inds] = pos_weight
- if len(neg_inds) > 0:
- label_weights[neg_inds] = 1.0
-
- # map up to original set of proposals
- if unmap_outputs:
- num_total_proposals = flat_proposals.size(0)
- labels = unmap(
- labels,
- num_total_proposals,
- inside_flags,
- fill=self.num_classes) # fill bg label
- label_weights = unmap(label_weights, num_total_proposals,
- inside_flags)
- bbox_gt = unmap(bbox_gt, num_total_proposals, inside_flags)
- pos_proposals = unmap(pos_proposals, num_total_proposals,
- inside_flags)
- proposals_weights = unmap(proposals_weights, num_total_proposals,
- inside_flags)
-
- return (labels, label_weights, bbox_gt, pos_proposals,
- proposals_weights, pos_inds, neg_inds, sampling_result)
-
- def get_targets(self,
- proposals_list: List[Tensor],
- valid_flag_list: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None,
- stage: str = 'init',
- unmap_outputs: bool = True,
- return_sampling_results: bool = False) -> tuple:
- """Compute corresponding GT box and classification targets for
- proposals.
-
- Args:
- proposals_list (list[Tensor]): Multi level points/bboxes of each
- image.
- valid_flag_list (list[Tensor]): Multi level valid flags of each
- image.
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
- stage (str): 'init' or 'refine'. Generate target for init stage or
- refine stage.
- unmap_outputs (bool): Whether to map outputs back to the original
- set of anchors.
- return_sampling_results (bool): Whether to return the sampling
- results. Defaults to False.
-
- Returns:
- tuple:
-
- - labels_list (list[Tensor]): Labels of each level.
- - label_weights_list (list[Tensor]): Label weights of each
- level.
- - bbox_gt_list (list[Tensor]): Ground truth bbox of each level.
- - proposals_list (list[Tensor]): Proposals(points/bboxes) of
- each level.
- - proposal_weights_list (list[Tensor]): Proposal weights of
- each level.
- - avg_factor (int): Average factor that is used to average
- the loss. When using sampling method, avg_factor is usually
- the sum of positive and negative priors. When using
- `PseudoSampler`, `avg_factor` is usually equal to the number
- of positive priors.
- """
- assert stage in ['init', 'refine']
- num_imgs = len(batch_img_metas)
- assert len(proposals_list) == len(valid_flag_list) == num_imgs
-
- # points number of multi levels
- num_level_proposals = [points.size(0) for points in proposals_list[0]]
-
- # concat all level points and flags to a single tensor
- for i in range(num_imgs):
- assert len(proposals_list[i]) == len(valid_flag_list[i])
- proposals_list[i] = torch.cat(proposals_list[i])
- valid_flag_list[i] = torch.cat(valid_flag_list[i])
-
- if batch_gt_instances_ignore is None:
- batch_gt_instances_ignore = [None] * num_imgs
-
- (all_labels, all_label_weights, all_bbox_gt, all_proposals,
- all_proposal_weights, pos_inds_list, neg_inds_list,
- sampling_results_list) = multi_apply(
- self._get_targets_single,
- proposals_list,
- valid_flag_list,
- batch_gt_instances,
- batch_gt_instances_ignore,
- stage=stage,
- unmap_outputs=unmap_outputs)
-
- # sampled points of all images
-        avg_factor = sum(
- [results.avg_factor for results in sampling_results_list])
- labels_list = images_to_levels(all_labels, num_level_proposals)
- label_weights_list = images_to_levels(all_label_weights,
- num_level_proposals)
- bbox_gt_list = images_to_levels(all_bbox_gt, num_level_proposals)
- proposals_list = images_to_levels(all_proposals, num_level_proposals)
- proposal_weights_list = images_to_levels(all_proposal_weights,
- num_level_proposals)
- res = (labels_list, label_weights_list, bbox_gt_list, proposals_list,
-               proposal_weights_list, avg_factor)
- if return_sampling_results:
- res = res + (sampling_results_list, )
-
- return res
-
- def loss_by_feat_single(self, cls_score: Tensor, pts_pred_init: Tensor,
- pts_pred_refine: Tensor, labels: Tensor,
- label_weights, bbox_gt_init: Tensor,
- bbox_weights_init: Tensor, bbox_gt_refine: Tensor,
- bbox_weights_refine: Tensor, stride: int,
- avg_factor_init: int,
- avg_factor_refine: int) -> Tuple[Tensor]:
- """Calculate the loss of a single scale level based on the features
- extracted by the detection head.
-
- Args:
- cls_score (Tensor): Box scores for each scale level
- Has shape (N, num_classes, h_i, w_i).
- pts_pred_init (Tensor): Points of shape
- (batch_size, h_i * w_i, num_points * 2).
- pts_pred_refine (Tensor): Points refined of shape
- (batch_size, h_i * w_i, num_points * 2).
- labels (Tensor): Ground truth class indices with shape
- (batch_size, h_i * w_i).
- label_weights (Tensor): Label weights of shape
- (batch_size, h_i * w_i).
- bbox_gt_init (Tensor): BBox regression targets in the init stage
- of shape (batch_size, h_i * w_i, 4).
- bbox_weights_init (Tensor): BBox regression loss weights in the
- init stage of shape (batch_size, h_i * w_i, 4).
- bbox_gt_refine (Tensor): BBox regression targets in the refine
- stage of shape (batch_size, h_i * w_i, 4).
- bbox_weights_refine (Tensor): BBox regression loss weights in the
- refine stage of shape (batch_size, h_i * w_i, 4).
- stride (int): Point stride.
- avg_factor_init (int): Average factor that is used to average
- the loss in the init stage.
- avg_factor_refine (int): Average factor that is used to average
- the loss in the refine stage.
-
- Returns:
- Tuple[Tensor]: loss components.
- """
- # classification loss
- labels = labels.reshape(-1)
- label_weights = label_weights.reshape(-1)
- cls_score = cls_score.permute(0, 2, 3,
- 1).reshape(-1, self.cls_out_channels)
- cls_score = cls_score.contiguous()
- loss_cls = self.loss_cls(
- cls_score, labels, label_weights, avg_factor=avg_factor_refine)
-
- # points loss
- bbox_gt_init = bbox_gt_init.reshape(-1, 4)
- bbox_weights_init = bbox_weights_init.reshape(-1, 4)
- bbox_pred_init = self.points2bbox(
- pts_pred_init.reshape(-1, 2 * self.num_points), y_first=False)
- bbox_gt_refine = bbox_gt_refine.reshape(-1, 4)
- bbox_weights_refine = bbox_weights_refine.reshape(-1, 4)
- bbox_pred_refine = self.points2bbox(
- pts_pred_refine.reshape(-1, 2 * self.num_points), y_first=False)
- normalize_term = self.point_base_scale * stride
- loss_pts_init = self.loss_bbox_init(
- bbox_pred_init / normalize_term,
- bbox_gt_init / normalize_term,
- bbox_weights_init,
- avg_factor=avg_factor_init)
- loss_pts_refine = self.loss_bbox_refine(
- bbox_pred_refine / normalize_term,
- bbox_gt_refine / normalize_term,
- bbox_weights_refine,
- avg_factor=avg_factor_refine)
- return loss_cls, loss_pts_init, loss_pts_refine
-
- def loss_by_feat(
- self,
- cls_scores: List[Tensor],
- pts_preds_init: List[Tensor],
- pts_preds_refine: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None
- ) -> Dict[str, Tensor]:
- """Calculate the loss based on the features extracted by the detection
- head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level,
- each is a 4D-tensor, of shape (batch_size, num_classes, h, w).
- pts_preds_init (list[Tensor]): Points for each scale level, each is
- a 3D-tensor, of shape (batch_size, h_i * w_i, num_points * 2).
- pts_preds_refine (list[Tensor]): Points refined for each scale
- level, each is a 3D-tensor, of shape
- (batch_size, h_i * w_i, num_points * 2).
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
-
- Returns:
- dict[str, Tensor]: A dictionary of loss components.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- device = cls_scores[0].device
-
- # target for initial stage
- center_list, valid_flag_list = self.get_points(featmap_sizes,
- batch_img_metas, device)
- pts_coordinate_preds_init = self.offset_to_pts(center_list,
- pts_preds_init)
- if self.train_cfg['init']['assigner']['type'] == 'PointAssigner':
- # Assign target for center list
- candidate_list = center_list
- else:
- # transform center list to bbox list and
- # assign target for bbox list
- bbox_list = self.centers_to_bboxes(center_list)
- candidate_list = bbox_list
- cls_reg_targets_init = self.get_targets(
- proposals_list=candidate_list,
- valid_flag_list=valid_flag_list,
- batch_gt_instances=batch_gt_instances,
- batch_img_metas=batch_img_metas,
- batch_gt_instances_ignore=batch_gt_instances_ignore,
- stage='init',
- return_sampling_results=False)
- (*_, bbox_gt_list_init, candidate_list_init, bbox_weights_list_init,
- avg_factor_init) = cls_reg_targets_init
-
- # target for refinement stage
- center_list, valid_flag_list = self.get_points(featmap_sizes,
- batch_img_metas, device)
- pts_coordinate_preds_refine = self.offset_to_pts(
- center_list, pts_preds_refine)
- bbox_list = []
- for i_img, center in enumerate(center_list):
- bbox = []
- for i_lvl in range(len(pts_preds_refine)):
- bbox_preds_init = self.points2bbox(
- pts_preds_init[i_lvl].detach())
- bbox_shift = bbox_preds_init * self.point_strides[i_lvl]
- bbox_center = torch.cat(
- [center[i_lvl][:, :2], center[i_lvl][:, :2]], dim=1)
- bbox.append(bbox_center +
- bbox_shift[i_img].permute(1, 2, 0).reshape(-1, 4))
- bbox_list.append(bbox)
- cls_reg_targets_refine = self.get_targets(
- proposals_list=bbox_list,
- valid_flag_list=valid_flag_list,
- batch_gt_instances=batch_gt_instances,
- batch_img_metas=batch_img_metas,
- batch_gt_instances_ignore=batch_gt_instances_ignore,
- stage='refine',
- return_sampling_results=False)
- (labels_list, label_weights_list, bbox_gt_list_refine,
- candidate_list_refine, bbox_weights_list_refine,
- avg_factor_refine) = cls_reg_targets_refine
-
- # compute loss
- losses_cls, losses_pts_init, losses_pts_refine = multi_apply(
- self.loss_by_feat_single,
- cls_scores,
- pts_coordinate_preds_init,
- pts_coordinate_preds_refine,
- labels_list,
- label_weights_list,
- bbox_gt_list_init,
- bbox_weights_list_init,
- bbox_gt_list_refine,
- bbox_weights_list_refine,
- self.point_strides,
- avg_factor_init=avg_factor_init,
- avg_factor_refine=avg_factor_refine)
- loss_dict_all = {
- 'loss_cls': losses_cls,
- 'loss_pts_init': losses_pts_init,
- 'loss_pts_refine': losses_pts_refine
- }
- return loss_dict_all
-
- # Same as base_dense_head/_get_bboxes_single except self._bbox_decode
- def _predict_by_feat_single(self,
- cls_score_list: List[Tensor],
- bbox_pred_list: List[Tensor],
- score_factor_list: List[Tensor],
- mlvl_priors: List[Tensor],
- img_meta: dict,
- cfg: ConfigDict,
- rescale: bool = False,
- with_nms: bool = True) -> InstanceData:
- """Transform outputs of a single image into bbox predictions.
-
- Args:
- cls_score_list (list[Tensor]): Box scores from all scale
- levels of a single image, each item has shape
- (num_priors * num_classes, H, W).
- bbox_pred_list (list[Tensor]): Box energies / deltas from
- all scale levels of a single image, each item has shape
- (num_priors * 4, H, W).
- score_factor_list (list[Tensor]): Score factor from all scale
- levels of a single image. RepPoints head does not need
- this value.
- mlvl_priors (list[Tensor]): Each element in the list is
- the priors of a single level in feature pyramid, has shape
- (num_priors, 2).
- img_meta (dict): Image meta info.
- cfg (:obj:`ConfigDict`): Test / postprocessing configuration,
- if None, test_cfg would be used.
- rescale (bool): If True, return boxes in original image space.
- Defaults to False.
- with_nms (bool): If True, do nms before return boxes.
- Defaults to True.
-
- Returns:
- :obj:`InstanceData`: Detection results of each image
- after the post process.
- Each item usually contains following keys.
-
- - scores (Tensor): Classification scores, has a shape
- (num_instance, )
- - labels (Tensor): Labels of bboxes, has a shape
- (num_instances, ).
- - bboxes (Tensor): Has a shape (num_instances, 4),
- the last dimension 4 arrange as (x1, y1, x2, y2).
- """
- cfg = self.test_cfg if cfg is None else cfg
- assert len(cls_score_list) == len(bbox_pred_list)
- img_shape = img_meta['img_shape']
- nms_pre = cfg.get('nms_pre', -1)
-
- mlvl_bboxes = []
- mlvl_scores = []
- mlvl_labels = []
- for level_idx, (cls_score, bbox_pred, priors) in enumerate(
- zip(cls_score_list, bbox_pred_list, mlvl_priors)):
- assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
- bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4)
-
- cls_score = cls_score.permute(1, 2,
- 0).reshape(-1, self.cls_out_channels)
- if self.use_sigmoid_cls:
- scores = cls_score.sigmoid()
- else:
- scores = cls_score.softmax(-1)[:, :-1]
-
- # After https://github.com/open-mmlab/mmdetection/pull/6268/,
- # this operation keeps fewer bboxes under the same `nms_pre`.
- # There is no difference in performance for most models. If you
- # find a slight drop in performance, you can set a larger
- # `nms_pre` than before.
- results = filter_scores_and_topk(
- scores, cfg.score_thr, nms_pre,
- dict(bbox_pred=bbox_pred, priors=priors))
- scores, labels, _, filtered_results = results
-
- bbox_pred = filtered_results['bbox_pred']
- priors = filtered_results['priors']
-
- bboxes = self._bbox_decode(priors, bbox_pred,
- self.point_strides[level_idx],
- img_shape)
-
- mlvl_bboxes.append(bboxes)
- mlvl_scores.append(scores)
- mlvl_labels.append(labels)
-
- results = InstanceData()
- results.bboxes = torch.cat(mlvl_bboxes)
- results.scores = torch.cat(mlvl_scores)
- results.labels = torch.cat(mlvl_labels)
-
- return self._bbox_post_process(
- results=results,
- cfg=cfg,
- rescale=rescale,
- with_nms=with_nms,
- img_meta=img_meta)
-
- def _bbox_decode(self, points: Tensor, bbox_pred: Tensor, stride: int,
- max_shape: Tuple[int, int]) -> Tensor:
- """Decode the prediction to bounding box.
-
- Args:
- points (Tensor): shape (h_i * w_i, 2).
- bbox_pred (Tensor): shape (h_i * w_i, 4).
- stride (int): Stride for bbox_pred in different level.
- max_shape (Tuple[int, int]): image shape.
-
- Returns:
- Tensor: Bounding boxes decoded.
- """
- bbox_pos_center = torch.cat([points[:, :2], points[:, :2]], dim=1)
- bboxes = bbox_pred * stride + bbox_pos_center
- x1 = bboxes[:, 0].clamp(min=0, max=max_shape[1])
- y1 = bboxes[:, 1].clamp(min=0, max=max_shape[0])
- x2 = bboxes[:, 2].clamp(min=0, max=max_shape[1])
- y2 = bboxes[:, 3].clamp(min=0, max=max_shape[0])
- decoded_bboxes = torch.stack([x1, y1, x2, y2], dim=-1)
- return decoded_bboxes
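For orientation, here is a minimal, hedged sketch of how a head like the `RepPointsHead` above is typically declared in an mmdet-style config dict. The values simply mirror the constructor defaults shown in its signature; `num_classes` and `in_channels` are illustrative assumptions (a COCO-style dataset and a 256-channel FPN neck), and the surrounding detector, neck, and train_cfg entries are omitted.

```python
# Hedged sketch of a bbox_head config entry for the RepPointsHead shown above.
# Values mirror the constructor defaults; num_classes/in_channels are assumptions.
bbox_head = dict(
    type='RepPointsHead',
    num_classes=80,          # dataset-dependent; 80 assumes COCO
    in_channels=256,         # assumes a 256-channel FPN neck
    point_feat_channels=256,
    num_points=9,
    gradient_mul=0.1,
    point_strides=[8, 16, 32, 64, 128],
    point_base_scale=4,
    transform_method='moment',
    moment_mul=0.01,
    loss_cls=dict(
        type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=1.0),
    loss_bbox_init=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=0.5),
    loss_bbox_refine=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0))
```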
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/dynamic_soft_label_assigner.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/dynamic_soft_label_assigner.py
deleted file mode 100644
index 04549d172bb85a4147ad8eeee16336cd4b02dab1..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/dynamic_soft_label_assigner.py
+++ /dev/null
@@ -1,227 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Optional, Tuple
-
-import torch
-import torch.nn.functional as F
-from mmengine.structures import InstanceData
-from torch import Tensor
-
-from mmdet.registry import TASK_UTILS
-from mmdet.structures.bbox import BaseBoxes
-from mmdet.utils import ConfigType
-from .assign_result import AssignResult
-from .base_assigner import BaseAssigner
-
-INF = 100000000
-EPS = 1.0e-7
-
-
-def center_of_mass(masks: Tensor, eps: float = 1e-7) -> Tensor:
- """Compute the masks center of mass.
-
- Args:
- masks: Mask tensor, has shape (num_masks, H, W).
-        eps: A small number to prevent the normalizer from being zero.
-            Defaults to 1e-7.
- Returns:
- Tensor: The masks center of mass. Has shape (num_masks, 2).
- """
- n, h, w = masks.shape
- grid_h = torch.arange(h, device=masks.device)[:, None]
- grid_w = torch.arange(w, device=masks.device)
- normalizer = masks.sum(dim=(1, 2)).float().clamp(min=eps)
- center_y = (masks * grid_h).sum(dim=(1, 2)) / normalizer
- center_x = (masks * grid_w).sum(dim=(1, 2)) / normalizer
- center = torch.cat([center_x[:, None], center_y[:, None]], dim=1)
- return center
-
-
-@TASK_UTILS.register_module()
-class DynamicSoftLabelAssigner(BaseAssigner):
- """Computes matching between predictions and ground truth with dynamic soft
- label assignment.
-
- Args:
- soft_center_radius (float): Radius of the soft center prior.
- Defaults to 3.0.
- topk (int): Select top-k predictions to calculate dynamic k
- best matches for each gt. Defaults to 13.
- iou_weight (float): The scale factor of iou cost. Defaults to 3.0.
- iou_calculator (ConfigType): Config of overlaps Calculator.
- Defaults to dict(type='BboxOverlaps2D').
- """
-
- def __init__(
- self,
- soft_center_radius: float = 3.0,
- topk: int = 13,
- iou_weight: float = 3.0,
- iou_calculator: ConfigType = dict(type='mmdet.BboxOverlaps2D')
- ) -> None:
- self.soft_center_radius = soft_center_radius
- self.topk = topk
- self.iou_weight = iou_weight
- self.iou_calculator = TASK_UTILS.build(iou_calculator)
-
- def assign(self,
- pred_instances: InstanceData,
- gt_instances: InstanceData,
- gt_instances_ignore: Optional[InstanceData] = None,
- **kwargs) -> AssignResult:
- """Assign gt to priors.
-
- Args:
- pred_instances (:obj:`InstanceData`): Instances of model
- predictions. It includes ``priors``, and the priors can
- be anchors or points, or the bboxes predicted by the
- previous stage, has shape (n, 4). The bboxes predicted by
- the current model or stage will be named ``bboxes``,
- ``labels``, and ``scores``, the same as the ``InstanceData``
- in other places.
- gt_instances (:obj:`InstanceData`): Ground truth of instance
- annotations. It usually includes ``bboxes``, with shape (k, 4),
- and ``labels``, with shape (k, ).
- gt_instances_ignore (:obj:`InstanceData`, optional): Instances
- to be ignored during training. It includes ``bboxes``
- attribute data that is ignored during training and testing.
- Defaults to None.
- Returns:
- obj:`AssignResult`: The assigned result.
- """
- gt_bboxes = gt_instances.bboxes
- gt_labels = gt_instances.labels
- num_gt = gt_bboxes.size(0)
-
- decoded_bboxes = pred_instances.bboxes
- pred_scores = pred_instances.scores
- priors = pred_instances.priors
- num_bboxes = decoded_bboxes.size(0)
-
- # assign 0 by default
- assigned_gt_inds = decoded_bboxes.new_full((num_bboxes, ),
- 0,
- dtype=torch.long)
- if num_gt == 0 or num_bboxes == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = decoded_bboxes.new_zeros((num_bboxes, ))
- if num_gt == 0:
- # No truth, assign everything to background
- assigned_gt_inds[:] = 0
- assigned_labels = decoded_bboxes.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels)
-
- prior_center = priors[:, :2]
- if isinstance(gt_bboxes, BaseBoxes):
- is_in_gts = gt_bboxes.find_inside_points(prior_center)
- else:
-            # Tensor boxes will be treated as horizontal boxes by default
- lt_ = prior_center[:, None] - gt_bboxes[:, :2]
- rb_ = gt_bboxes[:, 2:] - prior_center[:, None]
-
- deltas = torch.cat([lt_, rb_], dim=-1)
- is_in_gts = deltas.min(dim=-1).values > 0
-
- valid_mask = is_in_gts.sum(dim=1) > 0
-
- valid_decoded_bbox = decoded_bboxes[valid_mask]
- valid_pred_scores = pred_scores[valid_mask]
- num_valid = valid_decoded_bbox.size(0)
-
- if num_valid == 0:
- # No ground truth or boxes, return empty assignment
- max_overlaps = decoded_bboxes.new_zeros((num_bboxes, ))
- assigned_labels = decoded_bboxes.new_full((num_bboxes, ),
- -1,
- dtype=torch.long)
- return AssignResult(
- num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels)
- if hasattr(gt_instances, 'masks'):
- gt_center = center_of_mass(gt_instances.masks, eps=EPS)
- elif isinstance(gt_bboxes, BaseBoxes):
- gt_center = gt_bboxes.centers
- else:
-            # Tensor boxes will be treated as horizontal boxes by default
- gt_center = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2.0
- valid_prior = priors[valid_mask]
- strides = valid_prior[:, 2]
- distance = (valid_prior[:, None, :2] - gt_center[None, :, :]
- ).pow(2).sum(-1).sqrt() / strides[:, None]
- soft_center_prior = torch.pow(10, distance - self.soft_center_radius)
-
- pairwise_ious = self.iou_calculator(valid_decoded_bbox, gt_bboxes)
- iou_cost = -torch.log(pairwise_ious + EPS) * self.iou_weight
-
- gt_onehot_label = (
- F.one_hot(gt_labels.to(torch.int64),
- pred_scores.shape[-1]).float().unsqueeze(0).repeat(
- num_valid, 1, 1))
- valid_pred_scores = valid_pred_scores.unsqueeze(1).repeat(1, num_gt, 1)
-
- soft_label = gt_onehot_label * pairwise_ious[..., None]
- scale_factor = soft_label - valid_pred_scores.sigmoid()
- soft_cls_cost = F.binary_cross_entropy_with_logits(
- valid_pred_scores, soft_label,
- reduction='none') * scale_factor.abs().pow(2.0)
- soft_cls_cost = soft_cls_cost.sum(dim=-1)
-
- cost_matrix = soft_cls_cost + iou_cost + soft_center_prior
-
- matched_pred_ious, matched_gt_inds = self.dynamic_k_matching(
- cost_matrix, pairwise_ious, num_gt, valid_mask)
-
- # convert to AssignResult format
- assigned_gt_inds[valid_mask] = matched_gt_inds + 1
- assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1)
- assigned_labels[valid_mask] = gt_labels[matched_gt_inds].long()
- max_overlaps = assigned_gt_inds.new_full((num_bboxes, ),
- -INF,
- dtype=torch.float32)
- max_overlaps[valid_mask] = matched_pred_ious
- return AssignResult(
- num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels)
-
- def dynamic_k_matching(self, cost: Tensor, pairwise_ious: Tensor,
- num_gt: int,
- valid_mask: Tensor) -> Tuple[Tensor, Tensor]:
- """Use IoU and matching cost to calculate the dynamic top-k positive
- targets. Same as SimOTA.
-
- Args:
- cost (Tensor): Cost matrix.
- pairwise_ious (Tensor): Pairwise iou matrix.
- num_gt (int): Number of gt.
- valid_mask (Tensor): Mask for valid bboxes.
-
- Returns:
- tuple: matched ious and gt indexes.
- """
- matching_matrix = torch.zeros_like(cost, dtype=torch.uint8)
- # select candidate topk ious for dynamic-k calculation
- candidate_topk = min(self.topk, pairwise_ious.size(0))
- topk_ious, _ = torch.topk(pairwise_ious, candidate_topk, dim=0)
- # calculate dynamic k for each gt
- dynamic_ks = torch.clamp(topk_ious.sum(0).int(), min=1)
- for gt_idx in range(num_gt):
- _, pos_idx = torch.topk(
- cost[:, gt_idx], k=dynamic_ks[gt_idx], largest=False)
- matching_matrix[:, gt_idx][pos_idx] = 1
-
- del topk_ious, dynamic_ks, pos_idx
-
- prior_match_gt_mask = matching_matrix.sum(1) > 1
- if prior_match_gt_mask.sum() > 0:
- cost_min, cost_argmin = torch.min(
- cost[prior_match_gt_mask, :], dim=1)
- matching_matrix[prior_match_gt_mask, :] *= 0
- matching_matrix[prior_match_gt_mask, cost_argmin] = 1
- # get foreground mask inside box and center prior
- fg_mask_inboxes = matching_matrix.sum(1) > 0
- valid_mask[valid_mask.clone()] = fg_mask_inboxes
-
- matched_gt_inds = matching_matrix[fg_mask_inboxes, :].argmax(1)
- matched_pred_ious = (matching_matrix *
- pairwise_ious).sum(1)[fg_mask_inboxes]
- return matched_pred_ious, matched_gt_inds
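For context, the assigner above is normally built through `TASK_UTILS` from a plain config dict. A hedged sketch of a `train_cfg` entry follows; the keys mirror the constructor arguments documented above, and everything around it (detector, head, other train_cfg fields) is assumed and not shown.

```python
# Hedged sketch: the assigner is instantiated from a config dict whose keys
# mirror the constructor arguments of DynamicSoftLabelAssigner.
train_cfg = dict(
    assigner=dict(
        type='DynamicSoftLabelAssigner',
        soft_center_radius=3.0,
        topk=13,
        iou_weight=3.0,
        iou_calculator=dict(type='mmdet.BboxOverlaps2D')))
```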
diff --git a/spaces/LanguageBind/LanguageBind/i_cls/zero_shot.py b/spaces/LanguageBind/LanguageBind/i_cls/zero_shot.py
deleted file mode 100644
index 895acff9afc34e4b463ce4fdf5dacdb1eaff24b3..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/i_cls/zero_shot.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import logging
-
-import torch
-import torch.nn.functional as F
-from tqdm import tqdm
-
-from open_clip import get_input_dtype, get_tokenizer, build_zero_shot_classifier, \
- IMAGENET_CLASSNAMES, OPENAI_IMAGENET_TEMPLATES
-from open_clip.factory import HF_HUB_PREFIX
-from .precision import get_autocast
-
-
-def accuracy(output, target, topk=(1,)):
- pred = output.topk(max(topk), 1, True, True)[1].t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
- return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk]
-
-
-def run(model, classifier, dataloader, args):
- autocast = get_autocast(args.precision)
- input_dtype = get_input_dtype(args.precision)
-
- with torch.no_grad():
- top1, top5, n = 0., 0., 0.
- for images, target in tqdm(dataloader, unit_scale=args.batch_size):
- images = images.to(device=args.device, dtype=input_dtype)
- images = images.unsqueeze(2)
- target = target.to(args.device)
-
- with autocast():
- # predict
- output = model(image=images)
- image_features = output['image_features'] if isinstance(output, dict) else output[0]
- logits = 100. * image_features @ classifier
-
- # measure accuracy
- acc1, acc5 = accuracy(logits, target, topk=(1, 5))
- top1 += acc1
- top5 += acc5
- n += images.size(0)
-
- top1 = (top1 / n)
- top5 = (top5 / n)
- return top1, top5
-
-
-def zero_shot_eval(model, data, epoch, args):
- if 'imagenet-val' not in data and 'imagenet-v2' not in data:
- return {}
- if args.zeroshot_frequency == 0:
- return {}
- if (epoch % args.zeroshot_frequency) != 0 and epoch != args.epochs:
- return {}
- if args.distributed and not args.horovod:
- model = model.module
-
- logging.info('Starting zero-shot imagenet.')
-
- logging.info('Building zero-shot classifier')
- autocast = get_autocast(args.precision)
- with autocast():
- tokenizer = get_tokenizer(HF_HUB_PREFIX+args.model, cache_dir=args.cache_dir)
- # tokenizer = get_tokenizer("ViT-L-14")
- classifier = build_zero_shot_classifier(
- model,
- tokenizer=tokenizer,
- classnames=IMAGENET_CLASSNAMES,
- templates=OPENAI_IMAGENET_TEMPLATES,
- num_classes_per_batch=10,
- device=args.device,
- use_tqdm=True,
- )
-
- logging.info('Using classifier')
- results = {}
- if 'imagenet-val' in data:
- top1, top5 = run(model, classifier, data['imagenet-val'].dataloader, args)
- results['imagenet-zeroshot-val-top1'] = top1
- results['imagenet-zeroshot-val-top5'] = top5
- if 'imagenet-v2' in data:
- top1, top5 = run(model, classifier, data['imagenet-v2'].dataloader, args)
- results['imagenetv2-zeroshot-val-top1'] = top1
- results['imagenetv2-zeroshot-val-top5'] = top5
-
- logging.info('Finished zero-shot imagenet.')
-
- return results
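One detail worth noting in the evaluation code above: `accuracy` returns raw correct counts rather than percentages, which is why `run` accumulates them and divides by `n` afterwards. A small hedged check, assuming only `torch` and the `accuracy` helper defined above:

```python
# Hedged sanity check of accuracy(): sample 0 (true class 1) is hit at top-1,
# sample 1 (true class 2) is missed even at top-2, so the counts are [1.0, 1.0].
import torch

output = torch.tensor([[0.1, 0.7, 0.2],
                       [0.6, 0.3, 0.1]])
target = torch.tensor([1, 2])
print(accuracy(output, target, topk=(1, 2)))  # [1.0, 1.0]
```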
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/loader_themes.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/loader_themes.py
deleted file mode 100644
index a2884bc2a55b1f342847baae4c395e40dba40bfa..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/loader_themes.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import ast
-import json
-import os
-import importlib
-import logging
-logger = logging.getLogger(__name__)
-
-folder = os.path.dirname(os.path.abspath(__file__))
-folder = os.path.dirname(folder)
-folder = os.path.dirname(folder)
-folder = os.path.join(folder, "assets", "themes")
-
-import sys
-sys.path.append(folder)
-
-def get_class(file_name, class_name):
- with open(file_name, 'r') as file:
- content = file.read()
- syntax_tree = ast.parse(content)
-
- for node in ast.walk(syntax_tree):
- if isinstance(node, ast.ClassDef) and node.name == class_name:
- return node
-
- return None
-
-def get_list():
- themes_list = [
- os.path.splitext(name)[0]
- for root, _, files in os.walk(folder, topdown=False)
- for name in files
- if name.endswith(".py") and root == folder
- ]
- return themes_list
-
-def select_theme(name):
- selected_file = name + ".py"
- class_name = name
- full_path = os.path.join(folder, selected_file)
- class_found = get_class(full_path, class_name)
- if class_found:
- with open(os.path.join(folder, 'theme.json'), 'w') as json_file:
- json.dump({"file": selected_file, "class": class_name}, json_file)
- logger.info(f"Theme {class_name} successfully selected, restart applio.")
- else:
-        logger.warning(f"Theme {class_name} was not found.")
-
-def read_json():
- json_file_name = os.path.join(folder, 'theme.json')
- try:
- with open(json_file_name, 'r') as json_file:
- data = json.load(json_file)
- selected_file = data.get("file")
- class_name = data.get("class")
- if selected_file and class_name:
- return class_name
- else:
- return ""
- except:
- return "applio"
-
-def load_json():
- json_file_name = os.path.join(folder, 'theme.json')
- try:
- with open(json_file_name, 'r') as json_file:
- data = json.load(json_file)
- selected_file = data.get("file")
- class_name = data.get("class")
- if selected_file and class_name:
- module = importlib.import_module(selected_file[:-3])
- obtained_class = getattr(module, class_name)
- instance = obtained_class()
- logger.info(f"Theme Loaded: {class_name}")
- return instance
- else:
-                logger.warning("The theme is incorrect.")
- return None
- except Exception as e:
- logger.warning(f"Error Loading: {str(e)}")
- return None
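A hedged usage sketch of the theme loader above; the import path follows the file location in this repository and the theme name is purely illustrative:

```python
# Hypothetical usage; "applio" is only an example theme name.
from lib.tools import loader_themes

print(loader_themes.get_list())       # list the .py theme modules under assets/themes
loader_themes.select_theme("applio")  # persists the choice to assets/themes/theme.json
theme = loader_themes.load_json()     # imports the selected module and instantiates its class
```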
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/ner/bert_softmax/README.md b/spaces/Loren/Streamlit_OCR_comparator/configs/ner/bert_softmax/README.md
deleted file mode 100644
index 650d18c4d56406e5f064085229f49875f5b4aea5..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/ner/bert_softmax/README.md
+++ /dev/null
@@ -1,47 +0,0 @@
-# Bert
-
-> [Bert: Pre-training of deep bidirectional transformers for language understanding](https://arxiv.org/abs/1810.04805)
-
-
-
-## Abstract
-
-We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
-BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
-
-
-
-
-
-
-
-## Dataset
-
-### Train Dataset
-
-| trainset | text_num | entity_num |
-| :---------: | :------: | :--------: |
-| CLUENER2020 | 10748 | 23338 |
-
-### Test Dataset
-
-| testset | text_num | entity_num |
-| :---------: | :------: | :--------: |
-| CLUENER2020 | 1343 | 2982 |
-
-## Results and models
-
-| Method | Pretrain | Precision | Recall | F1-Score | Download |
-| :-------------------------------------------------------: | :----------------------------------------------------------: | :-------: | :----: | :------: | :----------------------------------------------------------: |
-| [bert_softmax](/configs/ner/bert_softmax/bert_softmax_cluener_18e.py) | [pretrain](https://download.openmmlab.com/mmocr/ner/bert_softmax/bert_pretrain.pth) | 0.7885 | 0.7998 | 0.7941 | [model](https://download.openmmlab.com/mmocr/ner/bert_softmax/bert_softmax_cluener-eea70ea2.pth) \| [log](https://download.openmmlab.com/mmocr/ner/bert_softmax/20210514_172645.log.json) |
-
-## Citation
-
-```bibtex
-@article{devlin2018bert,
- title={Bert: Pre-training of deep bidirectional transformers for language understanding},
- author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
- journal={arXiv preprint arXiv:1810.04805},
- year={2018}
-}
-```
diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/models/tokenization_moss.py b/spaces/Luelll/ChuanhuChatGPT/modules/models/tokenization_moss.py
deleted file mode 100644
index 626315eb9e429ada99a15b04b9736c05e6743ffe..0000000000000000000000000000000000000000
--- a/spaces/Luelll/ChuanhuChatGPT/modules/models/tokenization_moss.py
+++ /dev/null
@@ -1,368 +0,0 @@
-"""Tokenization classes for Moss"""
-
-import json
-import os
-import numpy as np
-import regex as re
-
-from functools import lru_cache
-from typing import TYPE_CHECKING, List, Optional, Tuple, Union
-
-from transformers.utils import is_tf_available, is_torch_available, logging
-from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
-
-
-if TYPE_CHECKING:
- if is_torch_available():
- import torch
- if is_tf_available():
- import tensorflow as tf
-
-
-logger = logging.get_logger(__name__)
-
-VOCAB_FILES_NAMES = {
- "vocab_file": "vocab.json",
- "merges_file": "merges.txt",
-}
-
-PRETRAINED_VOCAB_FILES_MAP = {
- "vocab_file": {
- "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/vocab.json",
- "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/vocab.json",
- "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/vocab.json",
- },
- "merges_file": {
- "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/merges.txt",
- "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/merges.txt",
- "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/merges.txt",
- },
-}
-
-PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
- "fnlp/moss-moon-003-base": 2048,
- "fnlp/moss-moon-003-sft": 2048,
- "fnlp/moss-moon-003-sft-plugin": 2048,
-}
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
-    Returns a list of utf-8 bytes and a mapping to unicode strings. We specifically avoid mapping to whitespace/control
-    characters that the bpe code barfs on.
-
- The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
- if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
- decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
- tables between utf-8 bytes and unicode strings.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
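As a concrete illustration of the mapping described above (a hedged check, not part of the original file): printable bytes map to themselves, while bytes outside the printable ranges, such as the space byte, are shifted into the 256+ block, which is where the familiar GPT-2-style "Ġ" space marker comes from.

```python
# Hedged check of bytes_to_unicode(): byte 65 ("A") is printable and maps to itself,
# while byte 32 (" ") is remapped to chr(256 + 32) == "Ġ".
mapping = bytes_to_unicode()
assert mapping[ord("A")] == "A"
assert mapping[ord(" ")] == "Ġ"
```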
-
-
-def get_pairs(word):
- """
- Return set of symbol pairs in a word.
-
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-class MossTokenizer(PreTrainedTokenizer):
- """
- Construct a Moss tokenizer. Based on byte-level Byte-Pair-Encoding.
-
-    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
-    be encoded differently depending on whether it is at the beginning of the sentence (without space) or not.
-
- You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
- call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
-
-
-
- When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
-
-
-
- This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
- this superclass for more information regarding those methods.
-
- Args:
- vocab_file (`str`):
- Path to the vocabulary file.
- merges_file (`str`):
- Path to the merges file.
- errors (`str`, *optional*, defaults to `"replace"`):
- Paradigm to follow when decoding bytes to UTF-8. See
- [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
- unk_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
- token instead.
- bos_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The beginning of sequence token.
- eos_token (`str`, *optional*, defaults to `<|endoftext|>`):
- The end of sequence token.
- add_prefix_space (`bool`, *optional*, defaults to `False`):
-            Whether or not to add an initial space to the input. This allows treating the leading word just like any
-            other word. (The Moss tokenizer detects the beginning of words by the preceding space.)
- """
-
- vocab_files_names = VOCAB_FILES_NAMES
- pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
- max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
- model_input_names = ["input_ids", "attention_mask"]
-
- def __init__(
- self,
- vocab_file,
- merges_file,
- errors="replace",
- unk_token="<|endoftext|>",
- bos_token="<|endoftext|>",
- eos_token="",
- pad_token=None,
- add_prefix_space=False,
- add_bos_token=False,
- **kwargs,
- ):
- bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
- eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
- unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
- pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
- super().__init__(
- errors=errors,
- unk_token=unk_token,
- bos_token=bos_token,
- eos_token=eos_token,
- pad_token=pad_token,
- add_prefix_space=add_prefix_space,
- add_bos_token=add_bos_token,
- **kwargs,
- )
- self.add_bos_token = add_bos_token
-
- with open(vocab_file, encoding="utf-8") as vocab_handle:
- self.encoder = json.load(vocab_handle)
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- with open(merges_file, encoding="utf-8") as merges_handle:
- bpe_merges = merges_handle.read().split("\n")[1:-1]
- bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
- self.add_prefix_space = add_prefix_space
-
- # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")
-
- @property
- def vocab_size(self):
- return len(self.encoder)
-
- def get_vocab(self):
- return dict(self.encoder, **self.added_tokens_encoder)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- except ValueError:
- new_word.extend(word[i:])
- break
- else:
- new_word.extend(word[i:j])
- i = j
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
- if self.add_bos_token:
- bos_token_ids = [self.bos_token_id]
- else:
- bos_token_ids = []
-
- output = bos_token_ids + token_ids_0
-
- if token_ids_1 is None:
- return output
-
- return output + bos_token_ids + token_ids_1
-
- def _tokenize(self, text):
- """Tokenize a string."""
- bpe_tokens = []
- for token in re.findall(self.pat, text):
- token = "".join(
- self.byte_encoder[b] for b in token.encode("utf-8")
- ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
- bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
- return bpe_tokens
-
- def _convert_token_to_id(self, token):
- """Converts a token (str) in an id using the vocab."""
- return self.encoder.get(token, self.encoder.get(self.unk_token))
-
- def _convert_id_to_token(self, index):
- """Converts an index (integer) in a token (str) using the vocab."""
- return self.decoder.get(index)
-
- def convert_tokens_to_string(self, tokens):
- """Converts a sequence of tokens (string) in a single string."""
- text = "".join(tokens)
- text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
- return text
-
- def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
- if not os.path.isdir(save_directory):
- logger.error(f"Vocabulary path ({save_directory}) should be a directory")
- return
- vocab_file = os.path.join(
- save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
- )
- merge_file = os.path.join(
- save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
- )
-
- with open(vocab_file, "w", encoding="utf-8") as f:
- f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
-
- index = 0
- with open(merge_file, "w", encoding="utf-8") as writer:
- writer.write("#version: 0.2\n")
- for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
- if index != token_index:
- logger.warning(
- f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
- " Please check that the tokenizer is not corrupted!"
- )
- index = token_index
- writer.write(" ".join(bpe_tokens) + "\n")
- index += 1
-
- return vocab_file, merge_file
-
- def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
- add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
- if is_split_into_words or add_prefix_space:
- text = " " + text
- return (text, kwargs)
-
- def decode(
- self,
- token_ids: Union[int, List[int], "np.ndarray", "torch.Tensor", "tf.Tensor"],
- skip_special_tokens: bool = False,
- clean_up_tokenization_spaces: bool = None,
- truncate_before_pattern: Optional[List[str]] = None,
- **kwargs,
- ) -> str:
- """
- Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special
- tokens and clean up tokenization spaces.
-
- Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
-
- Args:
- token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
- List of tokenized input ids. Can be obtained using the `__call__` method.
- skip_special_tokens (`bool`, *optional*, defaults to `False`):
- Whether or not to remove special tokens in the decoding.
- clean_up_tokenization_spaces (`bool`, *optional*):
- Whether or not to clean up the tokenization spaces. If `None`, will default to
- `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`).
- truncate_before_pattern (`List[str]`, *optional*, defaults to `None`):
- A list of regular expression strings that will be used to truncate the returned string. This can be
- used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning
- of a new line). An example pattern could be `["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]`.
- kwargs (additional keyword arguments, *optional*):
- Will be passed to the underlying model specific decode method.
-
- Returns:
- `str`: The decoded sentence.
- """
- decoded_text = super()._decode(
- token_ids=token_ids,
- skip_special_tokens=skip_special_tokens,
- clean_up_tokenization_spaces=clean_up_tokenization_spaces,
- **kwargs,
- )
-
- if truncate_before_pattern is not None and len(truncate_before_pattern) > 0:
- decoded_text = self.truncate(decoded_text, truncate_before_pattern)
-
- return decoded_text
-
- def truncate(self, completion, truncate_before_pattern):
- def find_re(string, pattern, start_pos):
- m = pattern.search(string, start_pos)
- return m.start() if m else -1
-
- terminals = [re.compile(pattern, re.MULTILINE) for pattern in truncate_before_pattern]
-
- prints = list(re.finditer("^print", completion, re.MULTILINE))
-
- if len(prints) > 1:
- completion = completion[: prints[1].start()]
-
- defs = list(re.finditer("^def", completion, re.MULTILINE))
-
- if len(defs) > 1:
- completion = completion[: defs[1].start()]
-
- start_pos = 0
-
- terminals_pos = [
- pos for pos in [find_re(completion, terminal, start_pos) for terminal in terminals] if pos != -1
- ]
-
- if len(terminals_pos) > 0:
- return completion[: min(terminals_pos)]
- else:
- return completion
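The `bpe()` method above implements the standard byte-level BPE merge loop: it repeatedly picks the adjacent symbol pair with the lowest rank in `bpe_ranks` and merges it until no ranked pair remains. Below is a minimal, self-contained sketch of that loop; the merge table is a toy stand-in rather than the real MOSS merges file, and the tokenizer's caching and byte-to-unicode mapping are omitted.

```python
# Simplified sketch of the greedy BPE merge loop used by the tokenizer above.
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a word given as a tuple."""
    return {(word[i], word[i + 1]) for i in range(len(word) - 1)}

def bpe(token, bpe_ranks):
    word = tuple(token)
    while len(word) > 1:
        pairs = get_pairs(word)
        # Pick the adjacent pair with the lowest merge rank (earliest learned merge).
        bigram = min(pairs, key=lambda p: bpe_ranks.get(p, float("inf")))
        if bigram not in bpe_ranks:
            break
        first, second = bigram
        merged, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and word[i] == first and word[i + 1] == second:
                merged.append(first + second)
                i += 2
            else:
                merged.append(word[i])
                i += 1
        word = tuple(merged)
    return " ".join(word)

# Toy merge ranks: lower rank = merged earlier (not the real MOSS merges).
toy_ranks = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe("lower", toy_ranks))  # -> "low er"
```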
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py
deleted file mode 100644
index 998223a0e0242dc4a5b2fcd74af79dc7232794da..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/sync_batchnorm/unittest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : unittest.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import unittest
-import torch
-
-
-class TorchTestCase(unittest.TestCase):
- def assertTensorClose(self, x, y):
- adiff = float((x - y).abs().max())
- if (y == 0).all():
- rdiff = 'NaN'
- else:
- rdiff = float((adiff / y).abs().max())
-
- message = (
- 'Tensor close check failed\n'
- 'adiff={}\n'
- 'rdiff={}\n'
- ).format(adiff, rdiff)
- self.assertTrue(torch.allclose(x, y, atol=1e-5, rtol=1e-3), message)
-
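A hypothetical usage sketch for the `TorchTestCase` helper above: subclass it and call `assertTensorClose` inside a test. The import path mirrors the deleted module's location and is an assumption, not a guaranteed public API.

```python
import unittest
import torch
from sync_batchnorm.unittest import TorchTestCase  # assumed path of the module above

class SyncBNOutputTest(TorchTestCase):
    def test_close_tensors_pass(self):
        x = torch.randn(8, 4)
        ref = x.clone()
        # Passes: the tensors differ by far less than atol=1e-5 / rtol=1e-3.
        self.assertTensorClose(x, ref + 1e-7)

if __name__ == "__main__":
    unittest.main()
```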
diff --git a/spaces/MLIFY/Chatter/index.html b/spaces/MLIFY/Chatter/index.html
deleted file mode 100644
index 5ca64522e35450606f474de92d270781e67609f9..0000000000000000000000000000000000000000
--- a/spaces/MLIFY/Chatter/index.html
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-
-
-
- Chatter
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/header.tsx b/spaces/Makiing/coolb-in-gtest/src/components/header.tsx
deleted file mode 100644
index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/components/header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from 'react'
-import { UserMenu } from './user-menu'
-
-export async function Header() {
- return (
-
-
-
-
-
- )
-}
diff --git a/spaces/MaplePanda/Gstable-diffusion-2-1/README.md b/spaces/MaplePanda/Gstable-diffusion-2-1/README.md
deleted file mode 100644
index dbef72e608ce093c229586f446d9c7d6db07bd47..0000000000000000000000000000000000000000
--- a/spaces/MaplePanda/Gstable-diffusion-2-1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gstable Diffusion 2 1
-emoji: 🦀
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/roi_align_rotated.py
deleted file mode 100644
index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/roi_align_rotated.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward'])
-
-
-class RoIAlignRotatedFunction(Function):
-
- @staticmethod
- def symbolic(g, features, rois, out_size, spatial_scale, sample_num,
- aligned, clockwise):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- return g.op(
- 'mmcv::MMCVRoIAlignRotated',
- features,
- rois,
- output_height_i=out_h,
- output_width_i=out_w,
- spatial_scale_f=spatial_scale,
- sampling_ratio_i=sample_num,
- aligned_i=aligned,
- clockwise_i=clockwise)
-
- @staticmethod
- def forward(ctx,
- features,
- rois,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- ctx.spatial_scale = spatial_scale
- ctx.sample_num = sample_num
- ctx.aligned = aligned
- ctx.clockwise = clockwise
- ctx.save_for_backward(rois)
- ctx.feature_size = features.size()
-
- batch_size, num_channels, data_height, data_width = features.size()
- num_rois = rois.size(0)
-
- output = features.new_zeros(num_rois, num_channels, out_h, out_w)
- ext_module.roi_align_rotated_forward(
- features,
- rois,
- output,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- feature_size = ctx.feature_size
- spatial_scale = ctx.spatial_scale
- aligned = ctx.aligned
- clockwise = ctx.clockwise
- sample_num = ctx.sample_num
- rois = ctx.saved_tensors[0]
- assert feature_size is not None
- batch_size, num_channels, data_height, data_width = feature_size
-
- out_w = grad_output.size(3)
- out_h = grad_output.size(2)
-
- grad_input = grad_rois = None
-
- if ctx.needs_input_grad[0]:
- grad_input = rois.new_zeros(batch_size, num_channels, data_height,
- data_width)
- ext_module.roi_align_rotated_backward(
- grad_output.contiguous(),
- rois,
- grad_input,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return grad_input, grad_rois, None, None, None, None, None
-
-
-roi_align_rotated = RoIAlignRotatedFunction.apply
-
-
-class RoIAlignRotated(nn.Module):
- """RoI align pooling layer for rotated proposals.
-
- It accepts a feature map of shape (N, C, H, W) and rois with shape
- (n, 6) with each roi decoded as (batch_index, center_x, center_y,
- w, h, angle). The angle is in radians.
-
- Args:
- out_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
- sample_num (int): number of input samples to take for each
- output sample. 0 to take samples densely for current models.
- aligned (bool): if False, use the legacy implementation in
- MMDetection. If True, align the results more precisely.
- Default: True.
- clockwise (bool): If True, the angle in each proposal follows a
- clockwise fashion in image space, otherwise, the angle is
- counterclockwise. Default: False.
-
- Note:
- The implementation of RoIAlign when aligned=True is modified from
- https://github.com/facebookresearch/detectron2/
-
- The meaning of aligned=True:
-
- Given a continuous coordinate c, its two neighboring pixel
- indices (in our pixel model) are computed by floor(c - 0.5) and
- ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete
- indices [0] and [1] (which are sampled from the underlying signal
- at continuous coordinates 0.5 and 1.5). But the original roi_align
- (aligned=False) does not subtract the 0.5 when computing
- neighboring pixel indices and therefore it uses pixels with a
- slightly incorrect alignment (relative to our pixel model) when
- performing bilinear interpolation.
-
- With `aligned=True`,
- we first appropriately scale the ROI and then shift it by -0.5
- prior to calling roi_align. This produces the correct neighbors;
-
- The difference does not affect the model's performance if
- ROIAlign is used together with conv layers.
- """
-
- def __init__(self,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- super(RoIAlignRotated, self).__init__()
-
- self.out_size = out_size
- self.spatial_scale = float(spatial_scale)
- self.sample_num = int(sample_num)
- self.aligned = aligned
- self.clockwise = clockwise
-
- def forward(self, features, rois):
- return RoIAlignRotatedFunction.apply(features, rois, self.out_size,
- self.spatial_scale,
- self.sample_num, self.aligned,
- self.clockwise)
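A usage sketch for the `RoIAlignRotated` layer above, assuming the bundled compiled `_ext` ops are importable from the same (now deleted) module path. Each RoI row follows the docstring layout: `(batch_index, center_x, center_y, w, h, angle)` with the angle in radians.

```python
import math
import torch
# Assumed import path; mirrors the deleted module's location inside the space.
from annotator.uniformer.mmcv.ops.roi_align_rotated import RoIAlignRotated

feats = torch.randn(2, 256, 64, 64)            # (N, C, H, W) feature map
rois = torch.tensor([
    [0, 32.0, 32.0, 20.0, 10.0, math.pi / 6],  # RoI in image 0, rotated 30 degrees
    [1, 16.0, 48.0, 12.0, 24.0, 0.0],          # axis-aligned RoI in image 1
])

# Move feats and rois to .cuda() first if the extension was built for GPU only.
layer = RoIAlignRotated(out_size=(7, 7), spatial_scale=1.0, sample_num=2)
pooled = layer(feats, rois)
print(pooled.shape)  # torch.Size([2, 256, 7, 7])
```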
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/dm_head.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
deleted file mode 100644
index 19c963923126b53ce22f60813540a35badf24b3d..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/dm_head.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, build_activation_layer, build_norm_layer
-
-from ..builder import HEADS
-from .decode_head import BaseDecodeHead
-
-
-class DCM(nn.Module):
- """Dynamic Convolutional Module used in DMNet.
-
- Args:
- filter_size (int): The filter size of generated convolution kernel
- used in Dynamic Convolutional Module.
- fusion (bool): Add one conv to fuse DCM output feature.
- in_channels (int): Input channels.
- channels (int): Channels after modules, before conv_seg.
- conv_cfg (dict | None): Config of conv layers.
- norm_cfg (dict | None): Config of norm layers.
- act_cfg (dict): Config of activation layers.
- """
-
- def __init__(self, filter_size, fusion, in_channels, channels, conv_cfg,
- norm_cfg, act_cfg):
- super(DCM, self).__init__()
- self.filter_size = filter_size
- self.fusion = fusion
- self.in_channels = in_channels
- self.channels = channels
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
- self.filter_gen_conv = nn.Conv2d(self.in_channels, self.channels, 1, 1,
- 0)
-
- self.input_redu_conv = ConvModule(
- self.in_channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- if self.norm_cfg is not None:
- self.norm = build_norm_layer(self.norm_cfg, self.channels)[1]
- else:
- self.norm = None
- self.activate = build_activation_layer(self.act_cfg)
-
- if self.fusion:
- self.fusion_conv = ConvModule(
- self.channels,
- self.channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, x):
- """Forward function."""
- generated_filter = self.filter_gen_conv(
- F.adaptive_avg_pool2d(x, self.filter_size))
- x = self.input_redu_conv(x)
- b, c, h, w = x.shape
- # [1, b * c, h, w], c = self.channels
- x = x.view(1, b * c, h, w)
- # [b * c, 1, filter_size, filter_size]
- generated_filter = generated_filter.view(b * c, 1, self.filter_size,
- self.filter_size)
- pad = (self.filter_size - 1) // 2
- if (self.filter_size - 1) % 2 == 0:
- p2d = (pad, pad, pad, pad)
- else:
- p2d = (pad + 1, pad, pad + 1, pad)
- x = F.pad(input=x, pad=p2d, mode='constant', value=0)
- # [1, b * c, h, w]
- output = F.conv2d(input=x, weight=generated_filter, groups=b * c)
- # [b, c, h, w]
- output = output.view(b, c, h, w)
- if self.norm is not None:
- output = self.norm(output)
- output = self.activate(output)
-
- if self.fusion:
- output = self.fusion_conv(output)
-
- return output
-
-
-@HEADS.register_module()
-class DMHead(BaseDecodeHead):
- """Dynamic Multi-scale Filters for Semantic Segmentation.
-
- This head is the implementation of
- `DMNet `_.
-
- Args:
- filter_sizes (tuple[int]): The size of generated convolutional filters
- used in Dynamic Convolutional Module. Default: (1, 3, 5, 7).
- fusion (bool): Add one conv to fuse DCM output feature.
- """
-
- def __init__(self, filter_sizes=(1, 3, 5, 7), fusion=False, **kwargs):
- super(DMHead, self).__init__(**kwargs)
- assert isinstance(filter_sizes, (list, tuple))
- self.filter_sizes = filter_sizes
- self.fusion = fusion
- dcm_modules = []
- for filter_size in self.filter_sizes:
- dcm_modules.append(
- DCM(filter_size,
- self.fusion,
- self.in_channels,
- self.channels,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- self.dcm_modules = nn.ModuleList(dcm_modules)
- self.bottleneck = ConvModule(
- self.in_channels + len(filter_sizes) * self.channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- dcm_outs = [x]
- for dcm_module in self.dcm_modules:
- dcm_outs.append(dcm_module(x))
- dcm_outs = torch.cat(dcm_outs, dim=1)
- output = self.bottleneck(dcm_outs)
- output = self.cls_seg(output)
- return output
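The key idea in `DCM.forward` above is a grouped-convolution trick: the batch is folded into the channel dimension and `F.conv2d(..., groups=b * c)` is applied so each (sample, channel) slice is filtered by its own dynamically generated kernel. A minimal standalone sketch of just that trick, using an odd filter size so the padding is symmetric:

```python
import torch
import torch.nn.functional as F

b, c, h, w, k = 2, 4, 8, 8, 3
x = torch.randn(b, c, h, w)
filters = torch.randn(b * c, 1, k, k)            # one k x k kernel per (sample, channel)

x_folded = x.view(1, b * c, h, w)                # [1, b*c, h, w]
pad = (k - 1) // 2
x_folded = F.pad(x_folded, (pad, pad, pad, pad))
out = F.conv2d(x_folded, filters, groups=b * c)  # each group sees exactly one channel
out = out.view(b, c, h, w)
print(out.shape)                                 # torch.Size([2, 4, 8, 8])

# Sanity check: the first output channel equals a plain conv of the first input
# channel with the first kernel.
ref = F.conv2d(F.pad(x[0:1, 0:1], (pad, pad, pad, pad)), filters[0:1])
print(torch.allclose(out[0:1, 0:1], ref, atol=1e-5))  # expected: True
```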
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/app.py b/spaces/Mellow-ai/PhotoAI_Mellow/app.py
deleted file mode 100644
index 631167c02edb117695a48d7ca0ef3660505f85ef..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/app.py
+++ /dev/null
@@ -1,877 +0,0 @@
-from share import *
-import config
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-###
-import cv2
-import gradio as gr
-import os
-from PIL import Image
-import numpy as np
-import torch
-from torch.autograd import Variable
-from torchvision import transforms
-import torch.nn.functional as F
-import gdown
-import matplotlib.pyplot as plt
-import warnings
-
-###
-
-from pytorch_lightning import seed_everything
-from annotator.util import resize_image, HWC3
-from annotator.hed import HEDdetector, nms
-from cldm.model import create_model, load_state_dict
-from cldm.ddim_hacked import DDIMSampler
-
-apply_hed = HEDdetector()
-model = create_model('./models/cldm_v15.yaml').cpu()
-#model.load_state_dict(load_state_dict('./control_sd15_scribble.pth', location='cuda'))
-ddim_sampler = DDIMSampler(model)
-from safetensors.torch import load_file as safe_load_file #add
-pl_sd = safe_load_file('./Realistic_Vision_V2.0.safetensors') #add
-model.load_state_dict(load_state_dict('./Realistic_Vision_V2.0.safetensors', location='cuda'),strict=False) #add
-model.control_model.load_state_dict(load_state_dict('./control_scribble-fp16.safetensors',location='cuda'))
-
-#model.load_state_dict(load_state_dict(pl_sd, strict=False)) #add
-model = model.cuda()
-
-#########
-########
-import torch
-#
-import torch.nn as nn
-from torchvision import models
-import torch.nn.functional as F
-
-
-
-
-bce_loss = nn.BCELoss(size_average=True)
-def muti_loss_fusion(preds, target):
- loss0 = 0.0
- loss = 0.0
- for i in range(0,len(preds)):
- # print("i: ", i, preds[i].shape)
- if(preds[i].shape[2]!=target.shape[2] or preds[i].shape[3]!=target.shape[3]):
- # tmp_target = _upsample_like(target,preds[i])
- tmp_target = F.interpolate(target, size=preds[i].size()[2:], mode='bilinear', align_corners=True)
- loss = loss + bce_loss(preds[i],tmp_target)
- else:
- loss = loss + bce_loss(preds[i],target)
- if(i==0):
- loss0 = loss
- return loss0, loss
-
-
-fea_loss = nn.MSELoss(size_average=True)
-kl_loss = nn.KLDivLoss(size_average=True)
-l1_loss = nn.L1Loss(size_average=True)
-smooth_l1_loss = nn.SmoothL1Loss(size_average=True)
-def muti_loss_fusion_kl(preds, target, dfs, fs, mode='MSE'):
- loss0 = 0.0
- loss = 0.0
- for i in range(0,len(preds)):
- # print("i: ", i, preds[i].shape)
- if(preds[i].shape[2]!=target.shape[2] or preds[i].shape[3]!=target.shape[3]):
- # tmp_target = _upsample_like(target,preds[i])
- tmp_target = F.interpolate(target, size=preds[i].size()[2:], mode='bilinear', align_corners=True)
- loss = loss + bce_loss(preds[i],tmp_target)
- else:
- loss = loss + bce_loss(preds[i],target)
- if(i==0):
- loss0 = loss
- for i in range(0,len(dfs)):
- if(mode=='MSE'):
- loss = loss + fea_loss(dfs[i],fs[i]) ### add the mse loss of features as additional constraints
- # print("fea_loss: ", fea_loss(dfs[i],fs[i]).item())
- elif(mode=='KL'):
- loss = loss + kl_loss(F.log_softmax(dfs[i],dim=1),F.softmax(fs[i],dim=1))
- # print("kl_loss: ", kl_loss(F.log_softmax(dfs[i],dim=1),F.softmax(fs[i],dim=1)).item())
- elif(mode=='MAE'):
- loss = loss + l1_loss(dfs[i],fs[i])
- # print("ls_loss: ", l1_loss(dfs[i],fs[i]))
- elif(mode=='SmoothL1'):
- loss = loss + smooth_l1_loss(dfs[i],fs[i])
- # print("SmoothL1: ", smooth_l1_loss(dfs[i],fs[i]).item())
- return loss0, loss
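Both loss helpers above follow the same deep-supervision pattern: resize the target to each side output's resolution, accumulate a BCE term per scale, and keep the first (finest) term separately as `loss0`. A minimal sketch of that pattern with current PyTorch defaults (the original uses the deprecated `size_average` argument):

```python
import torch
import torch.nn.functional as F

bce = torch.nn.BCELoss()
target = torch.rand(1, 1, 64, 64).round()                   # binary mask
preds = [torch.rand(1, 1, s, s) for s in (64, 32, 16)]      # multi-scale side outputs

loss0, total = 0.0, 0.0
for i, p in enumerate(preds):
    # Resize the target to the side output's resolution when the scales differ.
    t = target if p.shape[2:] == target.shape[2:] else F.interpolate(
        target, size=p.shape[2:], mode="bilinear", align_corners=True)
    total = total + bce(p, t)
    if i == 0:
        loss0 = total                                       # finest-scale term kept separately
print(float(loss0), float(total))
```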
-
-
-class REBNCONV(nn.Module):
- def __init__(self,in_ch=3,out_ch=3,dirate=1,stride=1):
- super(REBNCONV,self).__init__()
- self.conv_s1 = nn.Conv2d(in_ch,out_ch,3,padding=1*dirate,dilation=1*dirate,stride=stride)
- self.bn_s1 = nn.BatchNorm2d(out_ch)
- self.relu_s1 = nn.ReLU(inplace=True)
-
- def forward(self,x):
- hx = x
- xout = self.relu_s1(self.bn_s1(self.conv_s1(hx)))
- return xout
-
-
-## upsample tensor 'src' to have the same spatial size as tensor 'tar'
-def _upsample_like(src,tar):
- src = F.upsample(src,size=tar.shape[2:],mode='bilinear')
- return src
-
-### RSU-7 ###
-class RSU7(nn.Module):
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3, img_size=512):
- super(RSU7,self).__init__()
- self.in_ch = in_ch
- self.mid_ch = mid_ch
- self.out_ch = out_ch
- self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1) ## 1 -> 1/2
- self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1)
- self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool4 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool5 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv6 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.rebnconv7 = REBNCONV(mid_ch,mid_ch,dirate=2)
- self.rebnconv6d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv5d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1)
-
- def forward(self,x):
- b, c, h, w = x.shape
- hx = x
- hxin = self.rebnconvin(hx)
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
- hx4 = self.rebnconv4(hx)
- hx = self.pool4(hx4)
- hx5 = self.rebnconv5(hx)
- hx = self.pool5(hx5)
- hx6 = self.rebnconv6(hx)
- hx7 = self.rebnconv7(hx6)
- hx6d = self.rebnconv6d(torch.cat((hx7,hx6),1))
- hx6dup = _upsample_like(hx6d,hx5)
- hx5d = self.rebnconv5d(torch.cat((hx6dup,hx5),1))
- hx5dup = _upsample_like(hx5d,hx4)
- hx4d = self.rebnconv4d(torch.cat((hx5dup,hx4),1))
- hx4dup = _upsample_like(hx4d,hx3)
- hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1))
- hx3dup = _upsample_like(hx3d,hx2)
- hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1))
- hx2dup = _upsample_like(hx2d,hx1)
- hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1))
- return hx1d + hxin
-
-### RSU-6 ###
-class RSU6(nn.Module):
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU6,self).__init__()
- self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1)
- self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1)
- self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool4 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.rebnconv6 = REBNCONV(mid_ch,mid_ch,dirate=2)
- self.rebnconv5d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1)
-
- def forward(self,x):
- hx = x
- hxin = self.rebnconvin(hx)
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
- hx4 = self.rebnconv4(hx)
- hx = self.pool4(hx4)
- hx5 = self.rebnconv5(hx)
- hx6 = self.rebnconv6(hx5)
- hx5d = self.rebnconv5d(torch.cat((hx6,hx5),1))
- hx5dup = _upsample_like(hx5d,hx4)
- hx4d = self.rebnconv4d(torch.cat((hx5dup,hx4),1))
- hx4dup = _upsample_like(hx4d,hx3)
- hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1))
- hx3dup = _upsample_like(hx3d,hx2)
- hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1))
- hx2dup = _upsample_like(hx2d,hx1)
- hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1))
- return hx1d + hxin
-
-### RSU-5 ###
-class RSU5(nn.Module):
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU5,self).__init__()
- self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1)
- self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1)
- self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool3 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.rebnconv5 = REBNCONV(mid_ch,mid_ch,dirate=2)
- self.rebnconv4d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1)
-
- def forward(self,x):
- hx = x
- hxin = self.rebnconvin(hx)
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
- hx3 = self.rebnconv3(hx)
- hx = self.pool3(hx3)
- hx4 = self.rebnconv4(hx)
- hx5 = self.rebnconv5(hx4)
- hx4d = self.rebnconv4d(torch.cat((hx5,hx4),1))
- hx4dup = _upsample_like(hx4d,hx3)
- hx3d = self.rebnconv3d(torch.cat((hx4dup,hx3),1))
- hx3dup = _upsample_like(hx3d,hx2)
- hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1))
- hx2dup = _upsample_like(hx2d,hx1)
- hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1))
- return hx1d + hxin
-
-### RSU-4 ###
-class RSU4(nn.Module):
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU4,self).__init__()
- self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1)
- self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1)
- self.pool1 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.pool2 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=1)
- self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=2)
- self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=1)
- self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1)
-
- def forward(self,x):
- hx = x
- hxin = self.rebnconvin(hx)
- hx1 = self.rebnconv1(hxin)
- hx = self.pool1(hx1)
- hx2 = self.rebnconv2(hx)
- hx = self.pool2(hx2)
- hx3 = self.rebnconv3(hx)
- hx4 = self.rebnconv4(hx3)
- hx3d = self.rebnconv3d(torch.cat((hx4,hx3),1))
- hx3dup = _upsample_like(hx3d,hx2)
- hx2d = self.rebnconv2d(torch.cat((hx3dup,hx2),1))
- hx2dup = _upsample_like(hx2d,hx1)
- hx1d = self.rebnconv1d(torch.cat((hx2dup,hx1),1))
- return hx1d + hxin
-
-
-### RSU-4F ###
-class RSU4F(nn.Module):
- def __init__(self, in_ch=3, mid_ch=12, out_ch=3):
- super(RSU4F,self).__init__()
- self.rebnconvin = REBNCONV(in_ch,out_ch,dirate=1)
- self.rebnconv1 = REBNCONV(out_ch,mid_ch,dirate=1)
- self.rebnconv2 = REBNCONV(mid_ch,mid_ch,dirate=2)
- self.rebnconv3 = REBNCONV(mid_ch,mid_ch,dirate=4)
- self.rebnconv4 = REBNCONV(mid_ch,mid_ch,dirate=8)
- self.rebnconv3d = REBNCONV(mid_ch*2,mid_ch,dirate=4)
- self.rebnconv2d = REBNCONV(mid_ch*2,mid_ch,dirate=2)
- self.rebnconv1d = REBNCONV(mid_ch*2,out_ch,dirate=1)
-
- def forward(self,x):
- hx = x
- hxin = self.rebnconvin(hx)
- hx1 = self.rebnconv1(hxin)
- hx2 = self.rebnconv2(hx1)
- hx3 = self.rebnconv3(hx2)
- hx4 = self.rebnconv4(hx3)
- hx3d = self.rebnconv3d(torch.cat((hx4,hx3),1))
- hx2d = self.rebnconv2d(torch.cat((hx3d,hx2),1))
- hx1d = self.rebnconv1d(torch.cat((hx2d,hx1),1))
- return hx1d + hxin
-
-class myrebnconv(nn.Module):
- def __init__(self, in_ch=3,
- out_ch=1,
- kernel_size=3,
- stride=1,
- padding=1,
- dilation=1,
- groups=1):
- super(myrebnconv,self).__init__()
- self.conv = nn.Conv2d(in_ch,
- out_ch,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups)
- self.bn = nn.BatchNorm2d(out_ch)
- self.rl = nn.ReLU(inplace=True)
-
- def forward(self,x):
- return self.rl(self.bn(self.conv(x)))
-
-
-class ISNetGTEncoder(nn.Module):
- def __init__(self,in_ch=1,out_ch=1):
- super(ISNetGTEncoder,self).__init__()
- self.conv_in = myrebnconv(in_ch,16,3,stride=2,padding=1) # nn.Conv2d(in_ch,64,3,stride=2,padding=1)
- self.stage1 = RSU7(16,16,64)
- self.pool12 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage2 = RSU6(64,16,64)
- self.pool23 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage3 = RSU5(64,32,128)
- self.pool34 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage4 = RSU4(128,32,256)
- self.pool45 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage5 = RSU4F(256,64,512)
- self.pool56 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage6 = RSU4F(512,64,512)
- self.side1 = nn.Conv2d(64,out_ch,3,padding=1)
- self.side2 = nn.Conv2d(64,out_ch,3,padding=1)
- self.side3 = nn.Conv2d(128,out_ch,3,padding=1)
- self.side4 = nn.Conv2d(256,out_ch,3,padding=1)
- self.side5 = nn.Conv2d(512,out_ch,3,padding=1)
- self.side6 = nn.Conv2d(512,out_ch,3,padding=1)
-
- def compute_loss(self, preds, targets):
- return muti_loss_fusion(preds,targets)
-
- def forward(self,x):
- hx = x
- hxin = self.conv_in(hx)
- # hx = self.pool_in(hxin)
-
- #stage 1
- hx1 = self.stage1(hxin)
- hx = self.pool12(hx1)
-
-
- #stage 2
- hx2 = self.stage2(hx)
- hx = self.pool23(hx2)
-
- #stage 3
- hx3 = self.stage3(hx)
- hx = self.pool34(hx3)
-
- #stage 4
- hx4 = self.stage4(hx)
- hx = self.pool45(hx4)
-
- #stage 5
- hx5 = self.stage5(hx)
- hx = self.pool56(hx5)
-
- #stage 6
- hx6 = self.stage6(hx)
-
- #side output
- d1 = self.side1(hx1)
- d1 = _upsample_like(d1,x)
- d2 = self.side2(hx2)
- d2 = _upsample_like(d2,x)
- d3 = self.side3(hx3)
- d3 = _upsample_like(d3,x)
- d4 = self.side4(hx4)
- d4 = _upsample_like(d4,x)
- d5 = self.side5(hx5)
- d5 = _upsample_like(d5,x)
- d6 = self.side6(hx6)
- d6 = _upsample_like(d6,x)
-
- # d0 = self.outconv(torch.cat((d1,d2,d3,d4,d5,d6),1))
-
- return [F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6)], [hx1,hx2,hx3,hx4,hx5,hx6]
-
-
-class ISNetDIS(nn.Module):
- def __init__(self,in_ch=3,out_ch=1):
- super(ISNetDIS,self).__init__()
- self.conv_in = nn.Conv2d(in_ch,64,3,stride=2,padding=1)
- self.pool_in = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage1 = RSU7(64,32,64)
- self.pool12 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage2 = RSU6(64,32,128)
- self.pool23 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage3 = RSU5(128,64,256)
- self.pool34 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage4 = RSU4(256,128,512)
- self.pool45 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage5 = RSU4F(512,256,512)
- self.pool56 = nn.MaxPool2d(2,stride=2,ceil_mode=True)
- self.stage6 = RSU4F(512,256,512)
-
- # decoder
- self.stage5d = RSU4F(1024,256,512)
- self.stage4d = RSU4(1024,128,256)
- self.stage3d = RSU5(512,64,128)
- self.stage2d = RSU6(256,32,64)
- self.stage1d = RSU7(128,16,64)
- self.side1 = nn.Conv2d(64,out_ch,3,padding=1)
- self.side2 = nn.Conv2d(64,out_ch,3,padding=1)
- self.side3 = nn.Conv2d(128,out_ch,3,padding=1)
- self.side4 = nn.Conv2d(256,out_ch,3,padding=1)
- self.side5 = nn.Conv2d(512,out_ch,3,padding=1)
- self.side6 = nn.Conv2d(512,out_ch,3,padding=1)
-
- # self.outconv = nn.Conv2d(6*out_ch,out_ch,1)
-
- def compute_loss_kl(self, preds, targets, dfs, fs, mode='MSE'):
- # return muti_loss_fusion(preds,targets)
- return muti_loss_fusion_kl(preds, targets, dfs, fs, mode=mode)
-
- def compute_loss(self, preds, targets):
- # return muti_loss_fusion(preds,targets)
- return muti_loss_fusion(preds, targets)
-
- def forward(self,x):
- hx = x
- hxin = self.conv_in(hx)
- #hx = self.pool_in(hxin)
-
- #stage 1
- hx1 = self.stage1(hxin)
- hx = self.pool12(hx1)
-
- #stage 2
- hx2 = self.stage2(hx)
- hx = self.pool23(hx2)
-
- #stage 3
- hx3 = self.stage3(hx)
- hx = self.pool34(hx3)
-
- #stage 4
- hx4 = self.stage4(hx)
- hx = self.pool45(hx4)
-
- #stage 5
- hx5 = self.stage5(hx)
- hx = self.pool56(hx5)
-
- #stage 6
- hx6 = self.stage6(hx)
- hx6up = _upsample_like(hx6,hx5)
-
-
- #-------------------- decoder --------------------
- hx5d = self.stage5d(torch.cat((hx6up,hx5),1))
- hx5dup = _upsample_like(hx5d,hx4)
- hx4d = self.stage4d(torch.cat((hx5dup,hx4),1))
- hx4dup = _upsample_like(hx4d,hx3)
- hx3d = self.stage3d(torch.cat((hx4dup,hx3),1))
- hx3dup = _upsample_like(hx3d,hx2)
- hx2d = self.stage2d(torch.cat((hx3dup,hx2),1))
- hx2dup = _upsample_like(hx2d,hx1)
- hx1d = self.stage1d(torch.cat((hx2dup,hx1),1))
-
- #side output
- d1 = self.side1(hx1d)
- d1 = _upsample_like(d1,x)
- d2 = self.side2(hx2d)
- d2 = _upsample_like(d2,x)
- d3 = self.side3(hx3d)
- d3 = _upsample_like(d3,x)
- d4 = self.side4(hx4d)
- d4 = _upsample_like(d4,x)
- d5 = self.side5(hx5d)
- d5 = _upsample_like(d5,x)
- d6 = self.side6(hx6)
- d6 = _upsample_like(d6,x)
-
- # d0 = self.outconv(torch.cat((d1,d2,d3,d4,d5,d6),1))
-
- return [F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6)],[hx1d,hx2d,hx3d,hx4d,hx5d,hx6]
-
-
-###
-##
-######
-warnings.filterwarnings("ignore")
-
-from data_loader_cache import normalize, im_reader, im_preprocess
-
-from models import *
-import torch.nn as nn
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-class GOSNormalize(object):
- '''
- Normalize the Image using torch.transforms
- '''
- def __init__(self, mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225]):
- self.mean = mean
- self.std = std
-
- def __call__(self,image):
- image = normalize(image,self.mean,self.std)
- return image
-
-transform = transforms.Compose([GOSNormalize([0.5,0.5,0.5],[1.0,1.0,1.0])])
-
-def load_image(im_path, hypar):
- #im = im_reader(im_path)
- im, im_shp = im_preprocess(im_path, hypar["cache_size"])
- im = torch.divide(im,255.0)
- shape = torch.from_numpy(np.array(im_shp))
- return transform(im).unsqueeze(0), shape.unsqueeze(0) # make a batch of image, shape
-
-def build_model(hypar,device):
- net = hypar["model"]#GOSNETINC(3,1)
-
- # convert to half precision
- if(hypar["model_digit"]=="half"):
- net.half()
- for layer in net.modules():
- if isinstance(layer, nn.BatchNorm2d):
- layer.float()
- net.to(device)
- if(hypar["restore_model"]!=""):
- net.load_state_dict(torch.load(hypar["model_path"]+"/"+hypar["restore_model"], map_location=device))
- net.to(device)
- net.eval()
- return net
-
-def predict(net, inputs_val, shapes_val, hypar, device):
- '''
- Given an Image, predict the mask
- '''
- net.eval()
- if(hypar["model_digit"]=="full"):
- inputs_val = inputs_val.type(torch.FloatTensor)
- else:
- inputs_val = inputs_val.type(torch.HalfTensor)
- inputs_val_v = Variable(inputs_val, requires_grad=False).to(device) # wrap inputs in Variable
-
- ds_val = net(inputs_val_v)[0] # list of 6 results
- pred_val = ds_val[0][0,:,:,:] # B x 1 x H x W # we want the first one which is the most accurate prediction
-
- ## recover the prediction spatial size to the original image size
- pred_val = torch.squeeze(F.upsample(torch.unsqueeze(pred_val,0),(shapes_val[0][0],shapes_val[0][1]),mode='bilinear'))
-
- ma = torch.max(pred_val)
- mi = torch.min(pred_val)
- pred_val = (pred_val-mi)/(ma-mi) # max = 1
- if device == 'cuda': torch.cuda.empty_cache()
- return (pred_val.detach().cpu().numpy()*255).astype(np.uint8) # it is the mask we need
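The tail of `predict()` above upsamples the raw saliency output back to the source image size and min-max normalises it before converting to a `uint8` mask. A minimal sketch of that post-processing, using `F.interpolate` as the modern equivalent of the deprecated `F.upsample`:

```python
import numpy as np
import torch
import torch.nn.functional as F

pred = torch.rand(1, 1, 256, 256)                        # stand-in network output
orig_h, orig_w = 600, 400
pred = F.interpolate(pred, size=(orig_h, orig_w), mode="bilinear", align_corners=False)
pred = pred.squeeze()
pred = (pred - pred.min()) / (pred.max() - pred.min())   # min becomes 0, max becomes 1
mask = (pred.numpy() * 255).astype(np.uint8)
print(mask.shape, mask.min(), mask.max())                # (600, 400) 0 255
```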
-
-# Set Parameters
-
-hypar = {} # parameters for inference
-hypar["model_path"] ="./model" ## load trained weights from this path
-hypar["restore_model"] = "isnet.pth" ## name of the to-be-loaded weights
-hypar["interm_sup"] = False ## indicate if activate intermediate feature supervision
-
-## choose floating point accuracy --
-hypar["model_digit"] = "full" ## indicates "half" or "full" accuracy of float number
-hypar["seed"] = 0
-hypar["cache_size"] = [1024, 1024] ## cached input spatial resolution, can be configured into different size
-
-## data augmentation parameters ---
-hypar["input_size"] = [1024, 1024] ## mdoel input spatial size, usually use the same value hypar["cache_size"], which means we don't further resize the images
-hypar["crop_size"] = [1024, 1024] ## random crop size from the input, it is usually set as smaller than hypar["cache_size"], e.g., [920,920] for data augmentation
-hypar["model"] = ISNetDIS()
-
- # Build Model
-net = build_model(hypar, device)
-
-
-######
-from numpy import asarray
-from PIL import Image, ImageEnhance, ImageFilter
-
-########
-from diffusers import (ControlNetModel, DiffusionPipeline,
- StableDiffusionControlNetPipeline,
- UniPCMultistepScheduler)
-import gc
-######
-from rembg import remove
-from PIL import Image
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
- with torch.no_grad():
- image = input_image
- w, h = 512, 512
- data = np.zeros((h, w, 3), dtype=np.uint8)
- data[0:256, 0:256] = [255, 0, 0] # red patch in upper left
-
-
- img = Image.fromarray(input_image)
- kmg = Image.fromarray(input_image)
-
- # image_tensor, orig_size = load_image(input_image, hypar)
- # mask = predict(net, image_tensor, orig_size, hypar, device)
- # pil_mask = Image.fromarray(mask).convert('L')
- # pil_mask1=pil_mask.copy()
-####
- # pil_mask1=asarray(pil_mask1)
- # pil_mask1[pil_mask1>0]=255
- # pil_mask1=Image.fromarray(pil_mask1).convert('L')
- # pil_mask1 = pil_mask1.filter(ImageFilter.GaussianBlur(radius=1))
-
-
-##dis
- output = remove(img)
- im_rgb = output #img.convert('RGB')
- im_rgx = output #img.convert('RGB')
- img_enhancer = ImageEnhance.Brightness(im_rgb)
- factor = 0.09
- im_rgb = img_enhancer.enhance(factor)
- im_rgba = im_rgb.copy()
- im_rgbx=im_rgx.copy()
- # im_rgba.putalpha(pil_mask)
- # im_rgbx.putalpha(pil_mask1)
-#dis end
-# img=asarray(im_rgx.copy())
-
-# # Find the contours of the masked object
-# contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
-
-# # Find the bounding box of the masked object
-# x, y, w, h = cv2.boundingRect(contours[0])
-
-# # Create a mask for the background
-# bg_mask = np.zeros(img.shape[:2], dtype=np.uint8)
-# bg_mask[y:y+h, x:x+w] = 255
-
-# # Create a blurred version of the mask
-# blur_mask = cv2.GaussianBlur(mask, (15, 15), 0)
-
-# # Perform seamless cloning
-# im_rgbx = cv2.seamlessClone(img, img, blur_mask, (x + w // 2, y + h // 2), cv2.NORMAL_CLONE)
-
-
- input_image = asarray(im_rgba)
- # input_image = asarray(img_rembg)
-
- ###############
- inp_img=asarray(im_rgbx)
- inp_img = HWC3(inp_img)
- detected_map = apply_hed(resize_image(inp_img, detect_resolution))
- detected_map = HWC3(detected_map)
- img_x = resize_image(inp_img, image_resolution)
- ############
- input_image = HWC3(input_image)
- detected_map = apply_hed(resize_image(input_image, detect_resolution))
- detected_map = HWC3(detected_map)
- img = resize_image(input_image, image_resolution)
- H, W, C = img.shape
-
-
-#####
-
- # control_image = np.zeros_like(img, dtype=np.uint8)
- # control_image[np.min(img, axis=2) < 127] = 255
- # vis_control_image = 255 - control_image
- # control_image, vis_control_image= Image.fromarray(control_image),Image.fromarray(vis_control_image)
- # model_id = '/content/drive/MyDrive/sasha/control_sd15_scribble.pth'
- # controlnet = ControlNetModel.from_pretrained(model_id,
- # torch_dtype=torch.float16)
- # base_model_id='/content/drive/MyDrive/sasha/Realistic_Vision_V1.3.safetensors'
- # pipe = StableDiffusionControlNetPipeline.from_pretrained(
- # base_model_id,
- # safety_checker=None,
- # controlnet=controlnet,
- # torch_dtype=torch.float16)
- # pipe.scheduler = UniPCMultistepScheduler.from_config(
- # pipe.scheduler.config)
- # pipe.enable_xformers_memory_efficient_attention()
- # pipe.to(device)
- # torch.cuda.empty_cache()
- # gc.collect()
- # if seed == -1:
- # seed = np.random.randint(0, np.iinfo(np.int64).max)
- # generator = torch.Generator().manual_seed(seed)
-
- # resolt= pipe(prompt=prompt,
- # negative_prompt=n_prompt,
- # guidance_scale=scale,
- # num_images_per_prompt=num_samples,
- # num_inference_steps=ddim_steps,
- # generator=generator,
- # image=control_image).images
-
-
-#####################################
-
- detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
- detected_map = nms(detected_map, 127, 3.0)
- detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
- detected_map[detected_map > 4] = 255
- detected_map[detected_map < 255] = 0
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
- control = torch.stack([control for _ in range(num_samples)], dim=0)
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
- if seed == -1:
- seed = random.randint(0, 65535)
- seed_everything(seed)
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
- cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning(['RAW photo,'+prompt +', '+', minimal product photo, In the style of David Newton, Helen Koker, Aneta Laura, Nikki Astwood, Amy Shamblen, Hyperrealism, soft smooth lighting, luxury, pinterest, Product photography, product studio, sharp focus, digital art, hyper-realistic, 4K, Unreal Engine, Highly Detailed, HD, Dramatic Lighting by Brom, trending on Artstation' +', '+ a_prompt] * num_samples)]}
- un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
- shape = (4, H // 8, W // 8)
- if config.save_memory:
- model.low_vram_shift(is_diffusing=True)
- model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13) # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
- samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
- shape, cond, verbose=False, eta=eta,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=un_cond)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- x_samples = model.decode_first_stage(samples)
- x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
- results = np.array([x_samples[i] for i in range(num_samples)])
- #img_x= Image.fromarray(img_x)
- #results=Image.fromarray(results)
-
- # img_rembg=Image.fromarray(img_rembg)
- # img_rembg=img_rembg.convert("RGBA")
- in_img=im_rgbx.copy()
- im_img=im_rgbx.copy()
- # width, height = in_img.size
- # print(img_rembg)
-
-
- # alpha = in_img.split()[-1]
- # in_img = Image.merge('RGBA', [in_img.split()[0], in_img.split()[1], in_img.split()[2], alpha.point(lambda x: 255 if x > 0 else 0)])
- background = Image.new("RGBA", in_img.size, (0, 0, 0,0))
- # in_img = Image.alpha_composite(background, in_img)
- background.paste(in_img, in_img)
-
- # Convert the transparent background to an RGB mode
- # rgb_bg_img = bg_img.convert('RGB')
- in_img = background.convert("RGB")
-
-
- in_img=asarray(in_img)
- im_img=asarray(im_img)
-
- in_img = resize_image(in_img, image_resolution)
- im_img = resize_image(im_img, image_resolution)
- im_img=Image.fromarray(im_img)
-
-
-
- #in_img=in_img.resize(512,512)
-
- # umg_y_k=asarray(in_img)
- in_img=Image.fromarray(in_img)
- umg_y_k=in_img.copy()
- img_x_r=in_img.copy()
-
-
- umg_y_k=asarray(umg_y_k)
- img_x_r=asarray(img_x_r)
-
-
- # for x in range(512):
- # for y in range(512):
-
- # # Get the pixel value as a tuple (R,G,B)
- # pixel = img_x_r[x,y]
-
- # # Check each channel and change any pixel with a value of 253 to 255
- # if pixel[0] == 253 or pixel[0]==254:
- # pixel = (255, pixel[1], pixel[2])
- # if pixel[1] == 253 or pixel[1] == 254:
- # pixel = (pixel[0], 255, pixel[2])
- # if pixel[2] == 253 or pixel[2] == 254:
- # pixel = (pixel[0], pixel[1], 255)
-
- # # Update the pixel value in the image
- # img_x_r[x,y]=pixel
-
-
-
- # results=cv2.imread(results)
- xxsample=[]
- # Y,X=np.where(np.all(img_x_r==[0,0,0],axis=2))
- # Y, X = np.where(np.all((img_x_r < 8) & (img_x_r == img_x_r[:,:,0][:,:,np.newaxis]), axis=2))
-
- # p,q=np.where(np.all(img_x_r==[254,254,254],axis=2))
-
-
- for i in range(num_samples):
- result = results[i]
- # img_x_r[np.where(np.all((img_x_r < 8) & (img_x_r == img_x_r[:,:,0][:,:,np.newaxis]), axis=2))]=results[Y,X]
- # img_x_r[np.where(np.all(img_x_r==[0,0,0],axis=2))]=results[Y,X]
- result = resize_image(result, image_resolution)
- result = Image.fromarray(result)
- result.paste(im_img, im_img)
- img_x_r = asarray(result)
-
-
- xxsample.append(img_x_r)
- # print(results.shape)
- print(img_x_r.shape)
- img_txx=[xxsample[i] for i in range (num_samples)]
- #img_x=asarray(img_x)
- #return [detected_map] + img_txx
- return img_x_r
-
-
-
-
-block = gr.Blocks().queue()
-with block:
- with gr.Row():
- gr.Markdown("## Background Generator")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
- image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
- strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
- guess_mode = gr.Checkbox(label='Guess Mode', value=False)
- detect_resolution = gr.Slider(label="HED Resolution", minimum=128, maximum=1024, value=512, step=1)
- ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta (DDIM)", value=0.0)
- a_prompt = gr.Textbox(label="Added Prompt", value='best quality, extremely detailed')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
- with gr.Column():
- #result_gallery = gr.Textbox()
- #result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- result_gallery = gr.Image(label="Result I")
- ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
- run_button.click(fn=process, inputs=ips, outputs=result_gallery,api_name="process")
-
-block.launch(show_api=True, show_error=True,enable_queue=True, debug=True)
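Inside `process()` above, the detected scribble map is turned into a ControlNet conditioning tensor by scaling it to `[0, 1]`, repeating it once per requested sample, and moving channels first with `einops`. A minimal sketch of just that batching step, with a random array standing in for the HED/scribble map (the real code also moves the tensor to CUDA):

```python
import numpy as np
import torch
import einops

num_samples = 2
detected_map = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)  # stand-in scribble map

control = torch.from_numpy(detected_map.copy()).float() / 255.0      # HWC in [0, 1]
control = torch.stack([control] * num_samples, dim=0)                # (B, H, W, C)
control = einops.rearrange(control, 'b h w c -> b c h w').clone()    # (B, C, H, W)
print(control.shape)  # torch.Size([2, 3, 512, 512])
```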
diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_preprocess_annoations_S3DIS.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_preprocess_annoations_S3DIS.py
deleted file mode 100644
index 58f32d121acf4c638625079907b02161e808af68..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_preprocess_annoations_S3DIS.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# Copyright 2016 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-import os
-import glob
-import numpy as np
-import logging
-import cPickle
-from datasets import nav_env
-from datasets import factory
-from src import utils
-from src import map_utils as mu
-
-logging.basicConfig(level=logging.INFO)
-DATA_DIR = 'data/stanford_building_parser_dataset_raw/'
-
-mkdir_if_missing = utils.mkdir_if_missing
-save_variables = utils.save_variables
-
-def _get_semantic_maps(building_name, transform, map_, flip, cats):
- rooms = get_room_in_building(building_name)
- maps = []
- for cat in cats:
- maps.append(np.zeros((map_.size[1], map_.size[0])))
-
- for r in rooms:
- room = load_room(building_name, r, category_list=cats)
- classes = room['class_id']
- for i, cat in enumerate(cats):
- c_ind = cats.index(cat)
- ind = [_ for _, c in enumerate(classes) if c == c_ind]
- if len(ind) > 0:
- vs = [room['vertexs'][x]*1 for x in ind]
- vs = np.concatenate(vs, axis=0)
- if transform:
- vs = np.array([vs[:,1], vs[:,0], vs[:,2]]).T
- vs[:,0] = -vs[:,0]
- vs[:,1] += 4.20
- vs[:,0] += 6.20
- vs = vs*100.
- if flip:
- vs[:,1] = -vs[:,1]
- maps[i] = maps[i] + \
- mu._project_to_map(map_, vs, ignore_points_outside_map=True)
- return maps
-
-def _map_building_name(building_name):
- b = int(building_name.split('_')[0][4])
- out_name = 'Area_{:d}'.format(b)
- if b == 5:
- if int(building_name.split('_')[0][5]) == 1:
- transform = True
- else:
- transform = False
- else:
- transform = False
- return out_name, transform
-
-def get_categories():
- cats = ['beam', 'board', 'bookcase', 'ceiling', 'chair', 'clutter', 'column',
- 'door', 'floor', 'sofa', 'table', 'wall', 'window']
- return cats
-
-def _write_map_files(b_in, b_out, transform):
- cats = get_categories()
-
- env = utils.Foo(padding=10, resolution=5, num_point_threshold=2,
- valid_min=-10, valid_max=200, n_samples_per_face=200)
- robot = utils.Foo(radius=15, base=10, height=140, sensor_height=120,
- camera_elevation_degree=-15)
-
- building_loader = factory.get_dataset('sbpd')
- for flip in [False, True]:
- b = nav_env.Building(b_out, robot, env, flip=flip,
- building_loader=building_loader)
- logging.info("building_in: %s, building_out: %s, transform: %d", b_in,
- b_out, transform)
- maps = _get_semantic_maps(b_in, transform, b.map, flip, cats)
- maps = np.transpose(np.array(maps), axes=[1,2,0])
-
- # Load file from the cache.
- file_name = '{:s}_{:d}_{:d}_{:d}_{:d}_{:d}_{:d}.pkl'
- file_name = file_name.format(b.building_name, b.map.size[0], b.map.size[1],
- b.map.origin[0], b.map.origin[1],
- b.map.resolution, flip)
- out_file = os.path.join(DATA_DIR, 'processing', 'class-maps', file_name)
- logging.info('Writing semantic maps to %s.', out_file)
- save_variables(out_file, [maps, cats], ['maps', 'cats'], overwrite=True)
-
-def _transform_area5b(room_dimension):
- for a in room_dimension.keys():
- r = room_dimension[a]*1
- r[[0,1,3,4]] = r[[1,0,4,3]]
- r[[0,3]] = -r[[3,0]]
- r[[1,4]] += 4.20
- r[[0,3]] += 6.20
- room_dimension[a] = r
- return room_dimension
-
-def collect_room(building_name, room_name):
- room_dir = os.path.join(DATA_DIR, 'Stanford3dDataset_v1.2', building_name,
- room_name, 'Annotations')
- files = glob.glob1(room_dir, '*.txt')
- files = sorted(files, key=lambda s: s.lower())
- vertexs = []; colors = [];
- for f in files:
- file_name = os.path.join(room_dir, f)
- logging.info(' %s', file_name)
- a = np.loadtxt(file_name)
- vertex = a[:,:3]*1.
- color = a[:,3:]*1
- color = color.astype(np.uint8)
- vertexs.append(vertex)
- colors.append(color)
- files = [f.split('.')[0] for f in files]
- out = {'vertexs': vertexs, 'colors': colors, 'names': files}
- return out
-
-def load_room(building_name, room_name, category_list=None):
- room = collect_room(building_name, room_name)
- room['building_name'] = building_name
- room['room_name'] = room_name
- instance_id = range(len(room['names']))
- room['instance_id'] = instance_id
- if category_list is not None:
- name = [r.split('_')[0] for r in room['names']]
- class_id = []
- for n in name:
- if n in category_list:
- class_id.append(category_list.index(n))
- else:
- class_id.append(len(category_list))
- room['class_id'] = class_id
- room['category_list'] = category_list
- return room
-
-def get_room_in_building(building_name):
- building_dir = os.path.join(DATA_DIR, 'Stanford3dDataset_v1.2', building_name)
- rn = os.listdir(building_dir)
- rn = [x for x in rn if os.path.isdir(os.path.join(building_dir, x))]
- rn = sorted(rn, key=lambda s: s.lower())
- return rn
-
-def write_room_dimensions(b_in, b_out, transform):
- rooms = get_room_in_building(b_in)
- room_dimension = {}
- for r in rooms:
- room = load_room(b_in, r, category_list=None)
- vertex = np.concatenate(room['vertexs'], axis=0)
- room_dimension[r] = np.concatenate((np.min(vertex, axis=0), np.max(vertex, axis=0)), axis=0)
- if transform == 1:
- room_dimension = _transform_area5b(room_dimension)
-
- out_file = os.path.join(DATA_DIR, 'processing', 'room-dimension', b_out+'.pkl')
- save_variables(out_file, [room_dimension], ['room_dimension'], overwrite=True)
-
-def write_room_dimensions_all(I):
- mkdir_if_missing(os.path.join(DATA_DIR, 'processing', 'room-dimension'))
- bs_in = ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_5', 'Area_6']
- bs_out = ['area1', 'area2', 'area3', 'area4', 'area5a', 'area5b', 'area6']
- transforms = [0, 0, 0, 0, 0, 1, 0]
-
- for i in I:
- b_in = bs_in[i]
- b_out = bs_out[i]
- t = transforms[i]
- write_room_dimensions(b_in, b_out, t)
-
-def write_class_maps_all(I):
- mkdir_if_missing(os.path.join(DATA_DIR, 'processing', 'class-maps'))
- bs_in = ['Area_1', 'Area_2', 'Area_3', 'Area_4', 'Area_5', 'Area_5', 'Area_6']
- bs_out = ['area1', 'area2', 'area3', 'area4', 'area5a', 'area5b', 'area6']
- transforms = [0, 0, 0, 0, 0, 1, 0]
-
- for i in I:
- b_in = bs_in[i]
- b_out = bs_out[i]
- t = transforms[i]
- _write_map_files(b_in, b_out, t)
-
-
-if __name__ == '__main__':
- write_room_dimensions_all([0, 2, 3, 4, 5, 6])
- write_class_maps_all([0, 2, 3, 4, 5, 6])
-
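In `write_room_dimensions` above, a room's extent is the per-axis min/max over all of its annotated point clouds, stored as a 6-vector `(xmin, ymin, zmin, xmax, ymax, zmax)`. A minimal sketch with fake annotations standing in for the loaded room data:

```python
import numpy as np

# Two fake point-cloud annotations for one room (N x 3 arrays of x, y, z).
vertexs = [np.random.rand(100, 3) * 5.0, np.random.rand(50, 3) * 5.0]
vertex = np.concatenate(vertexs, axis=0)
room_dimension = np.concatenate((np.min(vertex, axis=0), np.max(vertex, axis=0)), axis=0)
print(room_dimension.shape)  # (6,)
```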
diff --git a/spaces/NN520/AI/src/components/welcome-screen.tsx b/spaces/NN520/AI/src/components/welcome-screen.tsx
deleted file mode 100644
index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000
--- a/spaces/NN520/AI/src/components/welcome-screen.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { useBing } from '@/lib/hooks/use-bing'
-
-const exampleMessages = [
- {
- heading: '🧐 Ask complex questions',
- message: `What meals can I make for my picky child who only eats orange-colored food?`
- },
- {
- heading: '🙌 Get better answers',
- message: 'What are the pros and cons of the top 3 best-selling pet vacuums?'
- },
- {
- heading: '🎨 Get creative inspiration',
- message: `Write a haiku about crocodiles in outer space in the voice of a pirate`
- }
-]
-
-export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
- return (
-
{most_imp_feat}")
-
- gr.Markdown(" ")
-
- with gr.Box():
- gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world.''')
-
- with gr.Box():
- with open('info.md') as f:
- f.readline()
- gr.Markdown(f.read())
-
-# show the interface
-block.launch()
\ No newline at end of file
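The fragment above (part of a Gradio app whose earlier lines are missing from this hunk) renders explanatory Markdown plus the contents of `info.md`, skipping that file's first line. A minimal hedged sketch of the same pattern using only the core Blocks API; the file name and texts are placeholders, not the deleted app's actual content:

```python
import gradio as gr

# Sketch of the pattern above: skip the first line of a Markdown file
# (often a title rendered elsewhere) and show the rest inside a Blocks app.
with gr.Blocks() as block:
    gr.Markdown("⭐ Model accuracy reflects performance on the uploaded data only.")
    with open("info.md") as f:   # assumes an info.md next to the app
        f.readline()             # drop the first line
        gr.Markdown(f.read())

if __name__ == "__main__":
    block.launch()
```

The deleted app calls `block.launch()` unguarded at module level; wrapping it in `__main__` here just keeps the sketch importable.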
diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/testing_utils.py b/spaces/Salesforce/EDICT/my_half_diffusers/testing_utils.py
deleted file mode 100644
index ff8b6aa9b41c45b0ab77f343904bffc53fa9e9cb..0000000000000000000000000000000000000000
--- a/spaces/Salesforce/EDICT/my_half_diffusers/testing_utils.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import os
-import random
-import unittest
-from distutils.util import strtobool
-
-import torch
-
-from packaging import version
-
-
-global_rng = random.Random()
-torch_device = "cuda" if torch.cuda.is_available() else "cpu"
-is_torch_higher_equal_than_1_12 = version.parse(version.parse(torch.__version__).base_version) >= version.parse("1.12")
-
-if is_torch_higher_equal_than_1_12:
- torch_device = "mps" if torch.backends.mps.is_available() else torch_device
-
-
-def parse_flag_from_env(key, default=False):
- try:
- value = os.environ[key]
- except KeyError:
- # KEY isn't set, default to `default`.
- _value = default
- else:
- # KEY is set, convert it to True or False.
- try:
- _value = strtobool(value)
- except ValueError:
- # More values are supported, but let's keep the message simple.
- raise ValueError(f"If set, {key} must be yes or no.")
- return _value
-
-
-_run_slow_tests = parse_flag_from_env("RUN_SLOW", default=False)
-
-
-def floats_tensor(shape, scale=1.0, rng=None, name=None):
- """Creates a random float32 tensor"""
- if rng is None:
- rng = global_rng
-
- total_dims = 1
- for dim in shape:
- total_dims *= dim
-
- values = []
- for _ in range(total_dims):
- values.append(rng.random() * scale)
-
- return torch.tensor(data=values, dtype=torch.float).view(shape).contiguous()
-
-
-def slow(test_case):
- """
- Decorator marking a test as slow.
-
- Slow tests are skipped by default. Set the RUN_SLOW environment variable to a truthy value to run them.
-
- """
- return unittest.skipUnless(_run_slow_tests, "test is slow")(test_case)
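For context, `floats_tensor` and the `@slow` decorator above are meant to be used from test modules: `@slow` wraps `unittest.skipUnless`, so decorated tests only run when the `RUN_SLOW` environment variable parses as truthy. A hypothetical usage sketch (the import path `testing_utils` is assumed; in the deleted tree the module lives under `my_half_diffusers`):

```python
import unittest

# Assumed import path; adjust to wherever the helpers above actually live.
from testing_utils import floats_tensor, slow, torch_device


class TinyModelTest(unittest.TestCase):
    def test_shapes(self):
        x = floats_tensor((2, 3), scale=0.5)       # random float32 tensor
        self.assertEqual(tuple(x.shape), (2, 3))

    @slow
    def test_expensive_path(self):
        # Skipped unless RUN_SLOW is set to a truthy value.
        x = floats_tensor((64, 64)).to(torch_device)
        self.assertEqual(x.numel(), 64 * 64)


if __name__ == "__main__":
    unittest.main()
```

Running `RUN_SLOW=1 python -m unittest` would then include `test_expensive_path`, while a plain `python -m unittest` skips it.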
diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/clostridial enteritis (overeating disease).md b/spaces/SarthakSidhant/Go-Cattle/diseases/clostridial enteritis (overeating disease).md
deleted file mode 100644
index c072d6f0eb88277988b60f76d45c37458f3d373f..0000000000000000000000000000000000000000
--- a/spaces/SarthakSidhant/Go-Cattle/diseases/clostridial enteritis (overeating disease).md
+++ /dev/null
@@ -1,33 +0,0 @@
-## Clostridial enteritis (overeating disease)
-
-**Information:** Clostridial enteritis, also known as **pulpy kidney**, is a bacterial infection that affects cattle. It is caused by a bacterium called **Clostridium perfringens**.
-
-**Symptoms:**
-
-* Rapid onset of fever
-* Depression
-* Sudden death
-
-**Remedies:**
-
-* There is no specific cure for clostridial enteritis.
-* Treatment is usually supportive and may include:
- * Administering antibiotics
- * Providing fluids and electrolytes
- * Treating other underlying conditions
-
-**Causes:**
-
-* Clostridial enteritis is caused by a bacterium called **Clostridium perfringens**.
-* This bacterium is found in the soil and can enter the body through the digestive tract.
-* Clostridial enteritis is more common in cattle that are stressed or malnourished.
-* Clostridial enteritis can also be spread through contact with infected cattle or their feces.
-
-**Prevention:**
-
-* The best way to prevent clostridial enteritis is to:
- * Feed cattle a balanced diet
- * Avoid grazing cattle in areas where the bacterium is common
- * Vaccinate cattle against clostridial enteritis
-
-**Note:** Clostridial enteritis is often referred to as "overeating disease" because it is most common in cattle that have recently been fed a large amount of grain or other high-energy feed.
diff --git a/spaces/Spark808/rvc-demo/infer_pack/models_onnx.py b/spaces/Spark808/rvc-demo/infer_pack/models_onnx.py
deleted file mode 100644
index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000
--- a/spaces/Spark808/rvc-demo/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine-waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products can no longer be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # a % 1 here would mean the later cumsum can no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- # this variant defines no enc_q, so there is nothing else to remove
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
- ): # y (the spec) is no longer needed here
- g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1] -- the trailing 1 is t, broadcast
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
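The core of `SineGen.forward` above is: stack integer multiples of the per-sample F0, divide by the sampling rate, integrate to get phase, and take the sine. A toy NumPy sketch of just that idea; it deliberately omits the unvoiced masking, random initial phase, and the `upp` upsampling the real class performs, and `harmonic_sines` is an illustrative name:

```python
import numpy as np

def harmonic_sines(f0, sr=16000, n_harmonics=4, amp=0.1):
    """Toy SineGen: sines at f0, 2*f0, ... with phase from integrated frequency."""
    f0 = np.asarray(f0, dtype=np.float64)                # per-sample F0 in Hz, shape (T,)
    mult = np.arange(1, n_harmonics + 1)                 # 1, 2, ..., H
    freqs = f0[:, None] * mult[None, :]                  # (T, H) harmonic stack
    phase = 2 * np.pi * np.cumsum(freqs / sr, axis=0)    # integrate frequency -> phase
    return amp * np.sin(phase)                           # (T, H) sine bank

# 100 ms of a 220 Hz tone with 4 harmonics
f0_track = np.full(1600, 220.0)
waves = harmonic_sines(f0_track)
print(waves.shape)   # (1600, 4)
```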
diff --git a/spaces/Sreezx/Sentzi/test/cli.py b/spaces/Sreezx/Sentzi/test/cli.py
deleted file mode 100644
index 269b981b61ad236b9f64bf8fb02f6a9d587fbcf2..0000000000000000000000000000000000000000
--- a/spaces/Sreezx/Sentzi/test/cli.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import sys
-try:
- from tqdm import tqdm
- import time
- import pyperclip
- import webbrowser
- import requests
- from test_utils.data import testModel
- import typer
- from enum import Enum
- from rich.console import Console
- from rich.panel import Panel
- from rich.box import SIMPLE_HEAVY
- from pathlib import Path
- import typing
- import subprocess
-except ImportError as e:
- from test_utils.debug import logger
- logger.error(f"Failed importing dependencies ({e})")
- sys.exit(1)  # non-zero exit code on failure
-
-# import logging
-from test_utils.debug import logger
-
-# init rich console
-console = Console()
-
-# normalize options: strip leading dashes so '-opt' and '--opt' are treated the same
-def token_normalize_func(value):
- if value.startswith('-'):
- return value.lstrip('-')
- return value
-
-CONTEXT_SETTINGS = dict(help_option_names=['-h', '-help'], token_normalize_func=token_normalize_func)
-
-cli = typer.Typer(
- help=' CLI tool 🛠️ to test sentzi backend ',
- add_completion=False,
- rich_markup_mode="rich",
- context_settings=CONTEXT_SETTINGS,
- epilog="Made with ❤️ by [bright_cyan]sreezx[/bright_cyan] [bright_green]@github.com/sreezx[/bright_green]"
- )
-
-# create output mode class
-class OutputMode(str, Enum):
- show = "show"
- hidden = "hidden"
- scroll = "scroll"
-
-# create open class
-class Open(str, Enum):
- st_app_local = "st-app:local"
- st_app_cloud = "st-app:cloud"
- repo = "repo"
- hg_space = "hgf-space"
-
-def opens(
- method : Open,
- log : bool
-) -> (typing.Callable | None):
- def on_enter(link : str, again : bool = False) -> None:
- """ Open webbrowser on enter """
- if not again:
- console.print(
- Panel("To [blue]locate[/blue] the link in your default webbrowser press '[yellow]enter[/yellow]' . "
- "Press '[yellow]q[/yellow]' to exit ."
- ,box=SIMPLE_HEAVY)
- )
- def Prompt() -> str:
- ask = console.input(" : ")
- return ask
- prompt = Prompt()
- if prompt in [""]:
- webbrowser.open(link)
- if log:
- logger.success("Link opened in browser ! ✨")
-
- elif prompt.lower() in ["q"]:
- sys.exit(0)
-
- else:
- on_enter(link,again=True)
- def if_repo() -> None:
- pyperclip.copy("https://github.com/sreezx/Sentzi")
- logger.success("Repo link copied to clipboard [link : https://github.com/sreezx/Sentzi] ✨")
- on_enter("https://github.com/sreezx/Sentzi")
- def if_st_local() -> None:
- logger.debug("Running bat file to connect with 'run.ps1' ... ")
- subprocess.run(f'{Path().cwd() / "bin/do.bat"}') # run the bat file
- def if_st_cloud() -> None:
- pyperclip.copy("https://sentzi.streamlit.app/")
- logger.success("App link copied to clipboard [link : https://sentzi.streamlit.app/] ✨")
- on_enter("https://sentzi.streamlit.app/")
- def HgF() -> None:
- pyperclip.copy("https://huggingface.co/spaces/Sreezx/Sentzi")
- logger.success("Hugging Face Space link copied to clipboard [link : https://huggingface.co/spaces/Sreezx/Sentzi] ✨")
- on_enter("https://huggingface.co/spaces/Sreezx/Sentzi")
-
- FuncsDict = {
- "st-app:local" : lambda : if_st_local(),
- "st-app:cloud" : lambda : if_st_cloud(),
- "repo" : lambda : if_repo(),
- "hgf-space" : lambda : HgF()
- }
- return FuncsDict.get(method.value, lambda : None)
-
-def show_version(
- log : bool,
-):
- version_url = "https://cdn.jsdelivr.net/gh/sreezx/Sentzi/version"
- if log:
- logger.debug(f"Called 'sentzi-test.py {sys.argv[1:]}' ")
- logger.info(f"Getting version info from : '{version_url}'")
-
- # Create a tqdm progress bar
- try:
- version = requests.get(version_url, stream=True)
- except (requests.HTTPError, requests.ConnectionError):
- if log:
- logger.error("Failed connecting to server ! Make sure you have an active internet connection .")
- sys.exit(0)
- total_size = int(version.headers.get('content-length', 0))
-
- with tqdm(total=total_size, unit='B', unit_scale=True, desc="Getting version info",ncols=80) as pbar:
- with open('temp_version.txt', 'wb') as f:
- for data in version.iter_content(chunk_size=1024):
- time.sleep(0.5) # delay the bar
- pbar.update(len(data)) # Update the progress bar
- f.write(data) # write the version
-
- # show as a panel
- console.print(
- Panel(
- f"[blue]sentzi[/blue] 🏷️ [yellow]{Path('temp_version.txt').read_text(encoding='utf-8')}[/yellow] "
- ,expand=False,box=SIMPLE_HEAVY)
- )
- if log:
- logger.info('Deleting the temporary version file (temp_version.txt)')
- # delete the file
- Path('temp_version.txt').unlink(missing_ok=True)
-
-
-# flags
-@cli.callback(invoke_without_command=True,no_args_is_help=True)
-def no_cmds(
- version : typing.Optional[bool] = typer.Option(
- None,
- '-version',
- '-v',
- is_eager=True,
- is_flag=True,
- help='Show version and exit .'
- ),
- log : typing.Optional[bool] = typer.Option(
- True,
- '-log/-no-log','-L/-nL',
- is_eager=True,
- is_flag=True,
- help="Enable or disable logging .",
- show_default=True
- ),
- With : typing.Optional[str] = typer.Option(
- None,
- '-with',
- '-W',
- help="Get the sentiment of a text or from a text file. To analyze "
- "external datasets enter '[magenta]ext.data[/magenta]'",
- show_default=False,
- metavar=" PATH | STR | 'ext.data' ",
- rich_help_panel="'With' Options"
- ),
- save_json : typing.Optional[bool] = typer.Option(
- None,
- '-save',
- '-S',
- is_eager=False,
- is_flag=True,
- help="Save '[blue]With[/blue]' result to a '[magenta]json[/magenta]' file .",
- rich_help_panel="'With' Options"
- ),
- output : typing.Optional[OutputMode] = typer.Option(
- OutputMode.show.value,
- '-output',
- '-o',
- case_sensitive=False,
- show_default=True,
- help="Different modes to display the '[blue]With[/blue]' result. "
- "The default way is '[yellow]show[/yellow]'. '[yellow]hidden[/yellow]' hides"
- " the result completely. To view large results give '[yellow]scroll[/yellow]' as the mode . ",
- rich_help_panel="'With' Options"
- ),
- N : typing.Optional[int] = typer.Option(
- 1,
- '-n',
- '-N',
- show_default=True,
- max=20,
- min=1,
- help="Number of reviews to select from the external dataset . Max is '20' and Min '1' .",
- rich_help_panel="'With' Options"
- ),
- _open : typing.Optional[Open] = typer.Option(
- None,
- '-open',
- '-!',
- case_sensitive=False,
- help="To run main application locally just enter '[yellow]-! st-app:local[/yellow]' . "
- " To run from the [magenta]Streamlit[/magenta] cloud use '[yellow]-! st-app:cloud[/yellow]' ."
- "For opening the official [magenta]github[/magenta] repo enter '[yellow]-! repo[/yellow]'"
- ". Another site you can open is the official [magenta]Hugging Face Space[/magenta] of '[cyan]sentzi[/cyan]' , using '[yellow]-! hg-space[/yellow]' ")
-):
- flags = {
- version : lambda : show_version(log),
- With : lambda : testModel(With, log,save_json,output,N),
- _open : lambda : opens(_open, log)(),
- }
- # parse the flags
- for flag in flags.keys():
- if flag:
- flags.get(flag,lambda : None)() # execute the flag
-
-
-
-
-
-
-
\ No newline at end of file
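The `no_cmds` callback above dispatches through a dict that maps each parsed flag value to a zero-argument lambda and then runs the truthy ones. A small self-contained sketch of that pattern (names are illustrative, not from the real CLI):

```python
from typing import Callable, Optional

def show_version() -> None:
    print("demo 0.1.0")

def analyze(text: str) -> None:
    print(f"analyzing: {text!r}")

def dispatch(version: bool, with_text: Optional[str]) -> None:
    # Map each flag *value* to the action it should trigger.
    flags: dict = {
        version: lambda: show_version(),
        with_text: lambda: analyze(with_text),
    }
    for flag, action in flags.items():
        if flag:           # only truthy flags fire
            action()

dispatch(version=True, with_text="great product!")
```

One caveat of keying the dict on the flag values rather than their names: two flags that happen to share a value (for example, two options both left at `None`) collapse into a single dict entry, so this style relies on the defaults being distinct.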
diff --git a/spaces/SujanMidatani/speechToText/Dockerfile b/spaces/SujanMidatani/speechToText/Dockerfile
deleted file mode 100644
index e6e1cc0418211f2721e63505f63fc34dc4e8dc1b..0000000000000000000000000000000000000000
--- a/spaces/SujanMidatani/speechToText/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM huggingface/transformers-pytorch-cpu
-
-# Install system-level dependencies
-RUN apt-get update && apt-get install -y \
- libasound2-dev \
- portaudio19-dev \
- libportaudio2 \
- libportaudiocpp0 \
- ffmpeg
-
-
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_synchronization.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_synchronization.py
deleted file mode 100644
index cbf7d0f584514d99bd58512d270760cc49e8b690..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/_core/_synchronization.py
+++ /dev/null
@@ -1,596 +0,0 @@
-from __future__ import annotations
-
-from collections import deque
-from dataclasses import dataclass
-from types import TracebackType
-from warnings import warn
-
-from ..lowlevel import cancel_shielded_checkpoint, checkpoint, checkpoint_if_cancelled
-from ._compat import DeprecatedAwaitable
-from ._eventloop import get_asynclib
-from ._exceptions import BusyResourceError, WouldBlock
-from ._tasks import CancelScope
-from ._testing import TaskInfo, get_current_task
-
-
-@dataclass(frozen=True)
-class EventStatistics:
- """
- :ivar int tasks_waiting: number of tasks waiting on :meth:`~.Event.wait`
- """
-
- tasks_waiting: int
-
-
-@dataclass(frozen=True)
-class CapacityLimiterStatistics:
- """
- :ivar int borrowed_tokens: number of tokens currently borrowed by tasks
- :ivar float total_tokens: total number of available tokens
- :ivar tuple borrowers: tasks or other objects currently holding tokens borrowed from this
- limiter
- :ivar int tasks_waiting: number of tasks waiting on :meth:`~.CapacityLimiter.acquire` or
- :meth:`~.CapacityLimiter.acquire_on_behalf_of`
- """
-
- borrowed_tokens: int
- total_tokens: float
- borrowers: tuple[object, ...]
- tasks_waiting: int
-
-
-@dataclass(frozen=True)
-class LockStatistics:
- """
- :ivar bool locked: flag indicating if this lock is locked or not
- :ivar ~anyio.TaskInfo owner: task currently holding the lock (or ``None`` if the lock is not
- held by any task)
- :ivar int tasks_waiting: number of tasks waiting on :meth:`~.Lock.acquire`
- """
-
- locked: bool
- owner: TaskInfo | None
- tasks_waiting: int
-
-
-@dataclass(frozen=True)
-class ConditionStatistics:
- """
- :ivar int tasks_waiting: number of tasks blocked on :meth:`~.Condition.wait`
- :ivar ~anyio.LockStatistics lock_statistics: statistics of the underlying :class:`~.Lock`
- """
-
- tasks_waiting: int
- lock_statistics: LockStatistics
-
-
-@dataclass(frozen=True)
-class SemaphoreStatistics:
- """
- :ivar int tasks_waiting: number of tasks waiting on :meth:`~.Semaphore.acquire`
-
- """
-
- tasks_waiting: int
-
-
-class Event:
- def __new__(cls) -> Event:
- return get_asynclib().Event()
-
- def set(self) -> DeprecatedAwaitable:
- """Set the flag, notifying all listeners."""
- raise NotImplementedError
-
- def is_set(self) -> bool:
- """Return ``True`` if the flag is set, ``False`` if not."""
- raise NotImplementedError
-
- async def wait(self) -> None:
- """
- Wait until the flag has been set.
-
- If the flag has already been set when this method is called, it returns immediately.
-
- """
- raise NotImplementedError
-
- def statistics(self) -> EventStatistics:
- """Return statistics about the current state of this event."""
- raise NotImplementedError
-
-
-class Lock:
- _owner_task: TaskInfo | None = None
-
- def __init__(self) -> None:
- self._waiters: deque[tuple[TaskInfo, Event]] = deque()
-
- async def __aenter__(self) -> None:
- await self.acquire()
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- self.release()
-
- async def acquire(self) -> None:
- """Acquire the lock."""
- await checkpoint_if_cancelled()
- try:
- self.acquire_nowait()
- except WouldBlock:
- task = get_current_task()
- event = Event()
- token = task, event
- self._waiters.append(token)
- try:
- await event.wait()
- except BaseException:
- if not event.is_set():
- self._waiters.remove(token)
- elif self._owner_task == task:
- self.release()
-
- raise
-
- assert self._owner_task == task
- else:
- try:
- await cancel_shielded_checkpoint()
- except BaseException:
- self.release()
- raise
-
- def acquire_nowait(self) -> None:
- """
- Acquire the lock, without blocking.
-
- :raises ~WouldBlock: if the operation would block
-
- """
- task = get_current_task()
- if self._owner_task == task:
- raise RuntimeError("Attempted to acquire an already held Lock")
-
- if self._owner_task is not None:
- raise WouldBlock
-
- self._owner_task = task
-
- def release(self) -> DeprecatedAwaitable:
- """Release the lock."""
- if self._owner_task != get_current_task():
- raise RuntimeError("The current task is not holding this lock")
-
- if self._waiters:
- self._owner_task, event = self._waiters.popleft()
- event.set()
- else:
- del self._owner_task
-
- return DeprecatedAwaitable(self.release)
-
- def locked(self) -> bool:
- """Return True if the lock is currently held."""
- return self._owner_task is not None
-
- def statistics(self) -> LockStatistics:
- """
- Return statistics about the current state of this lock.
-
- .. versionadded:: 3.0
- """
- return LockStatistics(self.locked(), self._owner_task, len(self._waiters))
-
-
-class Condition:
- _owner_task: TaskInfo | None = None
-
- def __init__(self, lock: Lock | None = None):
- self._lock = lock or Lock()
- self._waiters: deque[Event] = deque()
-
- async def __aenter__(self) -> None:
- await self.acquire()
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- self.release()
-
- def _check_acquired(self) -> None:
- if self._owner_task != get_current_task():
- raise RuntimeError("The current task is not holding the underlying lock")
-
- async def acquire(self) -> None:
- """Acquire the underlying lock."""
- await self._lock.acquire()
- self._owner_task = get_current_task()
-
- def acquire_nowait(self) -> None:
- """
- Acquire the underlying lock, without blocking.
-
- :raises ~WouldBlock: if the operation would block
-
- """
- self._lock.acquire_nowait()
- self._owner_task = get_current_task()
-
- def release(self) -> DeprecatedAwaitable:
- """Release the underlying lock."""
- self._lock.release()
- return DeprecatedAwaitable(self.release)
-
- def locked(self) -> bool:
- """Return True if the lock is set."""
- return self._lock.locked()
-
- def notify(self, n: int = 1) -> None:
- """Notify exactly n listeners."""
- self._check_acquired()
- for _ in range(n):
- try:
- event = self._waiters.popleft()
- except IndexError:
- break
-
- event.set()
-
- def notify_all(self) -> None:
- """Notify all the listeners."""
- self._check_acquired()
- for event in self._waiters:
- event.set()
-
- self._waiters.clear()
-
- async def wait(self) -> None:
- """Wait for a notification."""
- await checkpoint()
- event = Event()
- self._waiters.append(event)
- self.release()
- try:
- await event.wait()
- except BaseException:
- if not event.is_set():
- self._waiters.remove(event)
-
- raise
- finally:
- with CancelScope(shield=True):
- await self.acquire()
-
- def statistics(self) -> ConditionStatistics:
- """
- Return statistics about the current state of this condition.
-
- .. versionadded:: 3.0
- """
- return ConditionStatistics(len(self._waiters), self._lock.statistics())
-
-
-class Semaphore:
- def __init__(self, initial_value: int, *, max_value: int | None = None):
- if not isinstance(initial_value, int):
- raise TypeError("initial_value must be an integer")
- if initial_value < 0:
- raise ValueError("initial_value must be >= 0")
- if max_value is not None:
- if not isinstance(max_value, int):
- raise TypeError("max_value must be an integer or None")
- if max_value < initial_value:
- raise ValueError(
- "max_value must be equal to or higher than initial_value"
- )
-
- self._value = initial_value
- self._max_value = max_value
- self._waiters: deque[Event] = deque()
-
- async def __aenter__(self) -> Semaphore:
- await self.acquire()
- return self
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> None:
- self.release()
-
- async def acquire(self) -> None:
- """Decrement the semaphore value, blocking if necessary."""
- await checkpoint_if_cancelled()
- try:
- self.acquire_nowait()
- except WouldBlock:
- event = Event()
- self._waiters.append(event)
- try:
- await event.wait()
- except BaseException:
- if not event.is_set():
- self._waiters.remove(event)
- else:
- self.release()
-
- raise
- else:
- try:
- await cancel_shielded_checkpoint()
- except BaseException:
- self.release()
- raise
-
- def acquire_nowait(self) -> None:
- """
- Acquire the underlying lock, without blocking.
-
- :raises ~WouldBlock: if the operation would block
-
- """
- if self._value == 0:
- raise WouldBlock
-
- self._value -= 1
-
- def release(self) -> DeprecatedAwaitable:
- """Increment the semaphore value."""
- if self._max_value is not None and self._value == self._max_value:
- raise ValueError("semaphore released too many times")
-
- if self._waiters:
- self._waiters.popleft().set()
- else:
- self._value += 1
-
- return DeprecatedAwaitable(self.release)
-
- @property
- def value(self) -> int:
- """The current value of the semaphore."""
- return self._value
-
- @property
- def max_value(self) -> int | None:
- """The maximum value of the semaphore."""
- return self._max_value
-
- def statistics(self) -> SemaphoreStatistics:
- """
- Return statistics about the current state of this semaphore.
-
- .. versionadded:: 3.0
- """
- return SemaphoreStatistics(len(self._waiters))
-
-
-class CapacityLimiter:
- def __new__(cls, total_tokens: float) -> CapacityLimiter:
- return get_asynclib().CapacityLimiter(total_tokens)
-
- async def __aenter__(self) -> None:
- raise NotImplementedError
-
- async def __aexit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- raise NotImplementedError
-
- @property
- def total_tokens(self) -> float:
- """
- The total number of tokens available for borrowing.
-
- This is a read-write property. If the total number of tokens is increased, the
- proportionate number of tasks waiting on this limiter will be granted their tokens.
-
- .. versionchanged:: 3.0
- The property is now writable.
-
- """
- raise NotImplementedError
-
- @total_tokens.setter
- def total_tokens(self, value: float) -> None:
- raise NotImplementedError
-
- async def set_total_tokens(self, value: float) -> None:
- warn(
- "CapacityLimiter.set_total_tokens has been deprecated. Set the value of the"
- '"total_tokens" attribute directly.',
- DeprecationWarning,
- )
- self.total_tokens = value
-
- @property
- def borrowed_tokens(self) -> int:
- """The number of tokens that have currently been borrowed."""
- raise NotImplementedError
-
- @property
- def available_tokens(self) -> float:
- """The number of tokens currently available to be borrowed"""
- raise NotImplementedError
-
- def acquire_nowait(self) -> DeprecatedAwaitable:
- """
- Acquire a token for the current task without waiting for one to become available.
-
- :raises ~anyio.WouldBlock: if there are no tokens available for borrowing
-
- """
- raise NotImplementedError
-
- def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable:
- """
- Acquire a token without waiting for one to become available.
-
- :param borrower: the entity borrowing a token
- :raises ~anyio.WouldBlock: if there are no tokens available for borrowing
-
- """
- raise NotImplementedError
-
- async def acquire(self) -> None:
- """
- Acquire a token for the current task, waiting if necessary for one to become available.
-
- """
- raise NotImplementedError
-
- async def acquire_on_behalf_of(self, borrower: object) -> None:
- """
- Acquire a token, waiting if necessary for one to become available.
-
- :param borrower: the entity borrowing a token
-
- """
- raise NotImplementedError
-
- def release(self) -> None:
- """
- Release the token held by the current task.
- :raises RuntimeError: if the current task has not borrowed a token from this limiter.
-
- """
- raise NotImplementedError
-
- def release_on_behalf_of(self, borrower: object) -> None:
- """
- Release the token held by the given borrower.
-
- :raises RuntimeError: if the borrower has not borrowed a token from this limiter.
-
- """
- raise NotImplementedError
-
- def statistics(self) -> CapacityLimiterStatistics:
- """
- Return statistics about the current state of this limiter.
-
- .. versionadded:: 3.0
-
- """
- raise NotImplementedError
-
-
-def create_lock() -> Lock:
- """
- Create an asynchronous lock.
-
- :return: a lock object
-
- .. deprecated:: 3.0
- Use :class:`~Lock` directly.
-
- """
- warn("create_lock() is deprecated -- use Lock() directly", DeprecationWarning)
- return Lock()
-
-
-def create_condition(lock: Lock | None = None) -> Condition:
- """
- Create an asynchronous condition.
-
- :param lock: the lock to base the condition object on
- :return: a condition object
-
- .. deprecated:: 3.0
- Use :class:`~Condition` directly.
-
- """
- warn(
- "create_condition() is deprecated -- use Condition() directly",
- DeprecationWarning,
- )
- return Condition(lock=lock)
-
-
-def create_event() -> Event:
- """
- Create an asynchronous event object.
-
- :return: an event object
-
- .. deprecated:: 3.0
- Use :class:`~Event` directly.
-
- """
- warn("create_event() is deprecated -- use Event() directly", DeprecationWarning)
- return get_asynclib().Event()
-
-
-def create_semaphore(value: int, *, max_value: int | None = None) -> Semaphore:
- """
- Create an asynchronous semaphore.
-
- :param value: the semaphore's initial value
- :param max_value: if set, makes this a "bounded" semaphore that raises :exc:`ValueError` if the
- semaphore's value would exceed this number
- :return: a semaphore object
-
- .. deprecated:: 3.0
- Use :class:`~Semaphore` directly.
-
- """
- warn(
- "create_semaphore() is deprecated -- use Semaphore() directly",
- DeprecationWarning,
- )
- return Semaphore(value, max_value=max_value)
-
-
-def create_capacity_limiter(total_tokens: float) -> CapacityLimiter:
- """
- Create a capacity limiter.
-
- :param total_tokens: the total number of tokens available for borrowing (can be an integer or
- :data:`math.inf`)
- :return: a capacity limiter object
-
- .. deprecated:: 3.0
- Use :class:`~CapacityLimiter` directly.
-
- """
- warn(
- "create_capacity_limiter() is deprecated -- use CapacityLimiter() directly",
- DeprecationWarning,
- )
- return get_asynclib().CapacityLimiter(total_tokens)
-
-
-class ResourceGuard:
- __slots__ = "action", "_guarded"
-
- def __init__(self, action: str):
- self.action = action
- self._guarded = False
-
- def __enter__(self) -> None:
- if self._guarded:
- raise BusyResourceError(self.action)
-
- self._guarded = True
-
- def __exit__(
- self,
- exc_type: type[BaseException] | None,
- exc_val: BaseException | None,
- exc_tb: TracebackType | None,
- ) -> bool | None:
- self._guarded = False
- return None
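The module above is anyio's internal implementation of these primitives; day-to-day code reaches them through the public `anyio` namespace. A short usage sketch, assuming anyio 3.x (consistent with the deprecation warnings above):

```python
import anyio

async def worker(name: str, lock: anyio.Lock, limiter: anyio.CapacityLimiter) -> None:
    async with limiter:          # borrow one capacity token
        async with lock:         # only one task at a time past this point
            print(f"{name} holds the lock;",
                  f"{limiter.borrowed_tokens}/{limiter.total_tokens} tokens borrowed")
            await anyio.sleep(0.01)

async def main() -> None:
    lock = anyio.Lock()
    limiter = anyio.CapacityLimiter(2)   # at most two workers inside at once
    async with anyio.create_task_group() as tg:
        for i in range(4):
            tg.start_soon(worker, f"task-{i}", lock, limiter)
    print(limiter.statistics())          # CapacityLimiterStatistics dataclass from above

anyio.run(main)
```

`Lock` serializes a critical section, `Semaphore`/`CapacityLimiter` bound concurrency, and the `statistics()` methods return the frozen dataclasses defined at the top of the module.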
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_code.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_code.py
deleted file mode 100644
index 6938bd1bfec907c06b6e45deef795ecd53688b12..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_frame_eval/vendored/bytecode/tests/test_code.py
+++ /dev/null
@@ -1,93 +0,0 @@
-
-import pytest
-from tests_python.debugger_unittest import IS_PY36_OR_GREATER, IS_CPYTHON
-from tests_python.debug_constants import TEST_CYTHON
-pytestmark = pytest.mark.skipif(not IS_PY36_OR_GREATER or not IS_CPYTHON or not TEST_CYTHON, reason='Requires CPython >= 3.6')
-import unittest
-
-from _pydevd_frame_eval.vendored.bytecode import ConcreteBytecode, Bytecode, ControlFlowGraph
-from _pydevd_frame_eval.vendored.bytecode.tests import get_code
-
-
-class CodeTests(unittest.TestCase):
- """Check that bytecode.from_code(code).to_code() returns code."""
-
- def check(self, source, function=False):
- ref_code = get_code(source, function=function)
-
- code = ConcreteBytecode.from_code(ref_code).to_code()
- self.assertEqual(code, ref_code)
-
- code = Bytecode.from_code(ref_code).to_code()
- self.assertEqual(code, ref_code)
-
- bytecode = Bytecode.from_code(ref_code)
- blocks = ControlFlowGraph.from_bytecode(bytecode)
- code = blocks.to_bytecode().to_code()
- self.assertEqual(code, ref_code)
-
- def test_loop(self):
- self.check(
- """
- for x in range(1, 10):
- x += 1
- if x == 3:
- continue
- x -= 1
- if x > 7:
- break
- x = 0
- print(x)
- """
- )
-
- def test_varargs(self):
- self.check(
- """
- def func(a, b, *varargs):
- pass
- """,
- function=True,
- )
-
- def test_kwargs(self):
- self.check(
- """
- def func(a, b, **kwargs):
- pass
- """,
- function=True,
- )
-
- def test_kwonlyargs(self):
- self.check(
- """
- def func(*, arg, arg2):
- pass
- """,
- function=True,
- )
-
- # Added because Python 3.10 added some special behavior with respect to
- # generators in terms of stack size
- def test_generator_func(self):
- self.check(
- """
- def func(arg, arg2):
- yield
- """,
- function=True,
- )
-
- def test_async_func(self):
- self.check(
- """
- async def func(arg, arg2):
- pass
- """,
- function=True,
- )
-
-
-if __name__ == "__main__":
- unittest.main() # pragma: no cover
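The tests above all reduce to one invariant: converting a code object to concrete or abstract bytecode, optionally through a control-flow graph, and back should reproduce an equivalent code object. A hedged standalone sketch against the public `bytecode` package (`pip install bytecode`); the deleted file exercises a copy vendored inside pydevd, so only the import path differs:

```python
from bytecode import Bytecode, ConcreteBytecode, ControlFlowGraph

def sample(a, b, *varargs):
    return a + b + len(varargs)

ref_code = sample.__code__

# code -> abstract bytecode -> code
rebuilt_abstract = Bytecode.from_code(ref_code).to_code()

# code -> concrete bytecode -> code
rebuilt_concrete = ConcreteBytecode.from_code(ref_code).to_code()
print(rebuilt_concrete.co_varnames == ref_code.co_varnames)

# code -> bytecode -> control-flow graph -> bytecode -> code
cfg = ControlFlowGraph.from_bytecode(Bytecode.from_code(ref_code))
print(cfg.to_bytecode().to_code().co_argcount == ref_code.co_argcount)
```

Exact equality of the regenerated code object with the original (as the vendored tests assert) depends on running the same interpreter version that compiled it.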
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/augmentation_impl.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/augmentation_impl.py
deleted file mode 100644
index 965f0a947d7c3ff03b0990f1a645703d470227de..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/data/transforms/augmentation_impl.py
+++ /dev/null
@@ -1,736 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-"""
-Implement many useful :class:`Augmentation`.
-"""
-import numpy as np
-import sys
-from numpy import random
-from typing import Tuple
-import torch
-from fvcore.transforms.transform import (
- BlendTransform,
- CropTransform,
- HFlipTransform,
- NoOpTransform,
- PadTransform,
- Transform,
- TransformList,
- VFlipTransform,
-)
-from PIL import Image
-
-from annotator.oneformer.detectron2.structures import Boxes, pairwise_iou
-
-from .augmentation import Augmentation, _transform_to_aug
-from .transform import ExtentTransform, ResizeTransform, RotationTransform
-
-__all__ = [
- "FixedSizeCrop",
- "RandomApply",
- "RandomBrightness",
- "RandomContrast",
- "RandomCrop",
- "RandomExtent",
- "RandomFlip",
- "RandomSaturation",
- "RandomLighting",
- "RandomRotation",
- "Resize",
- "ResizeScale",
- "ResizeShortestEdge",
- "RandomCrop_CategoryAreaConstraint",
- "RandomResize",
- "MinIoURandomCrop",
-]
-
-
-class RandomApply(Augmentation):
- """
- Randomly apply an augmentation with a given probability.
- """
-
- def __init__(self, tfm_or_aug, prob=0.5):
- """
- Args:
- tfm_or_aug (Transform, Augmentation): the transform or augmentation
- to be applied. It can either be a `Transform` or `Augmentation`
- instance.
- prob (float): probability between 0.0 and 1.0 that
- the wrapper transformation is applied
- """
- super().__init__()
- self.aug = _transform_to_aug(tfm_or_aug)
- assert 0.0 <= prob <= 1.0, f"Probability must be between 0.0 and 1.0 (given: {prob})"
- self.prob = prob
-
- def get_transform(self, *args):
- do = self._rand_range() < self.prob
- if do:
- return self.aug.get_transform(*args)
- else:
- return NoOpTransform()
-
- def __call__(self, aug_input):
- do = self._rand_range() < self.prob
- if do:
- return self.aug(aug_input)
- else:
- return NoOpTransform()
-
-
-class RandomFlip(Augmentation):
- """
- Flip the image horizontally or vertically with the given probability.
- """
-
- def __init__(self, prob=0.5, *, horizontal=True, vertical=False):
- """
- Args:
- prob (float): probability of flip.
- horizontal (boolean): whether to apply horizontal flipping
- vertical (boolean): whether to apply vertical flipping
- """
- super().__init__()
-
- if horizontal and vertical:
- raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.")
- if not horizontal and not vertical:
- raise ValueError("At least one of horiz or vert has to be True!")
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- do = self._rand_range() < self.prob
- if do:
- if self.horizontal:
- return HFlipTransform(w)
- elif self.vertical:
- return VFlipTransform(h)
- else:
- return NoOpTransform()
-
-
-class Resize(Augmentation):
- """Resize image to a fixed target size"""
-
- def __init__(self, shape, interp=Image.BILINEAR):
- """
- Args:
- shape: (h, w) tuple or a int
- interp: PIL interpolation method
- """
- if isinstance(shape, int):
- shape = (shape, shape)
- shape = tuple(shape)
- self._init(locals())
-
- def get_transform(self, image):
- return ResizeTransform(
- image.shape[0], image.shape[1], self.shape[0], self.shape[1], self.interp
- )
-
-
-class ResizeShortestEdge(Augmentation):
- """
- Resize the image while keeping the aspect ratio unchanged.
- It attempts to scale the shorter edge to the given `short_edge_length`,
- as long as the longer edge does not exceed `max_size`.
- If `max_size` is reached, then downscale so that the longer edge does not exceed max_size.
- """
-
- @torch.jit.unused
- def __init__(
- self, short_edge_length, max_size=sys.maxsize, sample_style="range", interp=Image.BILINEAR
- ):
- """
- Args:
- short_edge_length (list[int]): If ``sample_style=="range"``,
- a [min, max] interval from which to sample the shortest edge length.
- If ``sample_style=="choice"``, a list of shortest edge lengths to sample from.
- max_size (int): maximum allowed longest edge length.
- sample_style (str): either "range" or "choice".
- """
- super().__init__()
- assert sample_style in ["range", "choice"], sample_style
-
- self.is_range = sample_style == "range"
- if isinstance(short_edge_length, int):
- short_edge_length = (short_edge_length, short_edge_length)
- if self.is_range:
- assert len(short_edge_length) == 2, (
- "short_edge_length must be two values using 'range' sample style."
- f" Got {short_edge_length}!"
- )
- self._init(locals())
-
- @torch.jit.unused
- def get_transform(self, image):
- h, w = image.shape[:2]
- if self.is_range:
- size = np.random.randint(self.short_edge_length[0], self.short_edge_length[1] + 1)
- else:
- size = np.random.choice(self.short_edge_length)
- if size == 0:
- return NoOpTransform()
-
- newh, neww = ResizeShortestEdge.get_output_shape(h, w, size, self.max_size)
- return ResizeTransform(h, w, newh, neww, self.interp)
-
- @staticmethod
- def get_output_shape(
- oldh: int, oldw: int, short_edge_length: int, max_size: int
- ) -> Tuple[int, int]:
- """
- Compute the output size given input size and target short edge length.
- """
- h, w = oldh, oldw
- size = short_edge_length * 1.0
- scale = size / min(h, w)
- if h < w:
- newh, neww = size, scale * w
- else:
- newh, neww = scale * h, size
- if max(newh, neww) > max_size:
- scale = max_size * 1.0 / max(newh, neww)
- newh = newh * scale
- neww = neww * scale
- neww = int(neww + 0.5)
- newh = int(newh + 0.5)
- return (newh, neww)
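-
-    # Worked example of the shape math above, with illustrative values:
-    #   get_output_shape(480, 640, 800, 1333)  -> scale = 800 / 480, so (newh, neww) = (800, 1067)
-    #   get_output_shape(480, 1280, 800, 1333) -> the scaled long edge (2133) exceeds max_size,
-    #   so both sides are rescaled by 1333 / 2133, giving (500, 1333)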
-
-
-class ResizeScale(Augmentation):
- """
- Takes target size as input and randomly scales the given target size between `min_scale`
- and `max_scale`. It then scales the input image such that it fits inside the scaled target
- box, keeping the aspect ratio constant.
-    This implements the resize part of Google's 'resize_and_crop' data augmentation:
- https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/input_utils.py#L127
- """
-
- def __init__(
- self,
- min_scale: float,
- max_scale: float,
- target_height: int,
- target_width: int,
- interp: int = Image.BILINEAR,
- ):
- """
- Args:
-            min_scale: lower bound of the sampled image scale range.
-            max_scale: upper bound of the sampled image scale range.
- target_height: target image height.
- target_width: target image width.
- interp: image interpolation method.
- """
- super().__init__()
- self._init(locals())
-
- def _get_resize(self, image: np.ndarray, scale: float) -> Transform:
- input_size = image.shape[:2]
-
- # Compute new target size given a scale.
- target_size = (self.target_height, self.target_width)
- target_scale_size = np.multiply(target_size, scale)
-
- # Compute actual rescaling applied to input image and output size.
- output_scale = np.minimum(
- target_scale_size[0] / input_size[0], target_scale_size[1] / input_size[1]
- )
- output_size = np.round(np.multiply(input_size, output_scale)).astype(int)
-
- return ResizeTransform(
- input_size[0], input_size[1], output_size[0], output_size[1], self.interp
- )
-
- def get_transform(self, image: np.ndarray) -> Transform:
- random_scale = np.random.uniform(self.min_scale, self.max_scale)
- return self._get_resize(image, random_scale)
-
-
-class RandomRotation(Augmentation):
- """
-    Rotate the image by a randomly sampled angle (in degrees,
-    counter-clockwise) around the given center.
- """
-
- def __init__(self, angle, expand=True, center=None, sample_style="range", interp=None):
- """
- Args:
- angle (list[float]): If ``sample_style=="range"``,
- a [min, max] interval from which to sample the angle (in degrees).
- If ``sample_style=="choice"``, a list of angles to sample from
- expand (bool): choose if the image should be resized to fit the whole
- rotated image (default), or simply cropped
- center (list[[float, float]]): If ``sample_style=="range"``,
- a [[minx, miny], [maxx, maxy]] relative interval from which to sample the center,
- [0, 0] being the top left of the image and [1, 1] the bottom right.
- If ``sample_style=="choice"``, a list of centers to sample from
- Default: None, which means that the center of rotation is the center of the image
- center has no effect if expand=True because it only affects shifting
- """
- super().__init__()
- assert sample_style in ["range", "choice"], sample_style
- self.is_range = sample_style == "range"
- if isinstance(angle, (float, int)):
- angle = (angle, angle)
- if center is not None and isinstance(center[0], (float, int)):
- center = (center, center)
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- center = None
- if self.is_range:
- angle = np.random.uniform(self.angle[0], self.angle[1])
- if self.center is not None:
- center = (
- np.random.uniform(self.center[0][0], self.center[1][0]),
- np.random.uniform(self.center[0][1], self.center[1][1]),
- )
- else:
- angle = np.random.choice(self.angle)
- if self.center is not None:
- center = np.random.choice(self.center)
-
- if center is not None:
- center = (w * center[0], h * center[1]) # Convert to absolute coordinates
-
- if angle % 360 == 0:
- return NoOpTransform()
-
- return RotationTransform(h, w, angle, expand=self.expand, center=center, interp=self.interp)
-
-
-class FixedSizeCrop(Augmentation):
- """
- If `crop_size` is smaller than the input image size, then it uses a random crop of
- the crop size. If `crop_size` is larger than the input image size, then it pads
- the right and the bottom of the image to the crop size if `pad` is True, otherwise
- it returns the smaller image.
- """
-
- def __init__(
- self,
- crop_size: Tuple[int],
- pad: bool = True,
- pad_value: float = 128.0,
- seg_pad_value: int = 255,
- ):
- """
- Args:
- crop_size: target image (height, width).
- pad: if True, will pad images smaller than `crop_size` up to `crop_size`
-            pad_value: the padding value for the image.
-            seg_pad_value: the padding value for the segmentation mask.
- """
- super().__init__()
- self._init(locals())
-
- def _get_crop(self, image: np.ndarray) -> Transform:
- # Compute the image scale and scaled size.
- input_size = image.shape[:2]
- output_size = self.crop_size
-
-        # Randomly crop when the input is larger than the target crop size.
- max_offset = np.subtract(input_size, output_size)
- max_offset = np.maximum(max_offset, 0)
- offset = np.multiply(max_offset, np.random.uniform(0.0, 1.0))
- offset = np.round(offset).astype(int)
- return CropTransform(
- offset[1], offset[0], output_size[1], output_size[0], input_size[1], input_size[0]
- )
-
- def _get_pad(self, image: np.ndarray) -> Transform:
- # Compute the image scale and scaled size.
- input_size = image.shape[:2]
- output_size = self.crop_size
-
-        # Pad when the input is smaller than the target crop size.
- pad_size = np.subtract(output_size, input_size)
- pad_size = np.maximum(pad_size, 0)
- original_size = np.minimum(input_size, output_size)
- return PadTransform(
- 0,
- 0,
- pad_size[1],
- pad_size[0],
- original_size[1],
- original_size[0],
- self.pad_value,
- self.seg_pad_value,
- )
-
- def get_transform(self, image: np.ndarray) -> TransformList:
- transforms = [self._get_crop(image)]
- if self.pad:
- transforms.append(self._get_pad(image))
- return TransformList(transforms)
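-
-    # Worked example with illustrative values: for a 300x500 (h, w) input and crop_size = (400, 400),
-    # the crop step can shift by at most (0, 100) pixels and keeps a 300x400 region, and the pad
-    # step then adds 100 rows at the bottom (pad_size = (100, 0)), yielding a 400x400 output.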
-
-
-class RandomCrop(Augmentation):
- """
- Randomly crop a rectangle region out of an image.
- """
-
- def __init__(self, crop_type: str, crop_size):
- """
- Args:
- crop_type (str): one of "relative_range", "relative", "absolute", "absolute_range".
- crop_size (tuple[float, float]): two floats, explained below.
-
- - "relative": crop a (H * crop_size[0], W * crop_size[1]) region from an input image of
- size (H, W). crop size should be in (0, 1]
- - "relative_range": uniformly sample two values from [crop_size[0], 1]
- and [crop_size[1]], 1], and use them as in "relative" crop type.
- - "absolute" crop a (crop_size[0], crop_size[1]) region from input image.
- crop_size must be smaller than the input image size.
- - "absolute_range", for an input of size (H, W), uniformly sample H_crop in
- [crop_size[0], min(H, crop_size[1])] and W_crop in [crop_size[0], min(W, crop_size[1])].
- Then crop a region (H_crop, W_crop).
- """
- # TODO style of relative_range and absolute_range are not consistent:
- # one takes (h, w) but another takes (min, max)
- super().__init__()
- assert crop_type in ["relative_range", "relative", "absolute", "absolute_range"]
- self._init(locals())
-
- def get_transform(self, image):
- h, w = image.shape[:2]
- croph, cropw = self.get_crop_size((h, w))
- assert h >= croph and w >= cropw, "Shape computation in {} has bugs.".format(self)
- h0 = np.random.randint(h - croph + 1)
- w0 = np.random.randint(w - cropw + 1)
- return CropTransform(w0, h0, cropw, croph)
-
- def get_crop_size(self, image_size):
- """
- Args:
- image_size (tuple): height, width
-
- Returns:
- crop_size (tuple): height, width in absolute pixels
- """
- h, w = image_size
- if self.crop_type == "relative":
- ch, cw = self.crop_size
- return int(h * ch + 0.5), int(w * cw + 0.5)
- elif self.crop_type == "relative_range":
- crop_size = np.asarray(self.crop_size, dtype=np.float32)
- ch, cw = crop_size + np.random.rand(2) * (1 - crop_size)
- return int(h * ch + 0.5), int(w * cw + 0.5)
- elif self.crop_type == "absolute":
- return (min(self.crop_size[0], h), min(self.crop_size[1], w))
- elif self.crop_type == "absolute_range":
- assert self.crop_size[0] <= self.crop_size[1]
- ch = np.random.randint(min(h, self.crop_size[0]), min(h, self.crop_size[1]) + 1)
- cw = np.random.randint(min(w, self.crop_size[0]), min(w, self.crop_size[1]) + 1)
- return ch, cw
- else:
- raise NotImplementedError("Unknown crop type {}".format(self.crop_type))
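-
-    # Illustrative crop sizes for a 480x640 (h, w) input, to make the modes above concrete:
-    #   "relative",       crop_size=(0.5, 0.5) -> always (240, 320)
-    #   "relative_range", crop_size=(0.5, 0.5) -> ch ~ U[240, 480] and cw ~ U[320, 640]
-    #   "absolute",       crop_size=(300, 300) -> always (300, 300)
-    #   "absolute_range", crop_size=(300, 400) -> ch and cw each sampled from [300, 400]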
-
-
-class RandomCrop_CategoryAreaConstraint(Augmentation):
- """
-    Similar to :class:`RandomCrop`, but finds a cropping window such that no single category
-    occupies a ratio of more than `single_category_max_area` of the semantic segmentation ground
-    truth, which can cause instability in training. The function attempts to find such a valid
-    cropping window at most 10 times.
- """
-
- def __init__(
- self,
- crop_type: str,
- crop_size,
- single_category_max_area: float = 1.0,
- ignored_category: int = None,
- ):
- """
- Args:
- crop_type, crop_size: same as in :class:`RandomCrop`
- single_category_max_area: the maximum allowed area ratio of a
- category. Set to 1.0 to disable
- ignored_category: allow this category in the semantic segmentation
- ground truth to exceed the area ratio. Usually set to the category
- that's ignored in training.
- """
- self.crop_aug = RandomCrop(crop_type, crop_size)
- self._init(locals())
-
- def get_transform(self, image, sem_seg):
- if self.single_category_max_area >= 1.0:
- return self.crop_aug.get_transform(image)
- else:
- h, w = sem_seg.shape
- for _ in range(10):
- crop_size = self.crop_aug.get_crop_size((h, w))
- y0 = np.random.randint(h - crop_size[0] + 1)
- x0 = np.random.randint(w - crop_size[1] + 1)
- sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]]
- labels, cnt = np.unique(sem_seg_temp, return_counts=True)
- if self.ignored_category is not None:
- cnt = cnt[labels != self.ignored_category]
- if len(cnt) > 1 and np.max(cnt) < np.sum(cnt) * self.single_category_max_area:
- break
- crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0])
- return crop_tfm
-
-
-class RandomExtent(Augmentation):
- """
- Outputs an image by cropping a random "subrect" of the source image.
-
- The subrect can be parameterized to include pixels outside the source image,
- in which case they will be set to zeros (i.e. black). The size of the output
- image will vary with the size of the random subrect.
- """
-
- def __init__(self, scale_range, shift_range):
- """
- Args:
- scale_range (l, h): Range of input-to-output size scaling factor
- shift_range (x, y): Range of shifts of the cropped subrect. The rect
- is shifted by [w / 2 * Uniform(-x, x), h / 2 * Uniform(-y, y)],
- where (w, h) is the (width, height) of the input image. Set each
- component to zero to crop at the image's center.
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- img_h, img_w = image.shape[:2]
-
- # Initialize src_rect to fit the input image.
- src_rect = np.array([-0.5 * img_w, -0.5 * img_h, 0.5 * img_w, 0.5 * img_h])
-
- # Apply a random scaling to the src_rect.
- src_rect *= np.random.uniform(self.scale_range[0], self.scale_range[1])
-
- # Apply a random shift to the coordinates origin.
- src_rect[0::2] += self.shift_range[0] * img_w * (np.random.rand() - 0.5)
- src_rect[1::2] += self.shift_range[1] * img_h * (np.random.rand() - 0.5)
-
-        # Map src_rect coordinates into image coordinates (shift the origin from the image center to the top-left corner).
- src_rect[0::2] += 0.5 * img_w
- src_rect[1::2] += 0.5 * img_h
-
- return ExtentTransform(
- src_rect=(src_rect[0], src_rect[1], src_rect[2], src_rect[3]),
- output_size=(int(src_rect[3] - src_rect[1]), int(src_rect[2] - src_rect[0])),
- )
-
-
-class RandomContrast(Augmentation):
- """
- Randomly transforms image contrast.
-
- Contrast intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce contrast
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase contrast
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation
- intensity_max (float): Maximum augmentation
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- return BlendTransform(src_image=image.mean(), src_weight=1 - w, dst_weight=w)
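-
-    # Reading of the transform above (per the intensity semantics in the docstring): the output is
-    # the blend (1 - w) * image.mean() + w * image, so w = 1 preserves the input, w = 0 collapses
-    # the image to its mean, and w > 1 pushes pixels away from the mean, i.e. increases contrast.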
-
-
-class RandomBrightness(Augmentation):
- """
- Randomly transforms image brightness.
-
- Brightness intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce brightness
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase brightness
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation
- intensity_max (float): Maximum augmentation
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- return BlendTransform(src_image=0, src_weight=1 - w, dst_weight=w)
-
-
-class RandomSaturation(Augmentation):
- """
- Randomly transforms saturation of an RGB image.
- Input images are assumed to have 'RGB' channel order.
-
- Saturation intensity is uniformly sampled in (intensity_min, intensity_max).
- - intensity < 1 will reduce saturation (make the image more grayscale)
- - intensity = 1 will preserve the input image
- - intensity > 1 will increase saturation
-
- See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html
- """
-
- def __init__(self, intensity_min, intensity_max):
- """
- Args:
- intensity_min (float): Minimum augmentation (1 preserves input).
- intensity_max (float): Maximum augmentation (1 preserves input).
- """
- super().__init__()
- self._init(locals())
-
- def get_transform(self, image):
- assert image.shape[-1] == 3, "RandomSaturation only works on RGB images"
- w = np.random.uniform(self.intensity_min, self.intensity_max)
- grayscale = image.dot([0.299, 0.587, 0.114])[:, :, np.newaxis]
- return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w)
-
-
-class RandomLighting(Augmentation):
- """
- The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet.
- Input images are assumed to have 'RGB' channel order.
-
- The degree of color jittering is randomly sampled via a normal distribution,
- with standard deviation given by the scale parameter.
- """
-
- def __init__(self, scale):
- """
- Args:
- scale (float): Standard deviation of principal component weighting.
- """
- super().__init__()
- self._init(locals())
- self.eigen_vecs = np.array(
- [[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203]]
- )
- self.eigen_vals = np.array([0.2175, 0.0188, 0.0045])
-
- def get_transform(self, image):
- assert image.shape[-1] == 3, "RandomLighting only works on RGB images"
- weights = np.random.normal(scale=self.scale, size=3)
- return BlendTransform(
- src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0
- )
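-
-    # Reading of the transform above: the per-channel offset eigen_vecs @ (weights * eigen_vals),
-    # with weights ~ N(0, scale), is added to the input image (src_weight = dst_weight = 1.0),
-    # giving the AlexNet-style PCA color jitter described in the class docstring.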
-
-
-class RandomResize(Augmentation):
- """Randomly resize image to a target size in shape_list"""
-
- def __init__(self, shape_list, interp=Image.BILINEAR):
- """
- Args:
- shape_list: a list of shapes in (h, w)
- interp: PIL interpolation method
- """
- self.shape_list = shape_list
- self._init(locals())
-
- def get_transform(self, image):
- shape_idx = np.random.randint(low=0, high=len(self.shape_list))
- h, w = self.shape_list[shape_idx]
- return ResizeTransform(image.shape[0], image.shape[1], h, w, self.interp)
-
-
-class MinIoURandomCrop(Augmentation):
- """Random crop the image & bboxes, the cropped patches have minimum IoU
- requirement with original image & bboxes, the IoU threshold is randomly
- selected from min_ious.
-
- Args:
- min_ious (tuple): minimum IoU threshold for all intersections with
- bounding boxes
- min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w,
- where a >= min_crop_size)
- mode_trials: number of trials for sampling min_ious threshold
-        crop_trials: number of attempts to sample a valid crop for each sampled IoU mode
- """
-
- def __init__(
- self,
- min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
- min_crop_size=0.3,
- mode_trials=1000,
- crop_trials=50,
- ):
- self.min_ious = min_ious
- self.sample_mode = (1, *min_ious, 0)
- self.min_crop_size = min_crop_size
- self.mode_trials = mode_trials
- self.crop_trials = crop_trials
-
- def get_transform(self, image, boxes):
- """Call function to crop images and bounding boxes with minimum IoU
- constraint.
-
- Args:
- boxes: ground truth boxes in (x1, y1, x2, y2) format
- """
- if boxes is None:
- return NoOpTransform()
- h, w, c = image.shape
- for _ in range(self.mode_trials):
- mode = random.choice(self.sample_mode)
- self.mode = mode
- if mode == 1:
- return NoOpTransform()
-
- min_iou = mode
- for _ in range(self.crop_trials):
- new_w = random.uniform(self.min_crop_size * w, w)
- new_h = random.uniform(self.min_crop_size * h, h)
-
- # h / w in [0.5, 2]
- if new_h / new_w < 0.5 or new_h / new_w > 2:
- continue
-
- left = random.uniform(w - new_w)
- top = random.uniform(h - new_h)
-
- patch = np.array((int(left), int(top), int(left + new_w), int(top + new_h)))
- # Line or point crop is not allowed
- if patch[2] == patch[0] or patch[3] == patch[1]:
- continue
- overlaps = pairwise_iou(
- Boxes(patch.reshape(-1, 4)), Boxes(boxes.reshape(-1, 4))
- ).reshape(-1)
- if len(overlaps) > 0 and overlaps.min() < min_iou:
- continue
-
-                # centers of the boxes should be inside the cropped image
- # only adjust boxes and instance masks when the gt is not empty
- if len(overlaps) > 0:
- # adjust boxes
- def is_center_of_bboxes_in_patch(boxes, patch):
- center = (boxes[:, :2] + boxes[:, 2:]) / 2
- mask = (
- (center[:, 0] > patch[0])
- * (center[:, 1] > patch[1])
- * (center[:, 0] < patch[2])
- * (center[:, 1] < patch[3])
- )
- return mask
-
- mask = is_center_of_bboxes_in_patch(boxes, patch)
- if not mask.any():
- continue
- return CropTransform(int(left), int(top), int(new_w), int(new_h))
diff --git a/spaces/TH5314/newbing/src/components/ui/select.tsx b/spaces/TH5314/newbing/src/components/ui/select.tsx
deleted file mode 100644
index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000
--- a/spaces/TH5314/newbing/src/components/ui/select.tsx
+++ /dev/null
@@ -1,123 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as SelectPrimitive from '@radix-ui/react-select'
-
-import { cn } from '@/lib/utils'
-import {
- IconArrowDown,
- IconCheck,
- IconChevronUpDown
-} from '@/components/ui/icons'
-
-const Select = SelectPrimitive.Root
-
-const SelectGroup = SelectPrimitive.Group
-
-const SelectValue = SelectPrimitive.Value
-
-const SelectTrigger = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Trigger>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>
->(({ className, children, ...props }, ref) => (
-
- {children}
-
-
-
-
-))
-SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
-
-const SelectContent = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>
->(({ className, children, position = 'popper', ...props }, ref) => (
-
-
-
- {children}
-
-
-
-))
-SelectContent.displayName = SelectPrimitive.Content.displayName
-
-const SelectLabel = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Label>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>
->(({ className, ...props }, ref) => (
-
-))
-SelectLabel.displayName = SelectPrimitive.Label.displayName
-
-const SelectItem = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Item>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
->(({ className, children, ...props }, ref) => (
-
-
-
-
-
-
- {children}
-
-))
-SelectItem.displayName = SelectPrimitive.Item.displayName
-
-const SelectSeparator = React.forwardRef<
-  React.ElementRef<typeof SelectPrimitive.Separator>,
-  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>
->(({ className, ...props }, ref) => (
-
-))
-SelectSeparator.displayName = SelectPrimitive.Separator.displayName
-
-export {
- Select,
- SelectGroup,
- SelectValue,
- SelectTrigger,
- SelectContent,
- SelectLabel,
- SelectItem,
- SelectSeparator
-}
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/compat.py
deleted file mode 100644
index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/compat.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .core import *
-from .codec import *
-from typing import Any, Union
-
-def ToASCII(label: str) -> bytes:
- return encode(label)
-
-def ToUnicode(label: Union[bytes, bytearray]) -> str:
- return decode(label)
-
-def nameprep(s: Any) -> None:
- raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol')
-
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/bdist_dumb.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/bdist_dumb.py
deleted file mode 100644
index 01dd79079b04b6743295ef224592b49e6d9d2cb8..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/bdist_dumb.py
+++ /dev/null
@@ -1,143 +0,0 @@
-"""distutils.command.bdist_dumb
-
-Implements the Distutils 'bdist_dumb' command (create a "dumb" built
-distribution -- i.e., just an archive to be unpacked under $prefix or
-$exec_prefix)."""
-
-import os
-from ..core import Command
-from ..util import get_platform
-from ..dir_util import remove_tree, ensure_relative
-from ..errors import DistutilsPlatformError
-from ..sysconfig import get_python_version
-from distutils._log import log
-
-
-class bdist_dumb(Command):
- description = "create a \"dumb\" built distribution"
-
- user_options = [
- ('bdist-dir=', 'd', "temporary directory for creating the distribution"),
- (
- 'plat-name=',
- 'p',
- "platform name to embed in generated filenames "
- "(default: %s)" % get_platform(),
- ),
- (
- 'format=',
- 'f',
- "archive format to create (tar, gztar, bztar, xztar, " "ztar, zip)",
- ),
- (
- 'keep-temp',
- 'k',
- "keep the pseudo-installation tree around after "
- + "creating the distribution archive",
- ),
- ('dist-dir=', 'd', "directory to put final built distributions in"),
- ('skip-build', None, "skip rebuilding everything (for testing/debugging)"),
- (
- 'relative',
- None,
- "build the archive using relative paths " "(default: false)",
- ),
- (
- 'owner=',
- 'u',
- "Owner name used when creating a tar file" " [default: current user]",
- ),
- (
- 'group=',
- 'g',
- "Group name used when creating a tar file" " [default: current group]",
- ),
- ]
-
- boolean_options = ['keep-temp', 'skip-build', 'relative']
-
- default_format = {'posix': 'gztar', 'nt': 'zip'}
-
- def initialize_options(self):
- self.bdist_dir = None
- self.plat_name = None
- self.format = None
- self.keep_temp = 0
- self.dist_dir = None
- self.skip_build = None
- self.relative = 0
- self.owner = None
- self.group = None
-
- def finalize_options(self):
- if self.bdist_dir is None:
- bdist_base = self.get_finalized_command('bdist').bdist_base
- self.bdist_dir = os.path.join(bdist_base, 'dumb')
-
- if self.format is None:
- try:
- self.format = self.default_format[os.name]
- except KeyError:
- raise DistutilsPlatformError(
- "don't know how to create dumb built distributions "
- "on platform %s" % os.name
- )
-
- self.set_undefined_options(
- 'bdist',
- ('dist_dir', 'dist_dir'),
- ('plat_name', 'plat_name'),
- ('skip_build', 'skip_build'),
- )
-
- def run(self):
- if not self.skip_build:
- self.run_command('build')
-
- install = self.reinitialize_command('install', reinit_subcommands=1)
- install.root = self.bdist_dir
- install.skip_build = self.skip_build
- install.warn_dir = 0
-
- log.info("installing to %s", self.bdist_dir)
- self.run_command('install')
-
- # And make an archive relative to the root of the
- # pseudo-installation tree.
- archive_basename = "{}.{}".format(
- self.distribution.get_fullname(), self.plat_name
- )
-
- pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
- if not self.relative:
- archive_root = self.bdist_dir
- else:
- if self.distribution.has_ext_modules() and (
- install.install_base != install.install_platbase
- ):
- raise DistutilsPlatformError(
- "can't make a dumb built distribution where "
- "base and platbase are different (%s, %s)"
- % (repr(install.install_base), repr(install.install_platbase))
- )
- else:
- archive_root = os.path.join(
- self.bdist_dir, ensure_relative(install.install_base)
- )
-
- # Make the archive
- filename = self.make_archive(
- pseudoinstall_root,
- self.format,
- root_dir=archive_root,
- owner=self.owner,
- group=self.group,
- )
- if self.distribution.has_ext_modules():
- pyversion = get_python_version()
- else:
- pyversion = 'any'
- self.distribution.dist_files.append(('bdist_dumb', pyversion, filename))
-
- if not self.keep_temp:
- remove_tree(self.bdist_dir, dry_run=self.dry_run)
diff --git a/spaces/TangibleAI/mathtext/README.md b/spaces/TangibleAI/mathtext/README.md
deleted file mode 100644
index 2862432b31a7c56d6f059277e60481ccef71c141..0000000000000000000000000000000000000000
--- a/spaces/TangibleAI/mathtext/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MathText
-app_file: app.py
-sdk: gradio
-sdk_version: 3.15.0
-license: agpl-3.0
----
-
-## MathText NLU
-
-Natural Language Understanding for math symbols, digits, and words with a Gradio user interface and REST API.
-
diff --git a/spaces/Tej3/ECG_Classification/utils/helper_functions.py b/spaces/Tej3/ECG_Classification/utils/helper_functions.py
deleted file mode 100644
index 3f01bcd5919954cbd6fddec0c1b7655b88927db9..0000000000000000000000000000000000000000
--- a/spaces/Tej3/ECG_Classification/utils/helper_functions.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import torch
-
-def define_optimizer(model, lr, alpha):
- # Define optimizer
- optimizer = torch.optim.RMSprop(model.parameters(), lr=lr, alpha=alpha)
- optimizer.zero_grad()
- return optimizer
-
-def tuple_of_tensors_to_tensor(tuple_of_tensors):
- return torch.stack(list(tuple_of_tensors), dim=0)
-
-def predict(model, inputs, notes, device):
- outputs = model.forward(inputs, notes)
- predicted = torch.sigmoid(outputs)
- predicted = (predicted>0.5).float()
- return outputs, predicted
-
-def display_train(epoch, num_epochs, i, model, correct, total, loss, train_loader, valid_loader, device):
- print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], Train Loss: {loss.item():.4f}')
- train_accuracy = correct/total
- print(f'Epoch [{epoch+1}/{num_epochs}], Train Accuracy: {train_accuracy:.4f}')
- valid_loss, valid_accuracy = eval_valid(model, valid_loader, epoch, num_epochs, device)
- return train_accuracy, valid_accuracy, valid_loss
-
-def eval_valid(model, valid_loader, epoch, num_epochs, device):
-    # Compute model accuracy and loss on the validation set after all training samples in the epoch have been seen
- model.eval()
- with torch.no_grad():
- correct = 0
- total = 0
- running_loss = 0
- for inputs, labels, notes in valid_loader:
-            # Get inputs, labels, and notes from the validation loader
- inputs = inputs.transpose(1,2).float().to(device)
- labels = labels.float().to(device)
- notes = notes.to(device)
-
-            # Forward pass and threshold the sigmoid outputs to get multi-label predictions
- # outputs = model(inputs)
- outputs, predicted = predict(model, inputs, notes, device) #torch.max(outputs.data, 1)
- loss = torch.nn.functional.binary_cross_entropy_with_logits(outputs, labels)
- running_loss += loss.item()*len(labels)
-
-            # Check whether the predicted class matches a true label and count the number of correct predictions
- total += labels.size(0)
- #TODO: change acc criteria
- # correct += torch.nn.functional.cosine_similarity(labels,predicted).sum().item() # (predicted == labels).sum().item()
- values, indices = torch.max(outputs,dim=1)
- correct += sum(1 for s, i in enumerate(indices)
- if labels[s][i] == 1)
-
- # Compute final accuracy and display
- valid_accuracy = correct/total
- validation_loss = running_loss/total
- print(f'Epoch [{epoch+1}/{num_epochs}], Validation Accuracy: {valid_accuracy:.4f}, Validation Loss: {validation_loss:.4f}')
- return validation_loss, valid_accuracy
-
-
-def eval_test(model, test_loader, device):
-    # Compute model accuracy on the test set after training
- model.eval()
- with torch.no_grad():
- correct = 0
- total = 0
- for inputs, labels, notes in test_loader:
-            # Get inputs, labels, and notes from the test loader
- inputs = inputs.transpose(1,2).float().to(device)
- labels = labels.float().to(device)
- notes = notes.to(device)
-
-            # Forward pass and threshold the sigmoid outputs to get multi-label predictions
- # outputs = model(inputs)
- outputs, predicted = predict(model, inputs, notes, device)#torch.max(outputs.data, 1)
-
-            # Check whether the predicted class matches a true label and count the number of correct predictions
- total += labels.size(0)
- #TODO: change acc criteria
- # correct += torch.nn.functional.cosine_similarity(labels,predicted).sum().item() # (predicted == labels).sum().item()
- values, indices = torch.max(outputs,dim=1)
- correct += sum(1 for s, i in enumerate(indices)
- if labels[s][i] == 1)
-
- # Compute final accuracy and display
- test_accuracy = correct/total
- print(f'Ended Training, Test Accuracy: {test_accuracy:.4f}')
- return test_accuracy
\ No newline at end of file
diff --git a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_trainer.py b/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_trainer.py
deleted file mode 100644
index d0881f418a39665f0fc02ca821cb2a69bc575850..0000000000000000000000000000000000000000
--- a/spaces/TempoFunk/makeavid-sd-jax/makeavid_sd/flax_impl/flax_trainer.py
+++ /dev/null
@@ -1,608 +0,0 @@
-
-from typing import Any, Optional, Union, Tuple, Dict, List
-
-import os
-import random
-import math
-import time
-import numpy as np
-from tqdm.auto import tqdm, trange
-
-import torch
-from torch.utils.data import DataLoader
-
-import jax
-import jax.numpy as jnp
-import optax
-from flax import jax_utils, traverse_util
-from flax.core.frozen_dict import FrozenDict
-from flax.training.train_state import TrainState
-from flax.training.common_utils import shard
-
-# convert 2D -> 3D
-from diffusers import FlaxUNet2DConditionModel
-
-# inference test, run on these on cpu
-from diffusers import AutoencoderKL
-from diffusers.schedulers.scheduling_ddim_flax import FlaxDDIMScheduler, DDIMSchedulerState
-from transformers import CLIPTextModel, CLIPTokenizer
-from PIL import Image
-
-
-from .flax_unet_pseudo3d_condition import UNetPseudo3DConditionModel
-
-
-def seed_all(seed: int) -> jax.random.PRNGKeyArray:
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- rng = jax.random.PRNGKey(seed)
- return rng
-
-def count_params(
- params: Union[Dict[str, Any],
- FrozenDict[str, Any]],
- filter_name: Optional[str] = None
-) -> int:
- p: Dict[Tuple[str], jax.Array] = traverse_util.flatten_dict(params)
- cc = 0
- for k in p:
- if filter_name is not None:
- if filter_name in ' '.join(k):
- cc += len(p[k].flatten())
- else:
- cc += len(p[k].flatten())
- return cc
-
-def map_2d_to_pseudo3d(
- params2d: Dict[str, Any],
- params3d: Dict[str, Any],
- verbose: bool = True
-) -> Dict[str, Any]:
- params2d = traverse_util.flatten_dict(params2d)
- params3d = traverse_util.flatten_dict(params3d)
- new_params = dict()
- for k in params3d:
- if 'spatial_conv' in k:
- k2d = list(k)
- k2d.remove('spatial_conv')
- k2d = tuple(k2d)
- if verbose:
- tqdm.write(f'Spatial: {k} <- {k2d}')
- p = params2d[k2d]
- elif k not in params2d:
- if verbose:
- tqdm.write(f'Missing: {k}')
- p = params3d[k]
- else:
- p = params2d[k]
- assert p.shape == params3d[k].shape, f'shape mismatch: {k}: {p.shape} != {params3d[k].shape}'
- new_params[k] = p
- new_params = traverse_util.unflatten_dict(new_params)
- return new_params
-
-
-class FlaxTrainerUNetPseudo3D:
- def __init__(self,
- model_path: str,
- from_pt: bool = True,
- convert2d: bool = False,
- sample_size: Tuple[int, int] = (64, 64),
- seed: int = 0,
- dtype: str = 'float32',
- param_dtype: str = 'float32',
- only_temporal: bool = True,
- use_memory_efficient_attention = False,
- verbose: bool = True
- ) -> None:
- self.verbose = verbose
- self.tracker: Optional['wandb.sdk.wandb_run.Run'] = None
- self._use_wandb: bool = False
- self._tracker_meta: Dict[str, Union[float, int]] = {
- 't00': 0.0,
- 't0': 0.0,
- 'step0': 0
- }
-
- self.log('Init JAX')
- self.num_devices = jax.device_count()
- self.log(f'Device count: {self.num_devices}')
-
- self.seed = seed
- self.rng: jax.random.PRNGKeyArray = seed_all(self.seed)
-
- self.sample_size = sample_size
- if dtype == 'float32':
- self.dtype = jnp.float32
- elif dtype == 'bfloat16':
- self.dtype = jnp.bfloat16
- elif dtype == 'float16':
- self.dtype = jnp.float16
- else:
- raise ValueError(f'unknown type: {dtype}')
- self.dtype_str: str = dtype
- if param_dtype not in ['float32', 'bfloat16', 'float16']:
- raise ValueError(f'unknown parameter type: {param_dtype}')
- self.param_dtype = param_dtype
- self._load_models(
- model_path = model_path,
- convert2d = convert2d,
- from_pt = from_pt,
- use_memory_efficient_attention = use_memory_efficient_attention
- )
- self._mark_parameters(only_temporal = only_temporal)
- # optionally for validation + sampling
- self.tokenizer: Optional[CLIPTokenizer] = None
- self.text_encoder: Optional[CLIPTextModel] = None
- self.vae: Optional[AutoencoderKL] = None
- self.ddim: Optional[Tuple[FlaxDDIMScheduler, DDIMSchedulerState]] = None
-
- def log(self, message: Any) -> None:
- if self.verbose and jax.process_index() == 0:
- tqdm.write(str(message))
-
- def log_metrics(self, metrics: dict, step: int, epoch: int) -> None:
- if jax.process_index() > 0 or (not self.verbose and self.tracker is None):
- return
- now = time.monotonic()
- log_data = {
- 'train/step': step,
- 'train/epoch': epoch,
- 'train/steps_per_sec': (step - self._tracker_meta['step0']) / (now - self._tracker_meta['t0']),
- **{ f'train/{k}': v for k, v in metrics.items() }
- }
- self._tracker_meta['t0'] = now
- self._tracker_meta['step0'] = step
- self.log(log_data)
- if self.tracker is not None:
- self.tracker.log(log_data, step = step)
-
-
- def enable_wandb(self, enable: bool = True) -> None:
- self._use_wandb = enable
-
- def _setup_wandb(self, config: Dict[str, Any] = dict()) -> None:
- import wandb
- import wandb.sdk
- self.tracker: wandb.sdk.wandb_run.Run = wandb.init(
- config = config,
- settings = wandb.sdk.Settings(
- username = 'anon',
- host = 'anon',
- email = 'anon',
- root_dir = 'anon',
- _executable = 'anon',
- _disable_stats = True,
- _disable_meta = True,
- disable_code = True,
- disable_git = True
- ) # pls don't log sensitive data like system user names. also, fuck you for even trying.
- )
-
- def _init_tracker_meta(self) -> None:
- now = time.monotonic()
- self._tracker_meta = {
- 't00': now,
- 't0': now,
- 'step0': 0
- }
-
- def _load_models(self,
- model_path: str,
- convert2d: bool,
- from_pt: bool,
- use_memory_efficient_attention: bool
- ) -> None:
- self.log(f'Load pretrained from {model_path}')
- if convert2d:
- self.log(' Convert 2D model to Pseudo3D')
- self.log(' Initiate Pseudo3D model')
- config = UNetPseudo3DConditionModel.load_config(model_path, subfolder = 'unet')
- model = UNetPseudo3DConditionModel.from_config(
- config,
- sample_size = self.sample_size,
- dtype = self.dtype,
- param_dtype = self.param_dtype,
- use_memory_efficient_attention = use_memory_efficient_attention
- )
- params: Dict[str, Any] = model.init_weights(self.rng).unfreeze()
- self.log(' Load 2D model')
- model2d, params2d = FlaxUNet2DConditionModel.from_pretrained(
- model_path,
- subfolder = 'unet',
- dtype = self.dtype,
- from_pt = from_pt
- )
- self.log(' Map 2D -> 3D')
- params = map_2d_to_pseudo3d(params2d, params, verbose = self.verbose)
- del params2d
- del model2d
- del config
- else:
- model, params = UNetPseudo3DConditionModel.from_pretrained(
- model_path,
- subfolder = 'unet',
- from_pt = from_pt,
- sample_size = self.sample_size,
- dtype = self.dtype,
- param_dtype = self.param_dtype,
- use_memory_efficient_attention = use_memory_efficient_attention
- )
- self.log(f'Cast parameters to {model.param_dtype}')
- if model.param_dtype == 'float32':
- params = model.to_fp32(params)
- elif model.param_dtype == 'float16':
- params = model.to_fp16(params)
- elif model.param_dtype == 'bfloat16':
- params = model.to_bf16(params)
- self.pretrained_model = model_path
- self.model: UNetPseudo3DConditionModel = model
- self.params: FrozenDict[str, Any] = FrozenDict(params)
-
- def _mark_parameters(self, only_temporal: bool) -> None:
- self.log('Mark training parameters')
- if only_temporal:
- self.log('Only training temporal layers')
- if only_temporal:
- param_partitions = traverse_util.path_aware_map(
- lambda path, _: 'trainable' if 'temporal' in ' '.join(path) else 'frozen', self.params
- )
- else:
- param_partitions = traverse_util.path_aware_map(
- lambda *_: 'trainable', self.params
- )
- self.only_temporal = only_temporal
- self.param_partitions: FrozenDict[str, Any] = FrozenDict(param_partitions)
- self.log(f'Total parameters: {count_params(self.params)}')
- self.log(f'Temporal parameters: {count_params(self.params, "temporal")}')
-
- def _load_inference_models(self) -> None:
- assert jax.process_index() == 0, 'not main process'
- if self.text_encoder is None:
- self.log('Load text encoder')
- self.text_encoder = CLIPTextModel.from_pretrained(
- self.pretrained_model,
- subfolder = 'text_encoder'
- )
- if self.tokenizer is None:
- self.log('Load tokenizer')
- self.tokenizer = CLIPTokenizer.from_pretrained(
- self.pretrained_model,
- subfolder = 'tokenizer'
- )
- if self.vae is None:
- self.log('Load vae')
- self.vae = AutoencoderKL.from_pretrained(
- self.pretrained_model,
- subfolder = 'vae'
- )
- if self.ddim is None:
- self.log('Load ddim scheduler')
- # tuple(scheduler , scheduler state)
- self.ddim = FlaxDDIMScheduler.from_pretrained(
- self.pretrained_model,
- subfolder = 'scheduler',
- from_pt = True
- )
-
- def _unload_inference_models(self) -> None:
- self.text_encoder = None
- self.tokenizer = None
- self.vae = None
- self.ddim = None
-
- def sample(self,
- params: Union[Dict[str, Any], FrozenDict[str, Any]],
- prompt: str,
- image_path: str,
- num_frames: int,
- replicate_params: bool = True,
- neg_prompt: str = '',
- steps: int = 50,
- cfg: float = 9.0,
- unload_after_usage: bool = False
- ) -> List[Image.Image]:
- assert jax.process_index() == 0, 'not main process'
- self.log('Sample')
- self._load_inference_models()
- with torch.no_grad():
- tokens = self.tokenizer(
- [ prompt ],
- truncation = True,
- return_overflowing_tokens = False,
- padding = 'max_length',
- return_tensors = 'pt'
- ).input_ids
- neg_tokens = self.tokenizer(
- [ neg_prompt ],
- truncation = True,
- return_overflowing_tokens = False,
- padding = 'max_length',
- return_tensors = 'pt'
- ).input_ids
- encoded_prompt = self.text_encoder(input_ids = tokens).last_hidden_state
- encoded_neg_prompt = self.text_encoder(input_ids = neg_tokens).last_hidden_state
- hint_latent = torch.tensor(np.asarray(Image.open(image_path))).permute(2,0,1).to(torch.float32).div(255).mul(2).sub(1).unsqueeze(0)
- hint_latent = self.vae.encode(hint_latent).latent_dist.mean * self.vae.config.scaling_factor #0.18215 # deterministic
- hint_latent = hint_latent.unsqueeze(2).repeat_interleave(num_frames, 2)
- mask = torch.zeros_like(hint_latent[:,0:1,:,:,:]) # zero mask, e.g. skip masking for now
- init_latent = torch.randn_like(hint_latent)
- # move to devices
- encoded_prompt = jnp.array(encoded_prompt.numpy())
- encoded_neg_prompt = jnp.array(encoded_neg_prompt.numpy())
- hint_latent = jnp.array(hint_latent.numpy())
- mask = jnp.array(mask.numpy())
- init_latent = init_latent.repeat(jax.device_count(), 1, 1, 1, 1)
- init_latent = jnp.array(init_latent.numpy())
- self.ddim = (self.ddim[0], self.ddim[0].set_timesteps(self.ddim[1], steps))
- timesteps = self.ddim[1].timesteps
- if replicate_params:
- params = jax_utils.replicate(params)
- ddim_state = jax_utils.replicate(self.ddim[1])
- encoded_prompt = jax_utils.replicate(encoded_prompt)
- encoded_neg_prompt = jax_utils.replicate(encoded_neg_prompt)
- hint_latent = jax_utils.replicate(hint_latent)
- mask = jax_utils.replicate(mask)
- # sampling fun
- def sample_loop(init_latent, ddim_state, t, params, encoded_prompt, encoded_neg_prompt, hint_latent, mask):
- latent_model_input = jnp.concatenate([init_latent, mask, hint_latent], axis = 1)
- pred = self.model.apply(
- { 'params': params },
- latent_model_input,
- t,
- encoded_prompt
- ).sample
- if cfg != 1.0:
- neg_pred = self.model.apply(
- { 'params': params },
- latent_model_input,
- t,
- encoded_neg_prompt
- ).sample
- pred = neg_pred + cfg * (pred - neg_pred)
- # TODO check if noise is added at the right dimension
- init_latent, ddim_state = self.ddim[0].step(ddim_state, pred, t, init_latent).to_tuple()
- return init_latent, ddim_state
- p_sample_loop = jax.pmap(sample_loop, 'sample', donate_argnums = ())
- pbar_sample = trange(len(timesteps), desc = 'Sample', dynamic_ncols = True, smoothing = 0.1, disable = not self.verbose)
- init_latent = shard(init_latent)
- for i in pbar_sample:
- t = timesteps[i].repeat(self.num_devices)
- t = shard(t)
- init_latent, ddim_state = p_sample_loop(init_latent, ddim_state, t, params, encoded_prompt, encoded_neg_prompt, hint_latent, mask)
- # decode
- self.log('Decode')
- init_latent = torch.tensor(np.array(init_latent))
- init_latent = init_latent / self.vae.config.scaling_factor
- # d:0 b:1 c:2 f:3 h:4 w:5 -> d b f c h w
- init_latent = init_latent.permute(0, 1, 3, 2, 4, 5)
- images = []
- pbar_decode = trange(len(init_latent), desc = 'Decode', dynamic_ncols = True)
- for sample in init_latent:
- ims = self.vae.decode(sample.squeeze()).sample
- ims = ims.add(1).div(2).mul(255).round().clamp(0, 255).to(torch.uint8).permute(0,2,3,1).numpy()
- ims = [ Image.fromarray(x) for x in ims ]
- for im in ims:
- images.append(im)
- pbar_decode.update(1)
- if unload_after_usage:
- self._unload_inference_models()
- return images
-
- def get_params_from_state(self, state: TrainState) -> FrozenDict[Any, str]:
- return FrozenDict(jax.device_get(jax.tree_util.tree_map(lambda x: x[0], state.params)))
-
- def train(self,
- dataloader: DataLoader,
- lr: float,
- num_frames: int,
- log_every_step: int = 10,
- save_every_epoch: int = 1,
- sample_every_epoch: int = 1,
- output_dir: str = 'output',
- warmup: float = 0,
- decay: float = 0,
- epochs: int = 10,
- weight_decay: float = 1e-2
- ) -> None:
- eps = 1e-8
- total_steps = len(dataloader) * epochs
- warmup_steps = math.ceil(warmup * total_steps) if warmup > 0 else 0
- decay_steps = math.ceil(decay * total_steps) + warmup_steps if decay > 0 else warmup_steps + 1
- self.log(f'Total steps: {total_steps}')
- self.log(f'Warmup steps: {warmup_steps}')
- self.log(f'Decay steps: {decay_steps - warmup_steps}')
- if warmup > 0 or decay > 0:
- if not decay > 0:
- # only warmup, keep peak lr until end
- self.log('Warmup schedule')
- end_lr = lr
- else:
- # warmup + annealing to end lr
- self.log('Warmup + cosine annealing schedule')
- end_lr = eps
- lr_schedule = optax.warmup_cosine_decay_schedule(
- init_value = 0.0,
- peak_value = lr,
- warmup_steps = warmup_steps,
- decay_steps = decay_steps,
- end_value = end_lr
- )
- else:
- # no warmup or decay -> constant lr
- self.log('constant schedule')
- lr_schedule = optax.constant_schedule(value = lr)
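-        # Illustrative numbers (hypothetical dataloader of 1000 batches, epochs = 10, warmup = 0.1,
-        # decay = 0.5): total_steps = 10000, warmup_steps = 1000, decay_steps = 6000, so the lr ramps
-        # up for 1000 steps, cosine-anneals over the next 5000, then stays at end_lr.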
- adamw = optax.adamw(
- learning_rate = lr_schedule,
- b1 = 0.9,
- b2 = 0.999,
- eps = eps,
- weight_decay = weight_decay #0.01 # 0.0001
- )
- optim = optax.chain(
- optax.clip_by_global_norm(max_norm = 1.0),
- adamw
- )
- partition_optimizers = {
- 'trainable': optim,
- 'frozen': optax.set_to_zero()
- }
- tx = optax.multi_transform(partition_optimizers, self.param_partitions)
- state = TrainState.create(
- apply_fn = self.model.__call__,
- params = self.params,
- tx = tx
- )
- validation_rng, train_rngs = jax.random.split(self.rng)
- train_rngs = jax.random.split(train_rngs, jax.local_device_count())
-
- def train_step(state: TrainState, batch: Dict[str, jax.Array], train_rng: jax.random.PRNGKeyArray):
- def compute_loss(
- params: Dict[str, Any],
- batch: Dict[str, jax.Array],
- sample_rng: jax.random.PRNGKeyArray # unused, dataloader provides everything
- ) -> jax.Array:
- # 'latent_model_input': latent_model_input
- # 'encoder_hidden_states': encoder_hidden_states
- # 'timesteps': timesteps
- # 'noise': noise
- latent_model_input = batch['latent_model_input']
- encoder_hidden_states = batch['encoder_hidden_states']
- timesteps = batch['timesteps']
- noise = batch['noise']
- model_pred = self.model.apply(
- { 'params': params },
- latent_model_input,
- timesteps,
- encoder_hidden_states
- ).sample
- loss = (noise - model_pred) ** 2
- loss = loss.mean()
- return loss
- grad_fn = jax.value_and_grad(compute_loss)
-
- def loss_and_grad(
- train_rng: jax.random.PRNGKeyArray
- ) -> Tuple[jax.Array, Any, jax.random.PRNGKeyArray]:
- sample_rng, train_rng = jax.random.split(train_rng, 2)
- loss, grad = grad_fn(state.params, batch, sample_rng)
- return loss, grad, train_rng
-
- loss, grad, new_train_rng = loss_and_grad(train_rng)
- # self.log(grad) # NOTE uncomment to visualize gradient
- grad = jax.lax.pmean(grad, axis_name = 'batch')
- new_state = state.apply_gradients(grads = grad)
- metrics: Dict[str, Any] = { 'loss': loss }
- metrics = jax.lax.pmean(metrics, axis_name = 'batch')
- def l2(xs) -> jax.Array:
- return jnp.sqrt(sum([jnp.vdot(x, x) for x in jax.tree_util.tree_leaves(xs)]))
- metrics['l2_grads'] = l2(jax.tree_util.tree_leaves(grad))
-
- return new_state, metrics, new_train_rng
-
- p_train_step = jax.pmap(fun = train_step, axis_name = 'batch', donate_argnums = (0, ))
- state = jax_utils.replicate(state)
-
- train_metrics = []
- train_metric = None
-
- global_step: int = 0
-
- if jax.process_index() == 0:
- self._init_tracker_meta()
- hyper_params = {
- 'lr': lr,
- 'lr_warmup': warmup,
- 'lr_decay': decay,
- 'weight_decay': weight_decay,
- 'total_steps': total_steps,
- 'batch_size': dataloader.batch_size // self.num_devices,
- 'num_frames': num_frames,
- 'sample_size': self.sample_size,
- 'num_devices': self.num_devices,
- 'seed': self.seed,
- 'use_memory_efficient_attention': self.model.use_memory_efficient_attention,
- 'only_temporal': self.only_temporal,
- 'dtype': self.dtype_str,
- 'param_dtype': self.param_dtype,
- 'pretrained_model': self.pretrained_model,
- 'model_config': self.model.config
- }
- if self._use_wandb:
- self.log('Setting up wandb')
- self._setup_wandb(hyper_params)
- self.log(hyper_params)
- output_path = os.path.join(output_dir, str(global_step), 'unet')
- self.log(f'saving checkpoint to {output_path}')
- self.model.save_pretrained(
- save_directory = output_path,
- params = self.get_params_from_state(state),#jax.device_get(jax.tree_util.tree_map(lambda x: x[0], state.params)),
- is_main_process = True
- )
-
- pbar_epoch = tqdm(
- total = epochs,
- desc = 'Epochs',
- smoothing = 1,
- position = 0,
- dynamic_ncols = True,
- leave = True,
- disable = jax.process_index() > 0
- )
- steps_per_epoch = len(dataloader) # TODO dataloader
- for epoch in range(epochs):
- pbar_steps = tqdm(
- total = steps_per_epoch,
- desc = 'Steps',
- position = 1,
- smoothing = 0.1,
- dynamic_ncols = True,
- leave = True,
- disable = jax.process_index() > 0
- )
- for batch in dataloader:
- # keep input + gt as float32, results in fp32 loss and grad
- # otherwise uncomment the following to cast to the model dtype
- # batch = { k: (v.astype(self.dtype) if v.dtype == np.float32 else v) for k,v in batch.items() }
- batch = shard(batch)
- state, train_metric, train_rngs = p_train_step(
- state, batch, train_rngs
- )
- train_metrics.append(train_metric)
- if global_step % log_every_step == 0 and jax.process_index() == 0:
- train_metrics = jax_utils.unreplicate(train_metrics)
- train_metrics = jax.tree_util.tree_map(lambda *m: jnp.array(m).mean(), *train_metrics)
- if global_step == 0:
- self.log(f'grad dtype: {train_metrics["l2_grads"].dtype}')
- self.log(f'loss dtype: {train_metrics["loss"].dtype}')
- train_metrics_dict = { k: v.item() for k, v in train_metrics.items() }
- train_metrics_dict['lr'] = lr_schedule(global_step).item()
- self.log_metrics(train_metrics_dict, step = global_step, epoch = epoch)
- train_metrics = []
- pbar_steps.update(1)
- global_step += 1
- if epoch % save_every_epoch == 0 and jax.process_index() == 0:
- output_path = os.path.join(output_dir, str(global_step), 'unet')
- self.log(f'saving checkpoint to {output_path}')
- self.model.save_pretrained(
- save_directory = output_path,
- params = self.get_params_from_state(state),#jax.device_get(jax.tree_util.tree_map(lambda x: x[0], state.params)),
- is_main_process = True
- )
- self.log(f'checkpoint saved ')
- if epoch % sample_every_epoch == 0 and jax.process_index() == 0:
- images = self.sample(
- params = state.params,
- replicate_params = False,
- prompt = 'dancing person',
- image_path = 'testimage.png',
- num_frames = num_frames,
- steps = 50,
- cfg = 9.0,
- unload_after_usage = False
- )
- os.makedirs(os.path.join('image_output', str(epoch)), exist_ok = True)
- for i, im in enumerate(images):
- im.save(os.path.join('image_output', str(epoch), str(i).zfill(5) + '.png'), optimize = True)
- pbar_epoch.update(1)
-
diff --git a/spaces/Vipitis/shadermatch/app.py b/spaces/Vipitis/shadermatch/app.py
deleted file mode 100644
index 77cffee82e6eb030113fcb4c60617932748d2d12..0000000000000000000000000000000000000000
--- a/spaces/Vipitis/shadermatch/app.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-
-module = evaluate.load("Vipitis/shadermatch")
-launch_gradio_widget(module)
\ No newline at end of file
diff --git a/spaces/WZT/DigiProj/app.py b/spaces/WZT/DigiProj/app.py
deleted file mode 100644
index d5ab379d4f18a38fc6d334b33a3956fe3ff89eff..0000000000000000000000000000000000000000
--- a/spaces/WZT/DigiProj/app.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import os
-import numpy as np
-import cv2
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.utils import data
-from torchvision import transforms, utils
-from tqdm import tqdm
-torch.backends.cudnn.benchmark = True
-import copy
-from util import *
-from PIL import Image
-
-from model import *
-import moviepy.video.io.ImageSequenceClip
-import scipy
-import kornia.augmentation as K
-
-from base64 import b64encode
-import gradio as gr
-from torchvision import transforms
-
-# torch.hub.download_url_to_file('https://i.imgur.com/HiOTPNg.png', 'mona.png')
-# torch.hub.download_url_to_file('https://i.imgur.com/Cw8HcTN.png', 'painting.png')
-
-device = 'cpu'
-latent_dim = 8
-n_mlp = 5
-num_down = 3
-
-G_A2B = Generator(256, 4, latent_dim, n_mlp, channel_multiplier=1, lr_mlp=.01,n_res=1).to(device).eval()
-
-ensure_checkpoint_exists('GNR_checkpoint_full.pt')
-ckpt = torch.load('GNR_checkpoint_full.pt', map_location=device)
-
-G_A2B.load_state_dict(ckpt['G_A2B_ema'])
-
-# mean latent
-truncation = 1
-with torch.no_grad():
- mean_style = G_A2B.mapping(torch.randn([1000, latent_dim]).to(device)).mean(0, keepdim=True)
-
-
-test_transform = transforms.Compose([
- transforms.Resize((256, 256)),
- transforms.ToTensor(),
- transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), inplace=True)
-])
-plt.rcParams['figure.dpi'] = 200
-
-# torch.manual_seed(84986)
-
-num_styles = 1
-style = torch.randn([num_styles, latent_dim]).to(device)
-
-
-def inference(input_im):
- if input_im == None:
- return
- real_A = test_transform(input_im).unsqueeze(0).to(device)
-
- with torch.no_grad():
- A2B_content, _ = G_A2B.encode(real_A)
- #fake_A2B = G_A2B.decode(A2B_content.repeat(num_styles,1,1,1), style)
- fake_A2B = G_A2B.decode(A2B_content.repeat(num_styles,1,1,1), torch.randn([num_styles, latent_dim]).to(device))
- std=(0.5, 0.5, 0.5)
- mean=(0.5, 0.5, 0.5)
- z = fake_A2B * torch.tensor(std).view(3, 1, 1)
- z = z + torch.tensor(mean).view(3, 1, 1)
- tensor_to_pil = transforms.ToPILImage(mode='RGB')(z.squeeze())
- return tensor_to_pil
-
-def clear(image):
- return
-
-def setsample(image):
- return image
-
-
-# with gr.Blocks() as demo:
-# gr.Markdown("
GANs N' Roses
")
-# gr.Markdown("""Convert real-life face images into diverse anime versions of themselves. Use the default sample image or replace the input
-#     by first clicking X then dragging a new image into the Input box. Crop the image by clicking the pen tool. Click Run to transform the input
-#     into an anime version. Click Clear to clear the output box. Try running it multiple times for different anime styles!""")
-
-# with gr.Row():
-# with gr.Column():
-# inp = gr.Image(type="pil", value ="", label="Input")
-# with gr.Row():
-# clr = gr.Button("Clear") #needs implementation
-# run = gr.Button("Run")
-# with gr.Column():
-# out = gr.outputs.Image(type="pil")
-# clr.click(fn=clear, inputs=inp, outputs=inp) # clear input gr.Image
-# clr.click(fn=clear, inputs=out, outputs=out) # clear output gr.Image
-
-
-# gr.Markdown("
Sample Inputs
")
-
-# # with gr.Row():
-# # with gr.Column():
-# # sample1 = gr.Image(value="sample_images/1.JPG")
-# # with gr.Column():
-# # samplebtn1 = gr.Button(value="Try sample 1")
-# # samplebtn1.click(fn=setsample, inputs=sample1, outputs=inp)
-
-# # with gr.Column():
-# # sample2 = gr.Image(value="sample_images/2.JPG")
-# # with gr.Column():
-# # samplebtn2 = gr.Button(value="Try sample 2")
-# # samplebtn2.click(fn=setsample, inputs=sample2, outputs=inp)
-
-# # with gr.Column():
-# # sample3 = gr.Image(value="sample_images/3.JPG")
-# # with gr.Column():
-# # samplebtn3 = gr.Button(value="Try sample 3")
-# # samplebtn3.click(fn=setsample, inputs=sample3, outputs=inp)
-
-# #add info here
-# gr.Markdown("""
-# GANs N' Roses (GNR) is an image-to-image framework for face images that uses a multimodal approach with novel definitions for content and style.
-#     Content is defined as what changes when augmentations are applied to a face image. Style is defined as what does not change when augmentations
-# are applied to a face image.
-
-# GNR's implementation borrows heavily from StyleGAN2; however, adversarial loss is derived from the introduced content and style definitions, ensuring diversity of
-# outputs when repeatedly transforming the same input face image.
-
-# The current implementation was trained on the selfie2anime dataset and transforms real human faces into anime faces. Due to limitations of the dataset, GNR works best
-# when working with female face inputs that are cropped to include only the face (no neck and body).
-
-# GNR was implemented by Chong, M. & Forsyth, D. (2021) in the paper GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)
-# """)
-
-
-# run.click(fn=inference, inputs=inp, outputs=out)
-title = "GANs N' Roses"
-description = """Convert real-life face images into diverse anime versions of themselves. Use the default sample image or replace the input
- by first clicking X then dragging a new image into the Input box. Crop the image by clicking the pen tool. Click Submit to transform the input
- into an anime version. Click Clear to clear the output box. Try running it multiple times for different anime styles!"""
-article = """
GANs N' Roses (GNR) is an image-to-image framework for face images that uses a multi-modal approach with novel definitions for content and style.
- Content is defined as what changes when a augmentations are applied to a face image. Style is defined as what does not change when augmentations
- are applied to a face image.
-
GNR's implementation borrows heavily from StyleGAN2; however, adversarial loss is derived from the introduced content and style definitions, ensuring diversity of
- outputs when repeatedly transforming the same input face image.
-
The current implementation was trained on the selfie2anime dataset and transforms real human faces into anime faces. Due to limitations of the dataset, GNR works best
- when working with female face inputs that are cropped to include only the face (no neck and body).
-
GNR was implemented by Chong, M. & Forsyth, D. (2021) in the paper GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)
- """
-article = """
- What is GANs N's Roses
-
-
GANs N' Roses (GNR) is an image-to-image framework for face images that uses a multimodal approach with novel definitions for content and style.
- Content is defined as what changes when a augmentations are applied to a face image. Style is defined as what does not change when augmentations
- are applied to a face image. The backbone learns these two things separately and uses that information to generate images.
-
- How does it work?
-
-
- GNR creates images through the use of what's called a Generative Adversarial Network (GAN). To explain what a GAN is, imagine a situation where Tom is learning to draw an apple. Tom knows nothing about an apple so he scribbles a random shape and calls it an apple. He asks his friend Jerry if he got it correctly and naturally Jerry said no. Tom reflects on his drawing and scribbles a new "apple", showing it to Jerry each time. Eventually, Tom gets lucky and draws something close to an apple and fools Jerry. Tom picks up on what features that drawing has, creating more drawings similar to it. He eventually gets better and better but Jerry doesn't like getting fooled so he learns how to tell apart Tom's fake apples better. At this point, it becomes a cat-and-mouse game where both keep learning new things in order to outwit each other. This is the general idea behind GAN's. In more fomal terms, GAN's function using 2 neural networks: the generator and the discriminator. The former would be Tom and the latter would be Jerry.
-
-
- GNR's implementation borrows heavily from an existing system called StyleGAN2. The main difference is that adversarial loss is derived from the introduced content and style definitions, ensuring diversity of outputs when repeatedly transforming the same input face image.
-
-
The current implementation was trained on the selfie2anime dataset and transforms real human faces into anime faces. Due to limitations of the dataset, GNR works best when working with female face inputs that are cropped to include only the face (no neck and body).
-
GNR was implemented by Chong, M. & Forsyth, D. (2021) in the paper GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)
"""
-gr.Interface(
- inference,
- [gr.inputs.Image(type="pil", label="Input")],
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- allow_flagging='never',
- examples =
- [["sample_images/2.jpg"],["sample_images/1.JPG"],["sample_images/3.jpg"]]
- ).launch(share=True)
-# demo.launch(share = True)
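The article text above explains GANs with the Tom-and-Jerry analogy: a generator tries to fool a discriminator, and both improve in tandem. For readers who want to see that game in code, here is a minimal, generic adversarial training step in PyTorch. It only illustrates the two-network idea; the layer sizes, optimizers, and losses are arbitrary placeholders, not the GANs N' Roses / StyleGAN2 training code.

```python
import torch
import torch.nn as nn

# Toy generator ("Tom") and discriminator ("Jerry") for flattened 64x64 RGB images.
G = nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Tanh())   # noise -> fake image
D = nn.Sequential(nn.Linear(64 * 64 * 3, 1))                # image -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: [batch, 64*64*3] scaled to [-1, 1]
    b = real.size(0)
    fake = G(torch.randn(b, 128))

    # Discriminator step: push real images toward 1, generated images toward 0.
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

train_step(torch.rand(8, 64 * 64 * 3) * 2 - 1)   # one step on a random placeholder batch
```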
diff --git a/spaces/Wrightjay/togethercomputer-LLaMA-2-7B-32K/app.py b/spaces/Wrightjay/togethercomputer-LLaMA-2-7B-32K/app.py
deleted file mode 100644
index 0eea9d6f508c3048be87fc452d36415699a6999e..0000000000000000000000000000000000000000
--- a/spaces/Wrightjay/togethercomputer-LLaMA-2-7B-32K/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/togethercomputer/LLaMA-2-7B-32K").launch()
\ No newline at end of file
diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/resample.py b/spaces/XzJosh/ShanBao-Bert-VITS2/resample.py
deleted file mode 100644
index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/ShanBao-Bert-VITS2/resample.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import os
-import argparse
-import librosa
-import numpy as np
-from multiprocessing import Pool, cpu_count
-
-import soundfile
-from scipy.io import wavfile
-from tqdm import tqdm
-
-
-def process(item):
- spkdir, wav_name, args = item
- speaker = spkdir.replace("\\", "/").split("/")[-1]
- wav_path = os.path.join(args.in_dir, speaker, wav_name)
- if os.path.exists(wav_path) and '.wav' in wav_path:
- os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True)
- wav, sr = librosa.load(wav_path, sr=args.sr)
- soundfile.write(
- os.path.join(args.out_dir, speaker, wav_name),
- wav,
- sr
- )
-
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--sr", type=int, default=44100, help="sampling rate")
- parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir")
- parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir")
- args = parser.parse_args()
-    # processes = 8
-    processes = cpu_count() - 2 if cpu_count() > 4 else 1
-    pool = Pool(processes=processes)
-
- for speaker in os.listdir(args.in_dir):
- spk_dir = os.path.join(args.in_dir, speaker)
- if os.path.isdir(spk_dir):
- print(spk_dir)
- for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])):
- pass
diff --git a/spaces/Yesmyboi/Yes/Dockerfile b/spaces/Yesmyboi/Yes/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Yesmyboi/Yes/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-
-apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Yiqin/ChatVID/model/vision/ImageCaptioner.py b/spaces/Yiqin/ChatVID/model/vision/ImageCaptioner.py
deleted file mode 100644
index 4ea6344e3f5f4a1ed0e1bc119aba1adfe847e377..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/ImageCaptioner.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import torch
-from transformers import Blip2ForConditionalGeneration, Blip2Processor
-
-
-class ImageCaptioner:
-
- def __init__(self, device='cuda'):
- self.device = device
- if self.device == 'cpu':
- self.data_type = torch.float32
- else:
- self.data_type = torch.float16
- self.processor = Blip2Processor.from_pretrained(
- "/home/user/app/pretrained_models/blip2-opt-2.7b")
- self.model = Blip2ForConditionalGeneration.from_pretrained(
- "/home/user/app/pretrained_models/blip2-opt-2.7b",
- torch_dtype=self.data_type, device_map="auto")
- # self.processor = Blip2Processor.from_pretrained(
- # "/mnt/petrelfs/wangyiqin/vid_cap/ChatVID_huggingface/pretrained_models/blip2-opt-2.7b")
- # self.model = Blip2ForConditionalGeneration.from_pretrained(
- # "/mnt/petrelfs/wangyiqin/vid_cap/ChatVID_huggingface/pretrained_models/blip2-opt-2.7b",
- # torch_dtype=self.data_type, device_map="auto")
-
- def __call__(self, imgs):
- inputs = self.processor(
- images=imgs, return_tensors="pt").to(self.device, self.data_type)
- generated_ids = self.model.generate(**inputs)
- generated_text = self.processor.batch_decode(
- generated_ids, skip_special_tokens=True)
-
- return generated_text
diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/modules/seanet.py b/spaces/Yudha515/Rvc-Models/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/Yudha515/Rvc-Models/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): nb of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
- n_residual_layers (int): nb of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
-        true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Add optional final activation to decoder (eg. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
diff --git a/spaces/ZenXir/FreeVC/tts_voice.py b/spaces/ZenXir/FreeVC/tts_voice.py
deleted file mode 100644
index 8740ebab4a127a13ea9e7cf6a4fbacb6f442e742..0000000000000000000000000000000000000000
--- a/spaces/ZenXir/FreeVC/tts_voice.py
+++ /dev/null
@@ -1,290 +0,0 @@
-tts_order_voice = {'英语 (美国)-Jenny-女': 'en-US-JennyNeural',
- '英语 (美国)-Guy-男': 'en-US-GuyNeural',
- '英语 (美国)-Ana-女': 'en-US-AnaNeural',
- '英语 (美国)-Aria-女': 'en-US-AriaNeural',
- '英语 (美国)-Christopher-男': 'en-US-ChristopherNeural',
- '英语 (美国)-Eric-男': 'en-US-EricNeural',
- '英语 (美国)-Michelle-女': 'en-US-MichelleNeural',
- '英语 (美国)-Roger-男': 'en-US-RogerNeural',
- '西班牙语 (墨西哥)-Dalia-女': 'es-MX-DaliaNeural',
- '西班牙语 (墨西哥)-Jorge-男': 'es-MX-JorgeNeural',
- '韩语 (韩国)-Sun-Hi-女': 'ko-KR-SunHiNeural',
- '韩语 (韩国)-InJoon-男': 'ko-KR-InJoonNeural',
- '泰语 (泰国)-Premwadee-女': 'th-TH-PremwadeeNeural',
- '泰语 (泰国)-Niwat-男': 'th-TH-NiwatNeural',
- '越南语 (越南)-HoaiMy-女': 'vi-VN-HoaiMyNeural',
- '越南语 (越南)-NamMinh-男': 'vi-VN-NamMinhNeural',
- '日语 (日本)-Nanami-女': 'ja-JP-NanamiNeural',
- '日语 (日本)-Keita-男': 'ja-JP-KeitaNeural',
- '法语 (法国)-Denise-女': 'fr-FR-DeniseNeural',
- '法语 (法国)-Eloise-女': 'fr-FR-EloiseNeural',
- '法语 (法国)-Henri-男': 'fr-FR-HenriNeural',
- '葡萄牙语 (巴西)-Francisca-女': 'pt-BR-FranciscaNeural',
- '葡萄牙语 (巴西)-Antonio-男': 'pt-BR-AntonioNeural',
- '印度尼西亚语 (印度尼西亚)-Ardi-男': 'id-ID-ArdiNeural',
- '印度尼西亚语 (印度尼西亚)-Gadis-女': 'id-ID-GadisNeural',
- '希伯来语 (以色列)-Avri-男': 'he-IL-AvriNeural',
- '希伯来语 (以色列)-Hila-女': 'he-IL-HilaNeural',
- '意大利语 (意大利)-Isabella-女': 'it-IT-IsabellaNeural',
- '意大利语 (意大利)-Diego-男': 'it-IT-DiegoNeural',
- '意大利语 (意大利)-Elsa-女': 'it-IT-ElsaNeural',
- '荷兰语 (荷兰)-Colette-女': 'nl-NL-ColetteNeural',
- '荷兰语 (荷兰)-Fenna-女': 'nl-NL-FennaNeural',
- '荷兰语 (荷兰)-Maarten-男': 'nl-NL-MaartenNeural',
- '马来语 (马来西亚)-Osman-男': 'ms-MY-OsmanNeural',
- '马来语 (马来西亚)-Yasmin-女': 'ms-MY-YasminNeural',
- '挪威语 (挪威)-Pernille-女': 'nb-NO-PernilleNeural',
- '挪威语 (挪威)-Finn-男': 'nb-NO-FinnNeural',
- '瑞典语 (瑞典)-Sofie-女': 'sv-SE-SofieNeural',
- '瑞典语 (瑞典)-Mattias-男': 'sv-SE-MattiasNeural',
- '阿拉伯语 (沙特阿拉伯)-Hamed-男': 'ar-SA-HamedNeural',
- '阿拉伯语 (沙特阿拉伯)-Zariyah-女': 'ar-SA-ZariyahNeural',
- '希腊语 (希腊)-Athina-女': 'el-GR-AthinaNeural',
- '希腊语 (希腊)-Nestoras-男': 'el-GR-NestorasNeural',
- '德语 (德国)-Katja-女': 'de-DE-KatjaNeural',
- '德语 (德国)-Amala-女': 'de-DE-AmalaNeural',
- '德语 (德国)-Conrad-男': 'de-DE-ConradNeural',
- '德语 (德国)-Killian-男': 'de-DE-KillianNeural',
- '南非荷兰语 (南非)-Adri-女': 'af-ZA-AdriNeural',
- '南非荷兰语 (南非)-Willem-男': 'af-ZA-WillemNeural',
- '阿姆哈拉语 (埃塞俄比亚)-Ameha-男': 'am-ET-AmehaNeural',
- '阿姆哈拉语 (埃塞俄比亚)-Mekdes-女': 'am-ET-MekdesNeural',
- '阿拉伯语 (阿拉伯联合酋长国)-Fatima-女': 'ar-AE-FatimaNeural',
- '阿拉伯语 (阿拉伯联合酋长国)-Hamdan-男': 'ar-AE-HamdanNeural',
- '阿拉伯语 (巴林)-Ali-男': 'ar-BH-AliNeural',
- '阿拉伯语 (巴林)-Laila-女': 'ar-BH-LailaNeural',
- '阿拉伯语 (阿尔及利亚)-Ismael-男': 'ar-DZ-IsmaelNeural',
- '阿拉伯语 (埃及)-Salma-女': 'ar-EG-SalmaNeural',
- '阿拉伯语 (埃及)-Shakir-男': 'ar-EG-ShakirNeural',
- '阿拉伯语 (伊拉克)-Bassel-男': 'ar-IQ-BasselNeural',
- '阿拉伯语 (伊拉克)-Rana-女': 'ar-IQ-RanaNeural',
- '阿拉伯语 (约旦)-Sana-女': 'ar-JO-SanaNeural',
- '阿拉伯语 (约旦)-Taim-男': 'ar-JO-TaimNeural',
- '阿拉伯语 (科威特)-Fahed-男': 'ar-KW-FahedNeural',
- '阿拉伯语 (科威特)-Noura-女': 'ar-KW-NouraNeural',
- '阿拉伯语 (黎巴嫩)-Layla-女': 'ar-LB-LaylaNeural',
- '阿拉伯语 (黎巴嫩)-Rami-男': 'ar-LB-RamiNeural',
- '阿拉伯语 (利比亚)-Iman-女': 'ar-LY-ImanNeural',
- '阿拉伯语 (利比亚)-Omar-男': 'ar-LY-OmarNeural',
- '阿拉伯语 (摩洛哥)-Jamal-男': 'ar-MA-JamalNeural',
- '阿拉伯语 (摩洛哥)-Mouna-女': 'ar-MA-MounaNeural',
- '阿拉伯语 (阿曼)-Abdullah-男': 'ar-OM-AbdullahNeural',
- '阿拉伯语 (阿曼)-Aysha-女': 'ar-OM-AyshaNeural',
- '阿拉伯语 (卡塔尔)-Amal-女': 'ar-QA-AmalNeural',
- '阿拉伯语 (卡塔尔)-Moaz-男': 'ar-QA-MoazNeural',
- '阿拉伯语 (叙利亚)-Amany-女': 'ar-SY-AmanyNeural',
- '阿拉伯语 (叙利亚)-Laith-男': 'ar-SY-LaithNeural',
- '阿拉伯语 (突尼斯)-Hedi-男': 'ar-TN-HediNeural',
- '阿拉伯语 (突尼斯)-Reem-女': 'ar-TN-ReemNeural',
- '阿拉伯语 (也门)-Maryam-女': 'ar-YE-MaryamNeural',
- '阿拉伯语 (也门)-Saleh-男': 'ar-YE-SalehNeural',
- '阿塞拜疆语 (阿塞拜疆)-Babek-男': 'az-AZ-BabekNeural',
- '阿塞拜疆语 (阿塞拜疆)-Banu-女': 'az-AZ-BanuNeural',
- '保加利亚语 (保加利亚)-Borislav-男': 'bg-BG-BorislavNeural',
- '保加利亚语 (保加利亚)-Kalina-女': 'bg-BG-KalinaNeural',
- '孟加拉语 (孟加拉国)-Nabanita-女': 'bn-BD-NabanitaNeural',
- '孟加拉语 (孟加拉国)-Pradeep-男': 'bn-BD-PradeepNeural',
- '孟加拉语 (印度)-Bashkar-男': 'bn-IN-BashkarNeural',
- '孟加拉语 (印度)-Tanishaa-女': 'bn-IN-TanishaaNeural',
- '波斯尼亚语 (波斯尼亚和黑塞哥维那)-Goran-男': 'bs-BA-GoranNeural',
- '波斯尼亚语 (波斯尼亚和黑塞哥维那)-Vesna-女': 'bs-BA-VesnaNeural',
- '加泰罗尼亚语 (西班牙)-Joana-女': 'ca-ES-JoanaNeural',
- '加泰罗尼亚语 (西班牙)-Enric-男': 'ca-ES-EnricNeural',
- '捷克语 (捷克共和国)-Antonin-男': 'cs-CZ-AntoninNeural',
- '捷克语 (捷克共和国)-Vlasta-女': 'cs-CZ-VlastaNeural',
- '威尔士语 (英国)-Aled-男': 'cy-GB-AledNeural',
- '威尔士语 (英国)-Nia-女': 'cy-GB-NiaNeural',
- '丹麦语 (丹麦)-Christel-女': 'da-DK-ChristelNeural',
- '丹麦语 (丹麦)-Jeppe-男': 'da-DK-JeppeNeural',
- '德语 (奥地利)-Ingrid-女': 'de-AT-IngridNeural',
- '德语 (奥地利)-Jonas-男': 'de-AT-JonasNeural',
- '德语 (瑞士)-Jan-男': 'de-CH-JanNeural',
- '德语 (瑞士)-Leni-女': 'de-CH-LeniNeural',
- '英语 (澳大利亚)-Natasha-女': 'en-AU-NatashaNeural',
- '英语 (澳大利亚)-William-男': 'en-AU-WilliamNeural',
- '英语 (加拿大)-Clara-女': 'en-CA-ClaraNeural',
- '英语 (加拿大)-Liam-男': 'en-CA-LiamNeural',
- '英语 (英国)-Libby-女': 'en-GB-LibbyNeural',
- '英语 (英国)-Maisie-女': 'en-GB-MaisieNeural',
- '英语 (英国)-Ryan-男': 'en-GB-RyanNeural',
- '英语 (英国)-Sonia-女': 'en-GB-SoniaNeural',
- '英语 (英国)-Thomas-男': 'en-GB-ThomasNeural',
- '英语 (香港)-Sam-男': 'en-HK-SamNeural',
- '英语 (香港)-Yan-女': 'en-HK-YanNeural',
- '英语 (爱尔兰)-Connor-男': 'en-IE-ConnorNeural',
- '英语 (爱尔兰)-Emily-女': 'en-IE-EmilyNeural',
- '英语 (印度)-Neerja-女': 'en-IN-NeerjaNeural',
- '英语 (印度)-Prabhat-男': 'en-IN-PrabhatNeural',
- '英语 (肯尼亚)-Asilia-女': 'en-KE-AsiliaNeural',
- '英语 (肯尼亚)-Chilemba-男': 'en-KE-ChilembaNeural',
- '英语 (尼日利亚)-Abeo-男': 'en-NG-AbeoNeural',
- '英语 (尼日利亚)-Ezinne-女': 'en-NG-EzinneNeural',
- '英语 (新西兰)-Mitchell-男': 'en-NZ-MitchellNeural',
- '英语 (菲律宾)-James-男': 'en-PH-JamesNeural',
- '英语 (菲律宾)-Rosa-女': 'en-PH-RosaNeural',
- '英语 (新加坡)-Luna-女': 'en-SG-LunaNeural',
- '英语 (新加坡)-Wayne-男': 'en-SG-WayneNeural',
- '英语 (坦桑尼亚)-Elimu-男': 'en-TZ-ElimuNeural',
- '英语 (坦桑尼亚)-Imani-女': 'en-TZ-ImaniNeural',
- '英语 (南非)-Leah-女': 'en-ZA-LeahNeural',
- '英语 (南非)-Luke-男': 'en-ZA-LukeNeural',
- '西班牙语 (阿根廷)-Elena-女': 'es-AR-ElenaNeural',
- '西班牙语 (阿根廷)-Tomas-男': 'es-AR-TomasNeural',
- '西班牙语 (玻利维亚)-Marcelo-男': 'es-BO-MarceloNeural',
- '西班牙语 (玻利维亚)-Sofia-女': 'es-BO-SofiaNeural',
- '西班牙语 (哥伦比亚)-Gonzalo-男': 'es-CO-GonzaloNeural',
- '西班牙语 (哥伦比亚)-Salome-女': 'es-CO-SalomeNeural',
- '西班牙语 (哥斯达黎加)-Juan-男': 'es-CR-JuanNeural',
- '西班牙语 (哥斯达黎加)-Maria-女': 'es-CR-MariaNeural',
- '西班牙语 (古巴)-Belkys-女': 'es-CU-BelkysNeural',
- '西班牙语 (多米尼加共和国)-Emilio-男': 'es-DO-EmilioNeural',
- '西班牙语 (多米尼加共和国)-Ramona-女': 'es-DO-RamonaNeural',
- '西班牙语 (厄瓜多尔)-Andrea-女': 'es-EC-AndreaNeural',
- '西班牙语 (厄瓜多尔)-Luis-男': 'es-EC-LuisNeural',
- '西班牙语 (西班牙)-Alvaro-男': 'es-ES-AlvaroNeural',
- '西班牙语 (西班牙)-Elvira-女': 'es-ES-ElviraNeural',
- '西班牙语 (赤道几内亚)-Teresa-女': 'es-GQ-TeresaNeural',
- '西班牙语 (危地马拉)-Andres-男': 'es-GT-AndresNeural',
- '西班牙语 (危地马拉)-Marta-女': 'es-GT-MartaNeural',
- '西班牙语 (洪都拉斯)-Carlos-男': 'es-HN-CarlosNeural',
- '西班牙语 (洪都拉斯)-Karla-女': 'es-HN-KarlaNeural',
- '西班牙语 (尼加拉瓜)-Federico-男': 'es-NI-FedericoNeural',
- '西班牙语 (尼加拉瓜)-Yolanda-女': 'es-NI-YolandaNeural',
- '西班牙语 (巴拿马)-Margarita-女': 'es-PA-MargaritaNeural',
- '西班牙语 (巴拿马)-Roberto-男': 'es-PA-RobertoNeural',
- '西班牙语 (秘鲁)-Alex-男': 'es-PE-AlexNeural',
- '西班牙语 (秘鲁)-Camila-女': 'es-PE-CamilaNeural',
- '西班牙语 (波多黎各)-Karina-女': 'es-PR-KarinaNeural',
- '西班牙语 (波多黎各)-Victor-男': 'es-PR-VictorNeural',
- '西班牙语 (巴拉圭)-Mario-男': 'es-PY-MarioNeural',
- '西班牙语 (巴拉圭)-Tania-女': 'es-PY-TaniaNeural',
- '西班牙语 (萨尔瓦多)-Lorena-女': 'es-SV-LorenaNeural',
- '西班牙语 (萨尔瓦多)-Rodrigo-男': 'es-SV-RodrigoNeural',
- '西班牙语 (美国)-Alonso-男': 'es-US-AlonsoNeural',
- '西班牙语 (美国)-Paloma-女': 'es-US-PalomaNeural',
- '西班牙语 (乌拉圭)-Mateo-男': 'es-UY-MateoNeural',
- '西班牙语 (乌拉圭)-Valentina-女': 'es-UY-ValentinaNeural',
- '西班牙语 (委内瑞拉)-Paola-女': 'es-VE-PaolaNeural',
- '西班牙语 (委内瑞拉)-Sebastian-男': 'es-VE-SebastianNeural',
- '爱沙尼亚语 (爱沙尼亚)-Anu-女': 'et-EE-AnuNeural',
- '爱沙尼亚语 (爱沙尼亚)-Kert-男': 'et-EE-KertNeural',
- '波斯语 (伊朗)-Dilara-女': 'fa-IR-DilaraNeural',
- '波斯语 (伊朗)-Farid-男': 'fa-IR-FaridNeural',
- '芬兰语 (芬兰)-Harri-男': 'fi-FI-HarriNeural',
- '芬兰语 (芬兰)-Noora-女': 'fi-FI-NooraNeural',
- '法语 (比利时)-Charline-女': 'fr-BE-CharlineNeural',
- '法语 (比利时)-Gerard-男': 'fr-BE-GerardNeural',
- '法语 (加拿大)-Sylvie-女': 'fr-CA-SylvieNeural',
- '法语 (加拿大)-Antoine-男': 'fr-CA-AntoineNeural',
- '法语 (加拿大)-Jean-男': 'fr-CA-JeanNeural',
- '法语 (瑞士)-Ariane-女': 'fr-CH-ArianeNeural',
- '法语 (瑞士)-Fabrice-男': 'fr-CH-FabriceNeural',
- '爱尔兰语 (爱尔兰)-Colm-男': 'ga-IE-ColmNeural',
- '爱尔兰语 (爱尔兰)-Orla-女': 'ga-IE-OrlaNeural',
- '加利西亚语 (西班牙)-Roi-男': 'gl-ES-RoiNeural',
- '加利西亚语 (西班牙)-Sabela-女': 'gl-ES-SabelaNeural',
- '古吉拉特语 (印度)-Dhwani-女': 'gu-IN-DhwaniNeural',
- '古吉拉特语 (印度)-Niranjan-男': 'gu-IN-NiranjanNeural',
- '印地语 (印度)-Madhur-男': 'hi-IN-MadhurNeural',
- '印地语 (印度)-Swara-女': 'hi-IN-SwaraNeural',
- '克罗地亚语 (克罗地亚)-Gabrijela-女': 'hr-HR-GabrijelaNeural',
- '克罗地亚语 (克罗地亚)-Srecko-男': 'hr-HR-SreckoNeural',
- '匈牙利语 (匈牙利)-Noemi-女': 'hu-HU-NoemiNeural',
- '匈牙利语 (匈牙利)-Tamas-男': 'hu-HU-TamasNeural',
- '冰岛语 (冰岛)-Gudrun-女': 'is-IS-GudrunNeural',
- '冰岛语 (冰岛)-Gunnar-男': 'is-IS-GunnarNeural',
- '爪哇语 (印度尼西亚)-Dimas-男': 'jv-ID-DimasNeural',
- '爪哇语 (印度尼西亚)-Siti-女': 'jv-ID-SitiNeural',
- '格鲁吉亚语 (格鲁吉亚)-Eka-女': 'ka-GE-EkaNeural',
- '格鲁吉亚语 (格鲁吉亚)-Giorgi-男': 'ka-GE-GiorgiNeural',
- '哈萨克语 (哈萨克斯坦)-Aigul-女': 'kk-KZ-AigulNeural',
- '哈萨克语 (哈萨克斯坦)-Daulet-男': 'kk-KZ-DauletNeural',
- '高棉语 (柬埔寨)-Piseth-男': 'km-KH-PisethNeural',
- '高棉语 (柬埔寨)-Sreymom-女': 'km-KH-SreymomNeural',
- '卡纳达语 (印度)-Gagan-男': 'kn-IN-GaganNeural',
- '卡纳达语 (印度)-Sapna-女': 'kn-IN-SapnaNeural',
- '老挝语 (老挝)-Chanthavong-男': 'lo-LA-ChanthavongNeural',
- '老挝语 (老挝)-Keomany-女': 'lo-LA-KeomanyNeural',
- '立陶宛语 (立陶宛)-Leonas-男': 'lt-LT-LeonasNeural',
- '立陶宛语 (立陶宛)-Ona-女': 'lt-LT-OnaNeural',
- '拉脱维亚语 (拉脱维亚)-Everita-女': 'lv-LV-EveritaNeural',
- '拉脱维亚语 (拉脱维亚)-Nils-男': 'lv-LV-NilsNeural',
- '马其顿语 (北马其顿共和国)-Aleksandar-男': 'mk-MK-AleksandarNeural',
- '马其顿语 (北马其顿共和国)-Marija-女': 'mk-MK-MarijaNeural',
- '马拉雅拉姆语 (印度)-Midhun-男': 'ml-IN-MidhunNeural',
- '马拉雅拉姆语 (印度)-Sobhana-女': 'ml-IN-SobhanaNeural',
- '蒙古语 (蒙古)-Bataa-男': 'mn-MN-BataaNeural',
- '蒙古语 (蒙古)-Yesui-女': 'mn-MN-YesuiNeural',
- '马拉地语 (印度)-Aarohi-女': 'mr-IN-AarohiNeural',
- '马拉地语 (印度)-Manohar-男': 'mr-IN-ManoharNeural',
- '马耳他语 (马耳他)-Grace-女': 'mt-MT-GraceNeural',
- '马耳他语 (马耳他)-Joseph-男': 'mt-MT-JosephNeural',
- '缅甸语 (缅甸)-Nilar-女': 'my-MM-NilarNeural',
- '缅甸语 (缅甸)-Thiha-男': 'my-MM-ThihaNeural',
- '尼泊尔语 (尼泊尔)-Hemkala-女': 'ne-NP-HemkalaNeural',
- '尼泊尔语 (尼泊尔)-Sagar-男': 'ne-NP-SagarNeural',
- '荷兰语 (比利时)-Arnaud-男': 'nl-BE-ArnaudNeural',
- '荷兰语 (比利时)-Dena-女': 'nl-BE-DenaNeural',
- '波兰语 (波兰)-Marek-男': 'pl-PL-MarekNeural',
- '波兰语 (波兰)-Zofia-女': 'pl-PL-ZofiaNeural',
- '普什图语 (阿富汗)-Gul Nawaz-男': 'ps-AF-GulNawazNeural',
- '普什图语 (阿富汗)-Latifa-女': 'ps-AF-LatifaNeural',
- '葡萄牙语 (葡萄牙)-Duarte-男': 'pt-PT-DuarteNeural',
- '葡萄牙语 (葡萄牙)-Raquel-女': 'pt-PT-RaquelNeural',
- '罗马尼亚语 (罗马尼亚)-Alina-女': 'ro-RO-AlinaNeural',
- '罗马尼亚语 (罗马尼亚)-Emil-男': 'ro-RO-EmilNeural',
- '俄语 (俄罗斯)-Svetlana-女': 'ru-RU-SvetlanaNeural',
- '俄语 (俄罗斯)-Dmitry-男': 'ru-RU-DmitryNeural',
- '僧伽罗语 (斯里兰卡)-Sameera-男': 'si-LK-SameeraNeural',
- '僧伽罗语 (斯里兰卡)-Thilini-女': 'si-LK-ThiliniNeural',
- '斯洛伐克语 (斯洛伐克)-Lukas-男': 'sk-SK-LukasNeural',
- '斯洛伐克语 (斯洛伐克)-Viktoria-女': 'sk-SK-ViktoriaNeural',
- '斯洛文尼亚语 (斯洛文尼亚)-Petra-女': 'sl-SI-PetraNeural',
- '斯洛文尼亚语 (斯洛文尼亚)-Rok-男': 'sl-SI-RokNeural',
- '索马里语 (索马里)-Muuse-男': 'so-SO-MuuseNeural',
- '索马里语 (索马里)-Ubax-女': 'so-SO-UbaxNeural',
- '阿尔巴尼亚语 (阿尔巴尼亚)-Anila-女': 'sq-AL-AnilaNeural',
- '阿尔巴尼亚语 (阿尔巴尼亚)-Ilir-男': 'sq-AL-IlirNeural',
- '塞尔维亚语 (塞尔维亚)-Nicholas-男': 'sr-RS-NicholasNeural',
- '塞尔维亚语 (塞尔维亚)-Sophie-女': 'sr-RS-SophieNeural',
- '巽他语 (印度尼西亚)-Jajang-男': 'su-ID-JajangNeural',
- '巽他语 (印度尼西亚)-Tuti-女': 'su-ID-TutiNeural',
- '斯瓦希里语 (肯尼亚)-Rafiki-男': 'sw-KE-RafikiNeural',
- '斯瓦希里语 (肯尼亚)-Zuri-女': 'sw-KE-ZuriNeural',
- '斯瓦希里语 (坦桑尼亚)-Daudi-男': 'sw-TZ-DaudiNeural',
- '斯瓦希里语 (坦桑尼亚)-Rehema-女': 'sw-TZ-RehemaNeural',
- '泰米尔语 (印度)-Pallavi-女': 'ta-IN-PallaviNeural',
- '泰米尔语 (印度)-Valluvar-男': 'ta-IN-ValluvarNeural',
- '泰米尔语 (斯里兰卡)-Kumar-男': 'ta-LK-KumarNeural',
- '泰米尔语 (斯里兰卡)-Saranya-女': 'ta-LK-SaranyaNeural',
- '泰米尔语 (马来西亚)-Kani-女': 'ta-MY-KaniNeural',
- '泰米尔语 (马来西亚)-Surya-男': 'ta-MY-SuryaNeural',
- '泰米尔语 (新加坡)-Anbu-男': 'ta-SG-AnbuNeural',
- '泰卢固语 (印度)-Mohan-男': 'te-IN-MohanNeural',
- '泰卢固语 (印度)-Shruti-女': 'te-IN-ShrutiNeural',
- '土耳其语 (土耳其)-Ahmet-男': 'tr-TR-AhmetNeural',
- '土耳其语 (土耳其)-Emel-女': 'tr-TR-EmelNeural',
- '乌克兰语 (乌克兰)-Ostap-男': 'uk-UA-OstapNeural',
- '乌克兰语 (乌克兰)-Polina-女': 'uk-UA-PolinaNeural',
- '乌尔都语 (印度)-Gul-女': 'ur-IN-GulNeural',
- '乌尔都语 (印度)-Salman-男': 'ur-IN-SalmanNeural',
- '乌尔都语 (巴基斯坦)-Asad-男': 'ur-PK-AsadNeural',
- '乌尔都语 (巴基斯坦)-Uzma-女': 'ur-PK-UzmaNeural',
- '乌兹别克语 (乌兹别克斯坦)-Madina-女': 'uz-UZ-MadinaNeural',
- '乌兹别克语 (乌兹别克斯坦)-Sardor-男': 'uz-UZ-SardorNeural',
- '普通话 (中国大陆)-Xiaoxiao-女': 'zh-CN-XiaoxiaoNeural',
- '普通话 (中国大陆)-Yunyang-男': 'zh-CN-YunyangNeural',
- '普通话 (中国大陆)-Yunxi-男': 'zh-CN-YunxiNeural',
- '普通话 (中国大陆)-Xiaoyi-女': 'zh-CN-XiaoyiNeural',
- '普通话 (中国大陆)-Yunjian-男': 'zh-CN-YunjianNeural',
- '普通话 (中国大陆)-Yunxia-男': 'zh-CN-YunxiaNeural',
- '东北话 (中国大陆)-Xiaobei-女': 'zh-CN-liaoning-XiaobeiNeural',
- '中原官话 (中国陕西)-Xiaoni-女': 'zh-CN-shaanxi-XiaoniNeural',
- '粤语 (中国香港)-HiuMaan-女': 'zh-HK-HiuMaanNeural',
- '粤语 (中国香港)-HiuGaai-女': 'zh-HK-HiuGaaiNeural',
- '粤语 (中国香港)-WanLung-男': 'zh-HK-WanLungNeural',
- '台湾普通话-HsiaoChen-女': 'zh-TW-HsiaoChenNeural',
- '台湾普通话-HsiaoYu-女': 'zh-TW-HsiaoYuNeural',
- '台湾普通话-YunJhe-男': 'zh-TW-YunJheNeural',
- '祖鲁语 (南非)-Thando-女': 'zu-ZA-ThandoNeural',
- '祖鲁语 (南非)-Themba-男': 'zu-ZA-ThembaNeural'}
\ No newline at end of file
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/losses/dice_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/losses/dice_loss.py
deleted file mode 100644
index 8f52969c03b02116b618ecd889adaa5ed98e8ec3..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/losses/dice_loss.py
+++ /dev/null
@@ -1,131 +0,0 @@
-'''
- * Copyright (c) 2023 Salesforce, Inc.
- * All rights reserved.
- * SPDX-License-Identifier: Apache License 2.0
- * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/
- * By Can Qin
- * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet
- * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala
- * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv
- * Copyright (c) OpenMMLab. All rights reserved.
-'''
-
-"""Modified from https://github.com/LikeLy-Journey/SegmenTron/blob/master/
-segmentron/solver/loss.py (Apache-2.0 License)"""
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import get_class_weight, weighted_loss
-
-
-@weighted_loss
-def dice_loss(pred,
- target,
- valid_mask,
- smooth=1,
- exponent=2,
- class_weight=None,
- ignore_index=255):
- assert pred.shape[0] == target.shape[0]
- total_loss = 0
- num_classes = pred.shape[1]
- for i in range(num_classes):
- if i != ignore_index:
- dice_loss = binary_dice_loss(
- pred[:, i],
- target[..., i],
- valid_mask=valid_mask,
- smooth=smooth,
- exponent=exponent)
- if class_weight is not None:
- dice_loss *= class_weight[i]
- total_loss += dice_loss
- return total_loss / num_classes
-
-
-@weighted_loss
-def binary_dice_loss(pred, target, valid_mask, smooth=1, exponent=2, **kwards):
- assert pred.shape[0] == target.shape[0]
- pred = pred.reshape(pred.shape[0], -1)
- target = target.reshape(target.shape[0], -1)
- valid_mask = valid_mask.reshape(valid_mask.shape[0], -1)
-
- num = torch.sum(torch.mul(pred, target) * valid_mask, dim=1) * 2 + smooth
- den = torch.sum(pred.pow(exponent) + target.pow(exponent), dim=1) + smooth
-
- return 1 - num / den
-
-
-@LOSSES.register_module()
-class DiceLoss(nn.Module):
- """DiceLoss.
-
- This loss is proposed in `V-Net: Fully Convolutional Neural Networks for
-    Volumetric Medical Image Segmentation <https://arxiv.org/abs/1606.04797>`_.
-
- Args:
- loss_type (str, optional): Binary or multi-class loss.
- Default: 'multi_class'. Options are "binary" and "multi_class".
- smooth (float): A float number to smooth loss, and avoid NaN error.
- Default: 1
-        exponent (float): A float number to calculate denominator
- value: \\sum{x^exponent} + \\sum{y^exponent}. Default: 2.
- reduction (str, optional): The method used to reduce the loss. Options
- are "none", "mean" and "sum". This parameter only works when
- per_image is True. Default: 'mean'.
- class_weight (list[float] | str, optional): Weight of each class. If in
- str format, read them from a file. Defaults to None.
- loss_weight (float, optional): Weight of the loss. Default to 1.0.
- ignore_index (int | None): The label index to be ignored. Default: 255.
- """
-
- def __init__(self,
- smooth=1,
- exponent=2,
- reduction='mean',
- class_weight=None,
- loss_weight=1.0,
- ignore_index=255,
- **kwards):
- super(DiceLoss, self).__init__()
- self.smooth = smooth
- self.exponent = exponent
- self.reduction = reduction
- self.class_weight = get_class_weight(class_weight)
- self.loss_weight = loss_weight
- self.ignore_index = ignore_index
-
- def forward(self,
- pred,
- target,
- avg_factor=None,
- reduction_override=None,
- **kwards):
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.class_weight is not None:
- class_weight = pred.new_tensor(self.class_weight)
- else:
- class_weight = None
-
- pred = F.softmax(pred, dim=1)
- num_classes = pred.shape[1]
- one_hot_target = F.one_hot(
- torch.clamp(target.long(), 0, num_classes - 1),
- num_classes=num_classes)
- valid_mask = (target != self.ignore_index).long()
-
- loss = self.loss_weight * dice_loss(
- pred,
- one_hot_target,
- valid_mask=valid_mask,
- reduction=reduction,
- avg_factor=avg_factor,
- smooth=self.smooth,
- exponent=self.exponent,
- class_weight=class_weight,
- ignore_index=self.ignore_index)
- return loss
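To make the expected tensor shapes concrete, here is a small sketch of calling this loss: predictions are raw logits of shape [N, C, H, W] and targets are integer label maps of shape [N, H, W]; softmax and one-hot encoding happen inside forward(). The import path assumes an mmseg-style package layout like the one this file sits in; adjust it to wherever this copy actually lives.

```python
import torch
from mmseg.models.losses import DiceLoss  # assumed import path for this module

criterion = DiceLoss(smooth=1, exponent=2, loss_weight=1.0, ignore_index=255)

logits = torch.randn(2, 4, 32, 32)          # [N, C, H, W] raw scores for 4 classes
labels = torch.randint(0, 4, (2, 32, 32))   # [N, H, W] integer class indices
loss = criterion(logits, labels)            # scalar tensor (mean reduction by default)
print(loss)
```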
diff --git a/spaces/abtech/README/README.md b/spaces/abtech/README/README.md
deleted file mode 100644
index 6107ab2288e5ca964185c1631e5a2385d9d6a46c..0000000000000000000000000000000000000000
--- a/spaces/abtech/README/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 📉
-colorFrom: green
-colorTo: pink
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
diff --git a/spaces/adhisetiawan/anime-voice-generator/attentions.py b/spaces/adhisetiawan/anime-voice-generator/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/adhisetiawan/anime-voice-generator/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/adorp/ControlNet-v1-1-duplicate/model.py b/spaces/adorp/ControlNet-v1-1-duplicate/model.py
deleted file mode 100644
index a9239489a9ee2d1a082f701847dccd209f0477ac..0000000000000000000000000000000000000000
--- a/spaces/adorp/ControlNet-v1-1-duplicate/model.py
+++ /dev/null
@@ -1,591 +0,0 @@
-from __future__ import annotations
-
-import gc
-
-import numpy as np
-import PIL.Image
-import torch
-from controlnet_aux.util import HWC3
-from diffusers import (ControlNetModel, DiffusionPipeline,
- StableDiffusionControlNetPipeline,
- UniPCMultistepScheduler)
-
-from cv_utils import resize_image
-from preprocessor import Preprocessor
-
-CONTROLNET_MODEL_IDS = {
- 'Openpose': 'lllyasviel/control_v11p_sd15_openpose',
- 'Canny': 'lllyasviel/control_v11p_sd15_canny',
- 'MLSD': 'lllyasviel/control_v11p_sd15_mlsd',
- 'scribble': 'lllyasviel/control_v11p_sd15_scribble',
- 'softedge': 'lllyasviel/control_v11p_sd15_softedge',
- 'segmentation': 'lllyasviel/control_v11p_sd15_seg',
- 'depth': 'lllyasviel/control_v11f1p_sd15_depth',
- 'NormalBae': 'lllyasviel/control_v11p_sd15_normalbae',
- 'lineart': 'lllyasviel/control_v11p_sd15_lineart',
- 'lineart_anime': 'lllyasviel/control_v11p_sd15s2_lineart_anime',
- 'shuffle': 'lllyasviel/control_v11e_sd15_shuffle',
- 'ip2p': 'lllyasviel/control_v11e_sd15_ip2p',
- 'inpaint': 'lllyasviel/control_v11e_sd15_inpaint',
-}
-
-
-def download_all_controlnet_weights() -> None:
- for model_id in CONTROLNET_MODEL_IDS.values():
- ControlNetModel.from_pretrained(model_id)
-
-
-class Model:
- def __init__(self,
- base_model_id: str = 'runwayml/stable-diffusion-v1-5',
- task_name: str = 'Canny'):
- self.device = torch.device(
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
- self.base_model_id = ''
- self.task_name = ''
- self.pipe = self.load_pipe(base_model_id, task_name)
- self.preprocessor = Preprocessor()
-
- def load_pipe(self, base_model_id: str, task_name) -> DiffusionPipeline:
- if base_model_id == self.base_model_id and task_name == self.task_name and hasattr(
- self, 'pipe') and self.pipe is not None:
- return self.pipe
- model_id = CONTROLNET_MODEL_IDS[task_name]
- controlnet = ControlNetModel.from_pretrained(model_id,
- torch_dtype=torch.float16)
- pipe = StableDiffusionControlNetPipeline.from_pretrained(
- base_model_id,
- safety_checker=None,
- controlnet=controlnet,
- torch_dtype=torch.float16)
- pipe.scheduler = UniPCMultistepScheduler.from_config(
- pipe.scheduler.config)
- if self.device.type == 'cuda':
- pipe.enable_xformers_memory_efficient_attention()
- pipe.to(self.device)
- torch.cuda.empty_cache()
- gc.collect()
- self.base_model_id = base_model_id
- self.task_name = task_name
- return pipe
-
- def set_base_model(self, base_model_id: str) -> str:
- if not base_model_id or base_model_id == self.base_model_id:
- return self.base_model_id
- del self.pipe
- torch.cuda.empty_cache()
- gc.collect()
- try:
- self.pipe = self.load_pipe(base_model_id, self.task_name)
- except Exception:
- self.pipe = self.load_pipe(self.base_model_id, self.task_name)
- return self.base_model_id
-
- def load_controlnet_weight(self, task_name: str) -> None:
- if task_name == self.task_name:
- return
- if self.pipe is not None and hasattr(self.pipe, 'controlnet'):
- del self.pipe.controlnet
- torch.cuda.empty_cache()
- gc.collect()
- model_id = CONTROLNET_MODEL_IDS[task_name]
- controlnet = ControlNetModel.from_pretrained(model_id,
- torch_dtype=torch.float16)
- controlnet.to(self.device)
- torch.cuda.empty_cache()
- gc.collect()
- self.pipe.controlnet = controlnet
- self.task_name = task_name
-
- def get_prompt(self, prompt: str, additional_prompt: str) -> str:
- if not prompt:
- prompt = additional_prompt
- else:
- prompt = f'{prompt}, {additional_prompt}'
- return prompt
-
- @torch.autocast('cuda')
- def run_pipe(
- self,
- prompt: str,
- negative_prompt: str,
- control_image: PIL.Image.Image,
- num_images: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- ) -> list[PIL.Image.Image]:
- if seed == -1:
- seed = np.random.randint(0, np.iinfo(np.int64).max)
- generator = torch.Generator().manual_seed(seed)
- return self.pipe(prompt=prompt,
- negative_prompt=negative_prompt,
- guidance_scale=guidance_scale,
- num_images_per_prompt=num_images,
- num_inference_steps=num_steps,
- generator=generator,
- image=control_image).images
-
- @torch.inference_mode()
- def process_canny(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- low_threshold: int,
- high_threshold: int,
- ) -> list[PIL.Image.Image]:
- self.preprocessor.load('Canny')
- control_image = self.preprocessor(image=image,
- low_threshold=low_threshold,
- high_threshold=high_threshold,
- detect_resolution=image_resolution)
-
- self.load_controlnet_weight('Canny')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_mlsd(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- value_threshold: float,
- distance_threshold: float,
- ) -> list[PIL.Image.Image]:
- self.preprocessor.load('MLSD')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- thr_v=value_threshold,
- thr_d=distance_threshold,
- )
- self.load_controlnet_weight('MLSD')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_scribble(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- elif preprocessor_name == 'HED':
- self.preprocessor.load(preprocessor_name)
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- scribble=False,
- )
- elif preprocessor_name == 'PidiNet':
- self.preprocessor.load(preprocessor_name)
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- safe=False,
- )
- self.load_controlnet_weight('scribble')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_scribble_interactive(
- self,
- image_and_mask: dict[str, np.ndarray],
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- ) -> list[PIL.Image.Image]:
- image = image_and_mask['mask']
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
-
- self.load_controlnet_weight('scribble')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_softedge(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- elif preprocessor_name in ['HED', 'HED safe']:
- safe = 'safe' in preprocessor_name
- self.preprocessor.load('HED')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- scribble=safe,
- )
- elif preprocessor_name in ['PidiNet', 'PidiNet safe']:
- safe = 'safe' in preprocessor_name
- self.preprocessor.load('PidiNet')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- safe=safe,
- )
- else:
- raise ValueError
- self.load_controlnet_weight('softedge')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_openpose(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- else:
- self.preprocessor.load('Openpose')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- hand_and_face=True,
- )
- self.load_controlnet_weight('Openpose')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_segmentation(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- else:
- self.preprocessor.load(preprocessor_name)
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- )
- self.load_controlnet_weight('segmentation')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_depth(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- else:
- self.preprocessor.load(preprocessor_name)
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- )
- self.load_controlnet_weight('depth')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_normal(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- else:
- self.preprocessor.load('NormalBae')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- )
- self.load_controlnet_weight('NormalBae')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_lineart(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- preprocess_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name in ['None', 'None (anime)']:
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- elif preprocessor_name in ['Lineart', 'Lineart coarse']:
- coarse = 'coarse' in preprocessor_name
- self.preprocessor.load('Lineart')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- coarse=coarse,
- )
- elif preprocessor_name == 'Lineart (anime)':
- self.preprocessor.load('LineartAnime')
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- detect_resolution=preprocess_resolution,
- )
- if 'anime' in preprocessor_name:
- self.load_controlnet_weight('lineart_anime')
- else:
- self.load_controlnet_weight('lineart')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_shuffle(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- preprocessor_name: str,
- ) -> list[PIL.Image.Image]:
- if preprocessor_name == 'None':
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- else:
- self.preprocessor.load(preprocessor_name)
- control_image = self.preprocessor(
- image=image,
- image_resolution=image_resolution,
- )
- self.load_controlnet_weight('shuffle')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
-
- @torch.inference_mode()
- def process_ip2p(
- self,
- image: np.ndarray,
- prompt: str,
- additional_prompt: str,
- negative_prompt: str,
- num_images: int,
- image_resolution: int,
- num_steps: int,
- guidance_scale: float,
- seed: int,
- ) -> list[PIL.Image.Image]:
- image = HWC3(image)
- image = resize_image(image, resolution=image_resolution)
- control_image = PIL.Image.fromarray(image)
- self.load_controlnet_weight('ip2p')
- results = self.run_pipe(
- prompt=self.get_prompt(prompt, additional_prompt),
- negative_prompt=negative_prompt,
- control_image=control_image,
- num_images=num_images,
- num_steps=num_steps,
- guidance_scale=guidance_scale,
- seed=seed,
- )
- return [control_image] + results
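Every process_* method above follows the same three-step pattern: build a control image (either by running the named preprocessor or by resizing the raw input), load the matching ControlNet weights, and call run_pipe, returning the control image followed by the generated samples. A minimal sketch of driving the Canny variant, assuming the method is named process_canny (by analogy with process_mlsd and the others) and that its leading parameters are image, prompt, additional_prompt, and negative_prompt like its siblings; the file name and sampling values are illustrative:

```python
import numpy as np
import PIL.Image

model = Model()  # the wrapper class these methods belong to; constructor args, if any, are omitted here

image = np.array(PIL.Image.open("input.jpg").convert("RGB"))  # illustrative input photo

outputs = model.process_canny(
    image=image,
    prompt="a watercolor painting of a house",
    additional_prompt="best quality, extremely detailed",
    negative_prompt="lowres, blurry",
    num_images=1,
    image_resolution=512,
    num_steps=20,
    guidance_scale=9.0,
    seed=0,
    low_threshold=100,
    high_threshold=200,
)
# outputs[0] is the Canny edge map used as the control image;
# outputs[1:] are the generated PIL images.
```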
diff --git a/spaces/adpro/dpt-depth16/README.md b/spaces/adpro/dpt-depth16/README.md
deleted file mode 100644
index a2df32f52be298450622acdf691911580499139c..0000000000000000000000000000000000000000
--- a/spaces/adpro/dpt-depth16/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Dpt Depth Estimation
-emoji: ⚡
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 2.8.13
-app_file: app.py
-pinned: false
-duplicated_from: adpro/dpt-depth01
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/adriansd12/Bible_Index/module/bible_index.py b/spaces/adriansd12/Bible_Index/module/bible_index.py
deleted file mode 100644
index e6b0d104fdd3f558192b77ac3761fcee837651fd..0000000000000000000000000000000000000000
--- a/spaces/adriansd12/Bible_Index/module/bible_index.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import numpy as np
-from sentence_transformers import SentenceTransformer, util
-
-
-class BibleIndex:
- def __init__(self, testament: str = "all") -> None:
- self.model = SentenceTransformer(
- "sentence-transformers/msmarco-bert-base-dot-v5"
- )
-
- self.testament = testament
-
- self.load_emb()
- self.load_text()
-
- def load_emb(self) -> None:
- self.emb = np.load(f"data/embeddings/{self.testament}_esv_embeddings.npy")
-
- def load_text(self) -> None:
- text_path = f"data/text/{self.testament}_testament_esv.txt"
-
- with open(text_path, "r") as f:
- self.text = f.readlines()[1:]
-
- def query(self, query: str = "", top_n: int = 10):
- query_emb = self.model.encode(query)
- scores = util.dot_score(query_emb, self.emb)[0].cpu().tolist()
-
- # Combine docs & scores
- doc_score_pairs = list(zip(self.text, scores))
-
- # Sort by decreasing score
- doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
-
- # Output passages & scores
- print("Query:", query)
- results = []
- for doc, score in doc_score_pairs[:top_n]:
- text_split = doc.split(",")
- results.append(
- {
- "src": f"{text_split[0]} {text_split[1]}:{text_split[2]}",
- "text": ",".join(text_split[3:])
- .replace("\xa0", "")
- .replace("\n", ""),
- "score": score,
- }
- )
- return results
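BibleIndex.query is a plain dense-retrieval loop: the query is encoded with the same SentenceTransformer used for the precomputed verse embeddings, scored with a dot product, and the top top_n verses come back as {src, text, score} dicts. A usage sketch, assuming the data/embeddings/*_esv_embeddings.npy and data/text/*_testament_esv.txt files expected by the constructor are present (the testament name below is illustrative):

```python
# Loads data/embeddings/old_esv_embeddings.npy and data/text/old_testament_esv.txt.
index = BibleIndex(testament="old")
for hit in index.query("love your neighbor", top_n=3):
    print(f"{hit['score']:.3f}  {hit['src']}  {hit['text']}")
```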
diff --git a/spaces/akhaliq/JoJoGAN/e4e/editings/sefa.py b/spaces/akhaliq/JoJoGAN/e4e/editings/sefa.py
deleted file mode 100644
index db7083ce463b765a7cf452807883a3b85fb63fa5..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/JoJoGAN/e4e/editings/sefa.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import torch
-import numpy as np
-from tqdm import tqdm
-
-
-def edit(generator, latents, indices, semantics=1, start_distance=-15.0, end_distance=15.0, num_samples=1, step=11):
-
- layers, boundaries, values = factorize_weight(generator, indices)
- codes = latents.detach().cpu().numpy() # (1,18,512)
-
- # Generate visualization pages.
- distances = np.linspace(start_distance, end_distance, step)
- num_sam = num_samples
- num_sem = semantics
-
- edited_latents = []
- for sem_id in tqdm(range(num_sem), desc='Semantic ', leave=False):
- boundary = boundaries[sem_id:sem_id + 1]
- for sam_id in tqdm(range(num_sam), desc='Sample ', leave=False):
- code = codes[sam_id:sam_id + 1]
- for col_id, d in enumerate(distances, start=1):
- temp_code = code.copy()
- temp_code[:, layers, :] += boundary * d
- edited_latents.append(torch.from_numpy(temp_code).float().cuda())
- return torch.cat(edited_latents)
-
-
-def factorize_weight(g_ema, layers='all'):
-
- weights = []
- if layers == 'all' or 0 in layers:
- weight = g_ema.conv1.conv.modulation.weight.T
- weights.append(weight.cpu().detach().numpy())
-
- if layers == 'all':
- layers = list(range(g_ema.num_layers - 1))
- else:
- layers = [l - 1 for l in layers if l != 0]
-
- for idx in layers:
- weight = g_ema.convs[idx].conv.modulation.weight.T
- weights.append(weight.cpu().detach().numpy())
- weight = np.concatenate(weights, axis=1).astype(np.float32)
- weight = weight / np.linalg.norm(weight, axis=0, keepdims=True)
- eigen_values, eigen_vectors = np.linalg.eig(weight.dot(weight.T))
- return layers, eigen_vectors.T, eigen_values
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/AttDef.pod b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/AttDef.pod
deleted file mode 100644
index b5acb78f2e7d95c6638117f5973342da36c00689..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/AttDef.pod
+++ /dev/null
@@ -1,36 +0,0 @@
-=head1 NAME
-
-XML::DOM::AttDef - A single XML attribute definition in an ATTLIST in XML::DOM
-
-=head1 DESCRIPTION
-
-XML::DOM::AttDef extends L<XML::DOM::Node>, but is not part of the DOM Level 1
-specification.
-
-Each object of this class represents one attribute definition in an AttlistDecl.
-
-=head2 METHODS
-
-=over 4
-
-=item getName
-
-Returns the attribute name.
-
-=item getDefault
-
-Returns the default value, or undef.
-
-=item isFixed
-
-Whether the attribute value is fixed (see #FIXED keyword.)
-
-=item isRequired
-
-Whether the attribute value is required (see #REQUIRED keyword.)
-
-=item isImplied
-
-Whether the attribute value is implied (see #IMPLIED keyword.)
-
-=back
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh
deleted file mode 100644
index b4102b80fcd75e320a0f3540112adc4311171dd9..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/libritts/voc1/local/data_prep.sh
+++ /dev/null
@@ -1,85 +0,0 @@
-#!/bin/bash
-
-# Copyright 2020 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-
-# shellcheck disable=SC1091
-. ./path.sh || exit 1;
-
-# shellcheck disable=SC1091
-. utils/parse_options.sh || exit 1;
-
-db_root=$1
-part=$2
-data_dir=$3
-db_label_root=$4
-
-# check arguments
-if [ $# -lt 3 ] || [ $# -gt 4 ]; then
-    echo "Usage: $0 [Options] <db_root> <part> <data_dir> [<db_label_root>]"
- echo "e.g.: $0 downloads/LibriTTS train-clean-100 data"
- echo "e.g.: $0 downloads/LibriTTS train-clean-100 data downloads/LibriTTSLabel"
- exit 1
-fi
-
-set -euo pipefail
-
-# check spk existence
-[ ! -e "${db_root}/${part}" ] && \
- echo "${part} does not exist." >&2 && exit 1;
-
-[ ! -e "${data_dir}/${part}" ] && mkdir -p "${data_dir}/${part}"
-
-# set filenames
-scp="${data_dir}/${part}/wav.scp"
-if [ -n "${db_label_root}" ]; then
- use_segments=true
- segments="${data_dir}/${part}/segments"
-else
- use_segments=false
-fi
-
-# check file existence
-[ -e "${scp}" ] && rm "${scp}"
-if "${use_segments}"; then
- [ -e "${segments}" ] && rm "${segments}"
-fi
-
-# make scp and segments
-find "${db_root}/${part}" -follow -name "*.wav" | sort | while read -r wav; do
- id=$(basename "${wav}" | sed -e "s/\.[^\.]*$//g")
- lab=$(echo "${wav}" | sed -e "s;${db_root}/${part};${db_label_root}/lab/phone/${part};g" -e "s/.wav/.lab/g")
-
- # check lab existence
- if "${use_segments}" && [ ! -e "${lab}" ]; then
- echo "${id} does not have a label file. skipped."
- continue
- fi
-
- echo "${id} ${wav}" >> "${scp}"
-
- if "${use_segments}"; then
- # parse label
- idx=1
- while true; do
- symbol=$(sed -n "${idx}p" "${lab}" | awk '{print $3}')
- if [ "${symbol}" != "sil" ]; then
- start_sec=$(sed -n "${idx}p" "${lab}" | awk '{print $1}')
- break
- fi
- idx=$((idx+1))
- done
- idx=$(wc -l < "${lab}")
- while true; do
- symbol=$(sed -n "${idx}p" "${lab}" | awk '{print $3}')
- if [ -n "${symbol}" ] && [ "${symbol}" != "sp" ]; then
- end_sec=$(sed -n "${idx}p" "${lab}" | awk '{print $2}')
- break
- fi
- idx=$((idx-1))
- done
- echo "${id} ${id} ${start_sec} ${end_sec}" >> "${segments}"
- fi
-done
-
-echo "Successfully prepared ${part} data."
diff --git a/spaces/akhaliq/lama/models/ade20k/resnet.py b/spaces/akhaliq/lama/models/ade20k/resnet.py
deleted file mode 100644
index 3e1d521f171c984cf6a7ff3dcebd96f8c5faf908..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/models/ade20k/resnet.py
+++ /dev/null
@@ -1,181 +0,0 @@
-"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch"""
-
-import math
-
-import torch.nn as nn
-from torch.nn import BatchNorm2d
-
-from .utils import load_url
-
-__all__ = ['ResNet', 'resnet50']
-
-
-model_urls = {
- 'resnet50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnet50-imagenet.pth',
-}
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- self.bn1 = BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(Bottleneck, self).__init__()
- self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
- self.bn1 = BatchNorm2d(planes)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
- self.bn2 = BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
- self.bn3 = BatchNorm2d(planes * 4)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers, num_classes=1000):
- self.inplanes = 128
- super(ResNet, self).__init__()
- self.conv1 = conv3x3(3, 64, stride=2)
- self.bn1 = BatchNorm2d(64)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(64, 64)
- self.bn2 = BatchNorm2d(64)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = conv3x3(64, 128)
- self.bn3 = BatchNorm2d(128)
- self.relu3 = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.layer1 = self._make_layer(block, 64, layers[0])
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
- self.avgpool = nn.AvgPool2d(7, stride=1)
- self.fc = nn.Linear(512 * block.expansion, num_classes)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(2. / n))
- elif isinstance(m, BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.maxpool(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
-
- x = self.avgpool(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
-
- return x
-
-
-def resnet50(pretrained=False, **kwargs):
- """Constructs a ResNet-50 model.
-
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet50']), strict=False)
- return model
-
-
-def resnet18(pretrained=False, **kwargs):
- """Constructs a ResNet-18 model.
- Args:
- pretrained (bool): If True, returns a model pre-trained on ImageNet
- """
- model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
- if pretrained:
- model.load_state_dict(load_url(model_urls['resnet18']))
- return model
\ No newline at end of file
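resnet50 combines Bottleneck blocks with the deep-stem variant of ResNet used here (three 3x3 convolutions instead of a single 7x7); note that resnet18 looks up a 'resnet18' key that is not present in model_urls, so only the ResNet-50 weights can actually be downloaded. A quick shape check without pretrained weights, assuming torch is installed:

```python
import torch

model = resnet50(pretrained=False, num_classes=1000)
x = torch.randn(1, 3, 224, 224)   # stem + 4 stages reduce 224 -> 7 before the 7x7 average pool
with torch.no_grad():
    logits = model(x)
print(logits.shape)               # torch.Size([1, 1000])
```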
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/datetime.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/datetime.py
deleted file mode 100644
index 8668b3b0ec1deec2aeb7ff6bd94265d6705e05bf..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/utils/datetime.py
+++ /dev/null
@@ -1,11 +0,0 @@
-"""For when pip wants to check the date or time.
-"""
-
-import datetime
-
-
-def today_is_later_than(year: int, month: int, day: int) -> bool:
- today = datetime.date.today()
- given = datetime.date(year, month, day)
-
- return today > given
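The comparison is strict, so the given date itself does not count as "later". A tiny illustration:

```python
import datetime

print(today_is_later_than(2020, 12, 31))   # True on any date after 2020-12-31

today = datetime.date.today()
print(today_is_later_than(today.year, today.month, today.day))   # False: strict '>' excludes today
```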
diff --git a/spaces/allknowingroger/text-generation-webui-space-1/modules/html_generator.py b/spaces/allknowingroger/text-generation-webui-space-1/modules/html_generator.py
deleted file mode 100644
index 162040bac68c2e987b33a02ccb12e90b51a63b2d..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/text-generation-webui-space-1/modules/html_generator.py
+++ /dev/null
@@ -1,357 +0,0 @@
-'''
-
-This is a library for formatting GPT-4chan and chat outputs as nice HTML.
-
-'''
-
-import os
-import re
-from pathlib import Path
-
-from PIL import Image
-
-# This is to store the paths to the thumbnails of the profile pictures
-image_cache = {}
-
-def generate_basic_html(s):
- css = """
- .container {
- max-width: 600px;
- margin-left: auto;
- margin-right: auto;
- background-color: rgb(31, 41, 55);
- padding:3em;
- }
- .container p {
- font-size: 16px !important;
- color: white !important;
- margin-bottom: 22px;
- line-height: 1.4 !important;
- }
- """
- s = '\n'.join([f'
-
-
-
-
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE/documentation.md b/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE/documentation.md
deleted file mode 100644
index 88214d62e5228639491e019c78bb4171d535cdd1..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE/documentation.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-name: "\U0001F4DA Documentation Issue"
-about: Report a problem about existing documentation, comments, website or tutorials.
-labels: documentation
-
----
-
-## 📚 Documentation Issue
-
-This issue category is for problems about existing documentation, not for asking how-to questions.
-
-* Provide a link to an existing documentation/comment/tutorial:
-
-* How should the above documentation/comment/tutorial improve:
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/env.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/env.py
deleted file mode 100644
index 40634c17c73273ac8927632be164f466cfe7d1fa..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/env.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import importlib
-import importlib.util
-import logging
-import numpy as np
-import os
-import random
-import sys
-from datetime import datetime
-import torch
-
-__all__ = ["seed_all_rng"]
-
-
-TORCH_VERSION = tuple(int(x) for x in torch.__version__.split(".")[:2])
-"""
-PyTorch version as a tuple of 2 ints. Useful for comparison.
-"""
-
-
-DOC_BUILDING = os.getenv("_DOC_BUILDING", False) # set in docs/conf.py
-"""
-Whether we're building documentation.
-"""
-
-
-def seed_all_rng(seed=None):
- """
- Set the random seed for the RNG in torch, numpy and python.
-
- Args:
- seed (int): if None, will use a strong random seed.
- """
- if seed is None:
- seed = (
- os.getpid()
- + int(datetime.now().strftime("%S%f"))
- + int.from_bytes(os.urandom(2), "big")
- )
- logger = logging.getLogger(__name__)
- logger.info("Using a generated random seed {}".format(seed))
- np.random.seed(seed)
- torch.manual_seed(seed)
- random.seed(seed)
- os.environ["PYTHONHASHSEED"] = str(seed)
-
-
-# from https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path
-def _import_file(module_name, file_path, make_importable=False):
- spec = importlib.util.spec_from_file_location(module_name, file_path)
- module = importlib.util.module_from_spec(spec)
- spec.loader.exec_module(module)
- if make_importable:
- sys.modules[module_name] = module
- return module
-
-
-def _configure_libraries():
- """
- Configurations for some libraries.
- """
- # An environment option to disable `import cv2` globally,
- # in case it leads to negative performance impact
- disable_cv2 = int(os.environ.get("DETECTRON2_DISABLE_CV2", False))
- if disable_cv2:
- sys.modules["cv2"] = None
- else:
- # Disable opencl in opencv since its interaction with cuda often has negative effects
- # This envvar is supported after OpenCV 3.4.0
- os.environ["OPENCV_OPENCL_RUNTIME"] = "disabled"
- try:
- import cv2
-
- if int(cv2.__version__.split(".")[0]) >= 3:
- cv2.ocl.setUseOpenCL(False)
- except ModuleNotFoundError:
- # Other types of ImportError, if happened, should not be ignored.
- # Because a failed opencv import could mess up address space
- # https://github.com/skvark/opencv-python/issues/381
- pass
-
- def get_version(module, digit=2):
- return tuple(map(int, module.__version__.split(".")[:digit]))
-
- # fmt: off
- assert get_version(torch) >= (1, 4), "Requires torch>=1.4"
- import fvcore
- assert get_version(fvcore, 3) >= (0, 1, 2), "Requires fvcore>=0.1.2"
- import yaml
- assert get_version(yaml) >= (5, 1), "Requires pyyaml>=5.1"
- # fmt: on
-
-
-_ENV_SETUP_DONE = False
-
-
-def setup_environment():
- """Perform environment setup work. The default setup is a no-op, but this
- function allows the user to specify a Python source file or a module in
- the $DETECTRON2_ENV_MODULE environment variable, that performs
- custom setup work that may be necessary to their computing environment.
- """
- global _ENV_SETUP_DONE
- if _ENV_SETUP_DONE:
- return
- _ENV_SETUP_DONE = True
-
- _configure_libraries()
-
- custom_module_path = os.environ.get("DETECTRON2_ENV_MODULE")
-
- if custom_module_path:
- setup_custom_environment(custom_module_path)
- else:
- # The default setup is a no-op
- pass
-
-
-def setup_custom_environment(custom_module):
- """
- Load custom environment setup by importing a Python source file or a
- module, and run the setup function.
- """
- if custom_module.endswith(".py"):
- module = _import_file("detectron2.utils.env.custom_module", custom_module)
- else:
- module = importlib.import_module(custom_module)
- assert hasattr(module, "setup_environment") and callable(module.setup_environment), (
- "Custom environment module defined in {} does not have the "
- "required callable attribute 'setup_environment'."
- ).format(custom_module)
- module.setup_environment()
-
-
-def fixup_module_metadata(module_name, namespace, keys=None):
- """
- Fix the __qualname__ of module members to be their exported api name, so
- when they are referenced in docs, sphinx can find them. Reference:
- https://github.com/python-trio/trio/blob/6754c74eacfad9cc5c92d5c24727a2f3b620624e/trio/_util.py#L216-L241
- """
- if not DOC_BUILDING:
- return
- seen_ids = set()
-
- def fix_one(qualname, name, obj):
- # avoid infinite recursion (relevant when using
- # typing.Generic, for example)
- if id(obj) in seen_ids:
- return
- seen_ids.add(id(obj))
-
- mod = getattr(obj, "__module__", None)
- if mod is not None and (mod.startswith(module_name) or mod.startswith("fvcore.")):
- obj.__module__ = module_name
- # Modules, unlike everything else in Python, put fully-qualitied
- # names into their __name__ attribute. We check for "." to avoid
- # rewriting these.
- if hasattr(obj, "__name__") and "." not in obj.__name__:
- obj.__name__ = name
- obj.__qualname__ = qualname
- if isinstance(obj, type):
- for attr_name, attr_value in obj.__dict__.items():
- fix_one(objname + "." + attr_name, attr_name, attr_value)
-
- if keys is None:
- keys = namespace.keys()
- for objname in keys:
- if not objname.startswith("_"):
- obj = namespace[objname]
- fix_one(objname, objname, obj)
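seed_all_rng seeds Python's random, NumPy, and torch from a single integer (or derives one from the PID, wall clock, and os.urandom when called with None), and also exports PYTHONHASHSEED. A minimal determinism check, assuming numpy and torch are installed:

```python
import random
import numpy as np
import torch

seed_all_rng(42)
a = (random.random(), float(np.random.rand()), torch.rand(1).item())

seed_all_rng(42)
b = (random.random(), float(np.random.rand()), torch.rand(1).item())

assert a == b   # reseeding reproduces the same draws from all three RNGs
```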
diff --git a/spaces/cahya/websocket/app/main.py b/spaces/cahya/websocket/app/main.py
deleted file mode 100644
index 8ea6ea468b8f4489231f08c5f9acd74a2954d0b7..0000000000000000000000000000000000000000
--- a/spaces/cahya/websocket/app/main.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from fastapi import FastAPI, WebSocket
-from fastapi.responses import HTMLResponse
-import os
-
-
-app = FastAPI()
-
-html = """
-
-
-
- Chat
-
-
-
"
- for name, value in os.environ.items():
- environment_variables += f"{name}: {value} "
- return HTMLResponse(environment_variables)
-
-@app.websocket("/ws")
-async def websocket_endpoint(websocket: WebSocket):
- await websocket.accept()
- while True:
- data = await websocket.receive_text()
- await websocket.send_text(f"Message text was: {data}")
-
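The /ws route above is a plain echo loop, which can be exercised in-process with Starlette's test client (bundled with FastAPI) instead of running a server. A hedged sketch; the import path is illustrative, and the disconnect that ends the loop is suppressed in case the test client re-raises it on exit:

```python
from contextlib import suppress

from fastapi.testclient import TestClient
from starlette.websockets import WebSocketDisconnect

from app.main import app  # illustrative import path for the module above

client = TestClient(app)
with suppress(WebSocketDisconnect):
    with client.websocket_connect("/ws") as ws:
        ws.send_text("hello")
        assert ws.receive_text() == "Message text was: hello"
```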
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMath.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMath.py
deleted file mode 100644
index ac7d36b698c2ec9839d8a771734c9f730f701534..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMath.py
+++ /dev/null
@@ -1,263 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# a simple math add-on for the Python Imaging Library
-#
-# History:
-# 1999-02-15 fl Original PIL Plus release
-# 2005-05-05 fl Simplified and cleaned up for PIL 1.1.6
-# 2005-09-12 fl Fixed int() and float() for Python 2.4.1
-#
-# Copyright (c) 1999-2005 by Secret Labs AB
-# Copyright (c) 2005 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import builtins
-
-from . import Image, _imagingmath
-
-
-def _isconstant(v):
- return isinstance(v, (int, float))
-
-
-class _Operand:
- """Wraps an image operand, providing standard operators"""
-
- def __init__(self, im):
- self.im = im
-
- def __fixup(self, im1):
- # convert image to suitable mode
- if isinstance(im1, _Operand):
- # argument was an image.
- if im1.im.mode in ("1", "L"):
- return im1.im.convert("I")
- elif im1.im.mode in ("I", "F"):
- return im1.im
- else:
- msg = f"unsupported mode: {im1.im.mode}"
- raise ValueError(msg)
- else:
- # argument was a constant
- if _isconstant(im1) and self.im.mode in ("1", "L", "I"):
- return Image.new("I", self.im.size, im1)
- else:
- return Image.new("F", self.im.size, im1)
-
- def apply(self, op, im1, im2=None, mode=None):
- im1 = self.__fixup(im1)
- if im2 is None:
- # unary operation
- out = Image.new(mode or im1.mode, im1.size, None)
- im1.load()
- try:
- op = getattr(_imagingmath, op + "_" + im1.mode)
- except AttributeError as e:
- msg = f"bad operand type for '{op}'"
- raise TypeError(msg) from e
- _imagingmath.unop(op, out.im.id, im1.im.id)
- else:
- # binary operation
- im2 = self.__fixup(im2)
- if im1.mode != im2.mode:
- # convert both arguments to floating point
- if im1.mode != "F":
- im1 = im1.convert("F")
- if im2.mode != "F":
- im2 = im2.convert("F")
- if im1.size != im2.size:
- # crop both arguments to a common size
- size = (min(im1.size[0], im2.size[0]), min(im1.size[1], im2.size[1]))
- if im1.size != size:
- im1 = im1.crop((0, 0) + size)
- if im2.size != size:
- im2 = im2.crop((0, 0) + size)
- out = Image.new(mode or im1.mode, im1.size, None)
- im1.load()
- im2.load()
- try:
- op = getattr(_imagingmath, op + "_" + im1.mode)
- except AttributeError as e:
- msg = f"bad operand type for '{op}'"
- raise TypeError(msg) from e
- _imagingmath.binop(op, out.im.id, im1.im.id, im2.im.id)
- return _Operand(out)
-
- # unary operators
- def __bool__(self):
- # an image is "true" if it contains at least one non-zero pixel
- return self.im.getbbox() is not None
-
- def __abs__(self):
- return self.apply("abs", self)
-
- def __pos__(self):
- return self
-
- def __neg__(self):
- return self.apply("neg", self)
-
- # binary operators
- def __add__(self, other):
- return self.apply("add", self, other)
-
- def __radd__(self, other):
- return self.apply("add", other, self)
-
- def __sub__(self, other):
- return self.apply("sub", self, other)
-
- def __rsub__(self, other):
- return self.apply("sub", other, self)
-
- def __mul__(self, other):
- return self.apply("mul", self, other)
-
- def __rmul__(self, other):
- return self.apply("mul", other, self)
-
- def __truediv__(self, other):
- return self.apply("div", self, other)
-
- def __rtruediv__(self, other):
- return self.apply("div", other, self)
-
- def __mod__(self, other):
- return self.apply("mod", self, other)
-
- def __rmod__(self, other):
- return self.apply("mod", other, self)
-
- def __pow__(self, other):
- return self.apply("pow", self, other)
-
- def __rpow__(self, other):
- return self.apply("pow", other, self)
-
- # bitwise
- def __invert__(self):
- return self.apply("invert", self)
-
- def __and__(self, other):
- return self.apply("and", self, other)
-
- def __rand__(self, other):
- return self.apply("and", other, self)
-
- def __or__(self, other):
- return self.apply("or", self, other)
-
- def __ror__(self, other):
- return self.apply("or", other, self)
-
- def __xor__(self, other):
- return self.apply("xor", self, other)
-
- def __rxor__(self, other):
- return self.apply("xor", other, self)
-
- def __lshift__(self, other):
- return self.apply("lshift", self, other)
-
- def __rshift__(self, other):
- return self.apply("rshift", self, other)
-
- # logical
- def __eq__(self, other):
- return self.apply("eq", self, other)
-
- def __ne__(self, other):
- return self.apply("ne", self, other)
-
- def __lt__(self, other):
- return self.apply("lt", self, other)
-
- def __le__(self, other):
- return self.apply("le", self, other)
-
- def __gt__(self, other):
- return self.apply("gt", self, other)
-
- def __ge__(self, other):
- return self.apply("ge", self, other)
-
-
-# conversions
-def imagemath_int(self):
- return _Operand(self.im.convert("I"))
-
-
-def imagemath_float(self):
- return _Operand(self.im.convert("F"))
-
-
-# logical
-def imagemath_equal(self, other):
- return self.apply("eq", self, other, mode="I")
-
-
-def imagemath_notequal(self, other):
- return self.apply("ne", self, other, mode="I")
-
-
-def imagemath_min(self, other):
- return self.apply("min", self, other)
-
-
-def imagemath_max(self, other):
- return self.apply("max", self, other)
-
-
-def imagemath_convert(self, mode):
- return _Operand(self.im.convert(mode))
-
-
-ops = {}
-for k, v in list(globals().items()):
- if k[:10] == "imagemath_":
- ops[k[10:]] = v
-
-
-def eval(expression, _dict={}, **kw):
- """
- Evaluates an image expression.
-
- :param expression: A string containing a Python-style expression.
- :param options: Values to add to the evaluation context. You
- can either use a dictionary, or one or more keyword
- arguments.
- :return: The evaluated expression. This is usually an image object, but can
- also be an integer, a floating point value, or a pixel tuple,
- depending on the expression.
- """
-
- # build execution namespace
- args = ops.copy()
- args.update(_dict)
- args.update(kw)
- for k, v in list(args.items()):
- if hasattr(v, "im"):
- args[k] = _Operand(v)
-
-    compiled_code = compile(expression, "<string>", "eval")
-
- def scan(code):
- for const in code.co_consts:
- if type(const) == type(compiled_code):
- scan(const)
-
- for name in code.co_names:
- if name not in args and name != "abs":
- msg = f"'{name}' not allowed"
- raise ValueError(msg)
-
- scan(compiled_code)
- out = builtins.eval(expression, {"__builtins": {"abs": abs}}, args)
- try:
- return out.im
- except AttributeError:
- return out
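eval is Pillow's public ImageMath entry point: image operands named in the expression are wrapped in _Operand so the overloaded operators dispatch to _imagingmath, and the bytecode scan restricts the expression to the registered ops plus abs. A small usage sketch with Pillow installed:

```python
from PIL import Image, ImageMath

im1 = Image.new("L", (4, 4), 10)
im2 = Image.new("L", (4, 4), 3)

# "min" and "convert" come from the ops table built above; "+ 5" uses _Operand.__add__.
out = ImageMath.eval("convert(min(a, b) + 5, 'L')", a=im1, b=im2)
print(out.getpixel((0, 0)))   # 8
```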
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/threadpool/binary.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/threadpool/binary.py
deleted file mode 100644
index 63fcaff25959472c3282674a0c9e95160a8210b7..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/threadpool/binary.py
+++ /dev/null
@@ -1,104 +0,0 @@
-from ..base import AsyncBase, AsyncIndirectBase
-from .utils import delegate_to_executor, proxy_method_directly, proxy_property_directly
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "read1",
- "readinto",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "writable",
- "write",
- "writelines",
-)
-@proxy_method_directly("detach", "fileno", "readable")
-@proxy_property_directly("closed", "raw", "name", "mode")
-class AsyncBufferedIOBase(AsyncBase):
- """The asyncio executor version of io.BufferedWriter and BufferedIOBase."""
-
-
-@delegate_to_executor("peek")
-class AsyncBufferedReader(AsyncBufferedIOBase):
- """The asyncio executor version of io.BufferedReader and Random."""
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "readall",
- "readinto",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "writable",
- "write",
- "writelines",
-)
-@proxy_method_directly("fileno", "readable")
-@proxy_property_directly("closed", "name", "mode")
-class AsyncFileIO(AsyncBase):
- """The asyncio executor version of io.FileIO."""
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "read1",
- "readinto",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "writable",
- "write",
- "writelines",
-)
-@proxy_method_directly("detach", "fileno", "readable")
-@proxy_property_directly("closed", "raw", "name", "mode")
-class AsyncIndirectBufferedIOBase(AsyncIndirectBase):
- """The indirect asyncio executor version of io.BufferedWriter and BufferedIOBase."""
-
-
-@delegate_to_executor("peek")
-class AsyncIndirectBufferedReader(AsyncIndirectBufferedIOBase):
- """The indirect asyncio executor version of io.BufferedReader and Random."""
-
-
-@delegate_to_executor(
- "close",
- "flush",
- "isatty",
- "read",
- "readall",
- "readinto",
- "readline",
- "readlines",
- "seek",
- "seekable",
- "tell",
- "truncate",
- "writable",
- "write",
- "writelines",
-)
-@proxy_method_directly("fileno", "readable")
-@proxy_property_directly("closed", "name", "mode")
-class AsyncIndirectFileIO(AsyncIndirectBase):
- """The indirect asyncio executor version of io.FileIO."""
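These classes are not constructed directly; aiofiles.open returns the appropriate wrapper for the requested mode, and every delegated method is awaited while the real blocking call runs in an executor. A minimal sketch of a chunked binary copy (file names are illustrative):

```python
import asyncio
import aiofiles

async def copy_binary(src: str, dst: str) -> None:
    async with aiofiles.open(src, "rb") as fin, aiofiles.open(dst, "wb") as fout:
        while chunk := await fin.read(64 * 1024):   # read/write are delegated to the executor
            await fout.write(chunk)

# asyncio.run(copy_binary("input.bin", "copy.bin"))
```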
diff --git a/spaces/chansung/palm-with-gradio-chat/README.md b/spaces/chansung/palm-with-gradio-chat/README.md
deleted file mode 100644
index 7fb9b21e04591a1dc96aae5adae5c3d4e116f946..0000000000000000000000000000000000000000
--- a/spaces/chansung/palm-with-gradio-chat/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PaLM2 With Gradio Chat
-emoji: 🌴💬
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.41.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/cheetah003/HMMC_t2v_search/modules/until_module.py b/spaces/cheetah003/HMMC_t2v_search/modules/until_module.py
deleted file mode 100644
index 204cad91ad7309dfe0064a7d14c6843a9f4dd60d..0000000000000000000000000000000000000000
--- a/spaces/cheetah003/HMMC_t2v_search/modules/until_module.py
+++ /dev/null
@@ -1,295 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch BERT model."""
-
-import logging
-import numpy as np
-import torch
-from torch import nn
-import torch.nn.functional as F
-import math
-from modules.until_config import PretrainedConfig
-
-logger = logging.getLogger(__name__)
-
-
-def gelu(x):
- """Implementation of the gelu activation function.
- For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
- 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
- """
- return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
-
-def swish(x):
- return x * torch.sigmoid(x)
-
-def get_dual_matrix(sim_matrix):
- if torch.is_tensor(sim_matrix):
- pass
- else:
- sim_matrix = torch.tensor(sim_matrix)
- temp = 1
- # sim_matrix = sim_matrix * F.softmax(sim_matrix / temp, dim=0) * len(sim_matrix)
- alpha = F.softmax(sim_matrix / temp, dim=0)
- beta = F.softmax(sim_matrix / temp, dim=1)
- sim_matrix = sim_matrix * alpha * beta
- return sim_matrix
-
-
-ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish}
-
-class LayerNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-12):
- """Construct a layernorm module in the TF style (epsilon inside the square root).
- """
- super(LayerNorm, self).__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.bias = nn.Parameter(torch.zeros(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, x):
- u = x.mean(-1, keepdim=True)
- s = (x - u).pow(2).mean(-1, keepdim=True)
- x = (x - u) / torch.sqrt(s + self.variance_epsilon)
- return self.weight * x + self.bias
-
-class PreTrainedModel(nn.Module):
- """ An abstract class to handle weights initialization and
-        a simple interface for downloading and loading pretrained models.
- """
- def __init__(self, config, *inputs, **kwargs):
- super(PreTrainedModel, self).__init__()
- if not isinstance(config, PretrainedConfig):
- raise ValueError(
- "Parameter config in `{}(config)` should be an instance of class `PretrainedConfig`. "
- "To create a model from a Google pretrained model use "
- "`model = {}.from_pretrained(PRETRAINED_MODEL_NAME)`".format(
- self.__class__.__name__, self.__class__.__name__
- ))
- self.config = config
-
- def init_weights(self, module):
- """ Initialize the weights.
- """
- if isinstance(module, (nn.Linear, nn.Embedding)):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- elif isinstance(module, LayerNorm):
- if 'beta' in dir(module) and 'gamma' in dir(module):
- module.beta.data.zero_()
- module.gamma.data.fill_(1.0)
- else:
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
- def resize_token_embeddings(self, new_num_tokens=None):
- raise NotImplementedError
-
- @classmethod
- def init_preweight(cls, model, state_dict, prefix=None, task_config=None):
- old_keys = []
- new_keys = []
- for key in state_dict.keys():
- new_key = None
- if 'gamma' in key:
- new_key = key.replace('gamma', 'weight')
- if 'beta' in key:
- new_key = key.replace('beta', 'bias')
- if new_key:
- old_keys.append(key)
- new_keys.append(new_key)
- for old_key, new_key in zip(old_keys, new_keys):
- state_dict[new_key] = state_dict.pop(old_key)
-
- if prefix is not None:
- old_keys = []
- new_keys = []
- for key in state_dict.keys():
- old_keys.append(key)
- new_keys.append(prefix + key)
- for old_key, new_key in zip(old_keys, new_keys):
- state_dict[new_key] = state_dict.pop(old_key)
-
- missing_keys = []
- unexpected_keys = []
- error_msgs = []
- # copy state_dict so _load_from_state_dict can modify it
- metadata = getattr(state_dict, '_metadata', None)
- state_dict = state_dict.copy()
- if metadata is not None:
- state_dict._metadata = metadata
-
- def load(module, prefix=''):
- local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
- module._load_from_state_dict(
- state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
- for name, child in module._modules.items():
- if child is not None:
- load(child, prefix + name + '.')
-
- load(model, prefix='')
-
- if prefix is None and (task_config is None or task_config.local_rank == 0):
- logger.info("-" * 20)
- if len(missing_keys) > 0:
- logger.info("Weights of {} not initialized from pretrained model: {}"
- .format(model.__class__.__name__, "\n " + "\n ".join(missing_keys)))
- if len(unexpected_keys) > 0:
- logger.info("Weights from pretrained model not used in {}: {}"
- .format(model.__class__.__name__, "\n " + "\n ".join(unexpected_keys)))
- if len(error_msgs) > 0:
- logger.error("Weights from pretrained model cause errors in {}: {}"
- .format(model.__class__.__name__, "\n " + "\n ".join(error_msgs)))
-
- return model
-
- @property
- def dtype(self):
- """
- :obj:`torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype).
- """
- try:
- return next(self.parameters()).dtype
- except StopIteration:
- # For nn.DataParallel compatibility in PyTorch 1.5
- def find_tensor_attributes(module: nn.Module):
- tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]
- return tuples
-
- gen = self._named_members(get_members_fn=find_tensor_attributes)
- first_tuple = next(gen)
- return first_tuple[1].dtype
-
- @classmethod
- def from_pretrained(cls, config, state_dict=None, *inputs, **kwargs):
- """
- Instantiate a PreTrainedModel from a pre-trained model file or a pytorch state dict.
- Download and cache the pre-trained model file if needed.
- """
- # Instantiate model.
- model = cls(config, *inputs, **kwargs)
- if state_dict is None:
- return model
- model = cls.init_preweight(model, state_dict)
-
- return model
-
-##################################
-###### LOSS FUNCTION #############
-##################################
-class CrossEn(nn.Module):
- def __init__(self,):
- super(CrossEn, self).__init__()
-
- def forward(self, sim_matrix):
- logpt = F.log_softmax(sim_matrix, dim=-1)
- logpt = torch.diag(logpt)
- nce_loss = -logpt
- sim_loss = nce_loss.mean()
- return sim_loss
-
-class Dual_CrossEn(nn.Module):
- def __init__(self,):
- super(Dual_CrossEn, self).__init__()
-
- def forward(self, sim_matrix):
- sim_matrix = get_dual_matrix(sim_matrix)
- logpt = F.log_softmax(sim_matrix, dim=-1)
- logpt = torch.diag(logpt)
- nce_loss = -logpt
- sim_loss = nce_loss.mean()
- return sim_loss
-
-class MILNCELoss(nn.Module):
- def __init__(self, batch_size=1, n_pair=1,):
- super(MILNCELoss, self).__init__()
- self.batch_size = batch_size
- self.n_pair = n_pair
- torch_v = float(".".join(torch.__version__.split(".")[:2]))
- self.bool_dtype = torch.bool if torch_v >= 1.3 else torch.uint8
-
- def forward(self, sim_matrix):
- mm_mask = np.eye(self.batch_size)
- mm_mask = np.kron(mm_mask, np.ones((self.n_pair, self.n_pair)))
- mm_mask = torch.tensor(mm_mask).float().to(sim_matrix.device)
-
- from_text_matrix = sim_matrix + mm_mask * -1e12
- from_video_matrix = sim_matrix.transpose(1, 0)
-
- new_sim_matrix = torch.cat([from_video_matrix, from_text_matrix], dim=-1)
- logpt = F.log_softmax(new_sim_matrix, dim=-1)
-
- mm_mask_logpt = torch.cat([mm_mask, torch.zeros_like(mm_mask)], dim=-1)
- masked_logpt = logpt + (torch.ones_like(mm_mask_logpt) - mm_mask_logpt) * -1e12
-
- new_logpt = -torch.logsumexp(masked_logpt, dim=-1)
-
- logpt_choice = torch.zeros_like(new_logpt)
- mark_ind = torch.arange(self.batch_size).to(sim_matrix.device) * self.n_pair + (self.n_pair//2)
- logpt_choice[mark_ind] = 1
- sim_loss = new_logpt.masked_select(logpt_choice.to(dtype=self.bool_dtype)).mean()
- return sim_loss
-
-class MaxMarginRankingLoss(nn.Module):
- def __init__(self,
- margin=1.0,
- negative_weighting=False,
- batch_size=1,
- n_pair=1,
- hard_negative_rate=0.5,
- ):
- super(MaxMarginRankingLoss, self).__init__()
- self.margin = margin
- self.n_pair = n_pair
- self.batch_size = batch_size
- easy_negative_rate = 1 - hard_negative_rate
- self.easy_negative_rate = easy_negative_rate
- self.negative_weighting = negative_weighting
- if n_pair > 1 and batch_size > 1:
- alpha = easy_negative_rate / ((batch_size - 1) * (1 - easy_negative_rate))
- mm_mask = (1 - alpha) * np.eye(self.batch_size) + alpha
- mm_mask = np.kron(mm_mask, np.ones((n_pair, n_pair)))
- mm_mask = torch.tensor(mm_mask) * (batch_size * (1 - easy_negative_rate))
- self.mm_mask = mm_mask.float()
-
- def forward(self, x):
- d = torch.diag(x)
- max_margin = F.relu(self.margin + x - d.view(-1, 1)) + \
- F.relu(self.margin + x - d.view(1, -1))
- if self.negative_weighting and self.n_pair > 1 and self.batch_size > 1:
- max_margin = max_margin * self.mm_mask.to(max_margin.device)
- return max_margin.mean()
-
-class AllGather(torch.autograd.Function):
- """An autograd function that performs allgather on a tensor."""
-
- @staticmethod
- def forward(ctx, tensor, args):
- output = [torch.empty_like(tensor) for _ in range(args.world_size)]
- torch.distributed.all_gather(output, tensor)
- ctx.rank = args.rank
- ctx.batch_size = tensor.shape[0]
- return torch.cat(output, dim=0)
-
- @staticmethod
- def backward(ctx, grad_output):
- return (
- grad_output[ctx.batch_size * ctx.rank : ctx.batch_size * (ctx.rank + 1)],
- None,
- )
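CrossEn row-softmaxes a similarity matrix and averages the negative log-probability of its diagonal (the matched text-video pairs); Dual_CrossEn first reweights the matrix with the row/column dual softmax from get_dual_matrix. A small self-contained check:

```python
import torch

# Toy 3x3 similarity matrix; the diagonal holds the matched pairs.
sim = torch.tensor([[10.0, 1.0, 0.5],
                    [0.8, 9.0, 1.2],
                    [0.3, 0.7, 8.0]])

print(CrossEn()(sim))        # small loss: each row already puts most probability on its diagonal entry
print(Dual_CrossEn()(sim))   # dual-softmax-reweighted variant
```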
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/luke/run_luke_ner_no_trainer.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/luke/run_luke_ner_no_trainer.py
deleted file mode 100644
index 4c5227d2c7e011811dc5e716fe301a30f7c84160..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/luke/run_luke_ner_no_trainer.py
+++ /dev/null
@@ -1,712 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Fine-tuning (m)LUKE model on token classification tasks (NER, POS, CHUNKS) relying on the accelerate library 🤗
-without using a Trainer.
-"""
-
-import argparse
-import logging
-import math
-import os
-import random
-from pathlib import Path
-
-import datasets
-import torch
-from accelerate import Accelerator, DistributedDataParallelKwargs
-from datasets import ClassLabel, load_dataset, load_metric
-from huggingface_hub import Repository
-from luke_utils import DataCollatorForLukeTokenClassification, is_punctuation, padding_tensor
-from torch.utils.data import DataLoader
-from tqdm.auto import tqdm
-
-import transformers
-from transformers import (
- AdamW,
- LukeConfig,
- LukeForEntitySpanClassification,
- LukeTokenizer,
- SchedulerType,
- default_data_collator,
- get_scheduler,
- set_seed,
-)
-from transformers.file_utils import get_full_repo_name
-from transformers.utils.versions import require_version
-
-
-logger = logging.getLogger(__name__)
-require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/token-classification/requirements.txt")
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description="Finetune (m)LUKE on a token classification task (such as NER) with the accelerate library"
- )
- parser.add_argument(
- "--dataset_name",
- type=str,
- default=None,
- help="The name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--dataset_config_name",
- type=str,
- default=None,
- help="The configuration name of the dataset to use (via the datasets library).",
- )
- parser.add_argument(
- "--train_file", type=str, default=None, help="A csv or a json file containing the training data."
- )
- parser.add_argument(
- "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data."
- )
- parser.add_argument(
- "--text_column_name",
- type=str,
- default=None,
- help="The column name of text to input in the file (a csv or JSON file).",
- )
- parser.add_argument(
- "--label_column_name",
- type=str,
- default=None,
- help="The column name of label to input in the file (a csv or JSON file).",
- )
- parser.add_argument(
- "--max_length",
- type=int,
- default=128,
- help=(
- "The maximum total input sequence length after tokenization. Sequences longer than this will be truncated,"
- " sequences shorter will be padded if `--pad_to_max_length` is passed."
- ),
- )
- parser.add_argument(
- "--max_entity_length",
- type=int,
- default=32,
- help=(
- "The maximum total input entity length after tokenization (Used only for (M)Luke models). Sequences longer"
- " than this will be truncated, sequences shorter will be padded if `--pad_to_max_length` is passed."
- ),
- )
- parser.add_argument(
- "--max_mention_length",
- type=int,
- default=30,
- help=(
- "The maximum total input mention length after tokenization (Used only for (M)Luke models). Sequences"
- " longer than this will be truncated, sequences shorter will be padded if `--pad_to_max_length` is passed."
- ),
- )
- parser.add_argument(
- "--pad_to_max_length",
- action="store_true",
- help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.",
- )
- parser.add_argument(
- "--model_name_or_path",
- type=str,
- help="Path to pretrained model or model identifier from huggingface.co/models.",
- required=True,
- )
- parser.add_argument(
- "--config_name",
- type=str,
- default=None,
- help="Pretrained config name or path if not the same as model_name",
- )
- parser.add_argument(
- "--tokenizer_name",
- type=str,
- default=None,
- help="Pretrained tokenizer name or path if not the same as model_name",
- )
- parser.add_argument(
- "--per_device_train_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the training dataloader.",
- )
- parser.add_argument(
- "--per_device_eval_batch_size",
- type=int,
- default=8,
- help="Batch size (per device) for the evaluation dataloader.",
- )
- parser.add_argument(
- "--learning_rate",
- type=float,
- default=5e-5,
- help="Initial learning rate (after the potential warmup period) to use.",
- )
- parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.")
- parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.")
- parser.add_argument(
- "--max_train_steps",
- type=int,
- default=None,
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
- )
- parser.add_argument(
- "--gradient_accumulation_steps",
- type=int,
- default=1,
- help="Number of updates steps to accumulate before performing a backward/update pass.",
- )
- parser.add_argument(
- "--lr_scheduler_type",
- type=SchedulerType,
- default="linear",
- help="The scheduler type to use.",
- choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"],
- )
- parser.add_argument(
- "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler."
- )
- parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.")
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
- parser.add_argument(
- "--label_all_tokens",
- action="store_true",
- help="Setting labels of all special tokens to -100 and thus PyTorch will ignore them.",
- )
- parser.add_argument(
- "--return_entity_level_metrics",
- action="store_true",
-        help="Whether entity-level metrics are to be returned.",
- )
- parser.add_argument(
- "--task_name",
- type=str,
- default="ner",
- choices=["ner", "pos", "chunk"],
- help="The name of the task.",
- )
- parser.add_argument(
- "--debug",
- action="store_true",
- help="Activate debug mode and run training only with a subset of data.",
- )
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
- parser.add_argument(
- "--hub_model_id", type=str, help="The name of the repository to keep in sync with the local `output_dir`."
- )
- parser.add_argument("--hub_token", type=str, help="The token to use to push to the Model Hub.")
- args = parser.parse_args()
-
- # Sanity checks
- if args.task_name is None and args.train_file is None and args.validation_file is None:
- raise ValueError("Need either a task name or a training/validation file.")
- else:
- if args.train_file is not None:
- extension = args.train_file.split(".")[-1]
- assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
- if args.validation_file is not None:
- extension = args.validation_file.split(".")[-1]
- assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
-
- if args.push_to_hub:
- assert args.output_dir is not None, "Need an `output_dir` to create a repo when `--push_to_hub` is passed."
-
- return args
-
-
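-# A minimal sketch (not part of the original script) of the extension check that the
-# sanity-check block above applies to --train_file / --validation_file; the helper name
-# is hypothetical and exists purely for illustration.
-def _check_data_file_extension(path):
-    extension = path.split(".")[-1]
-    assert extension in ["csv", "json"], f"`{path}` should be a csv or a json file."
-    return extension
-
-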
-def main():
- args = parse_args()
-
- # Initialize the accelerator. We will let the accelerator handle device placement for us in this example.
- handler = DistributedDataParallelKwargs(find_unused_parameters=True)
- accelerator = Accelerator(kwargs_handlers=[handler])
- # Make one log on every process with the configuration for debugging.
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- level=logging.INFO,
- )
- logger.info(accelerator.state)
-
-    # Set up logging; we only want one process per machine to log things on the screen.
- # accelerator.is_local_main_process is only True for one process per machine.
- logger.setLevel(logging.INFO if accelerator.is_local_main_process else logging.ERROR)
- if accelerator.is_local_main_process:
- datasets.utils.logging.set_verbosity_warning()
- transformers.utils.logging.set_verbosity_info()
- else:
- datasets.utils.logging.set_verbosity_error()
- transformers.utils.logging.set_verbosity_error()
-
- # If passed along, set the training seed now.
- if args.seed is not None:
- set_seed(args.seed)
-
- # Handle the repository creation
- if accelerator.is_main_process:
- if args.push_to_hub:
- if args.hub_model_id is None:
- repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
- else:
- repo_name = args.hub_model_id
- repo = Repository(args.output_dir, clone_from=repo_name)
- elif args.output_dir is not None:
- os.makedirs(args.output_dir, exist_ok=True)
- accelerator.wait_for_everyone()
-
- # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
- # or just provide the name of one of the public datasets for token classification task available on the hub at https://huggingface.co/datasets/
- # (the dataset will be downloaded automatically from the datasets Hub).
- #
- # For CSV/JSON files, this script will use the column called 'tokens' or the first column if no column called
- # 'tokens' is found. You can easily tweak this behavior (see below).
- #
-    # In distributed training, the load_dataset function guarantees that only one local process can concurrently
- # download the dataset.
- if args.dataset_name is not None:
- # Downloading and loading a dataset from the hub.
- raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
- else:
- data_files = {}
- if args.train_file is not None:
- data_files["train"] = args.train_file
- if args.validation_file is not None:
- data_files["validation"] = args.validation_file
- extension = args.train_file.split(".")[-1]
- raw_datasets = load_dataset(extension, data_files=data_files)
- # Trim a number of training examples
- if args.debug:
- for split in raw_datasets.keys():
- raw_datasets[split] = raw_datasets[split].select(range(100))
- # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
- # https://huggingface.co/docs/datasets/loading_datasets.html.
-
- if raw_datasets["train"] is not None:
- column_names = raw_datasets["train"].column_names
- features = raw_datasets["train"].features
- else:
- column_names = raw_datasets["validation"].column_names
- features = raw_datasets["validation"].features
-
- if args.text_column_name is not None:
- text_column_name = args.text_column_name
- elif "tokens" in column_names:
- text_column_name = "tokens"
- else:
- text_column_name = column_names[0]
-
- if args.label_column_name is not None:
- label_column_name = args.label_column_name
- elif f"{args.task_name}_tags" in column_names:
- label_column_name = f"{args.task_name}_tags"
- else:
- label_column_name = column_names[1]
-
- # In the event the labels are not a `Sequence[ClassLabel]`, we will need to go through the dataset to get the
- # unique labels.
- def get_label_list(labels):
- unique_labels = set()
- for label in labels:
- unique_labels = unique_labels | set(label)
- label_list = list(unique_labels)
- label_list.sort()
- return label_list
-
- if isinstance(features[label_column_name].feature, ClassLabel):
- label_list = features[label_column_name].feature.names
- # No need to convert the labels since they are already ints.
- else:
- label_list = get_label_list(raw_datasets["train"][label_column_name])
- num_labels = len(label_list)
-
- # Map that sends B-Xxx label to its I-Xxx counterpart
- b_to_i_label = []
-
- for idx, label in enumerate(label_list):
- if label.startswith("B-") and label.replace("B-", "I-") in label_list:
- b_to_i_label.append(label_list.index(label.replace("B-", "I-")))
- else:
- b_to_i_label.append(idx)
-
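-    # For illustration only (hypothetical labels, not taken from any dataset): with
-    # label_list = ["O", "B-PER", "I-PER"], the loop above yields b_to_i_label = [0, 2, 2],
-    # i.e. "B-PER" is redirected to the index of "I-PER" while labels without an
-    # I- counterpart keep their own index.
-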
- # Load pretrained model and tokenizer
- #
- # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
- # download model & vocab.
- if args.config_name:
- config = LukeConfig.from_pretrained(args.config_name, num_labels=num_labels)
- elif args.model_name_or_path:
- config = LukeConfig.from_pretrained(args.model_name_or_path, num_labels=num_labels)
- else:
- logger.warning("You are instantiating a new config instance from scratch.")
-
- tokenizer_name_or_path = args.tokenizer_name if args.tokenizer_name else args.model_name_or_path
- if not tokenizer_name_or_path:
- raise ValueError(
- "You are instantiating a new tokenizer from scratch. This is not supported by this script."
-            " You can do it from another script, save it, and load it from here, using --tokenizer_name."
- )
-
- tokenizer = LukeTokenizer.from_pretrained(
- tokenizer_name_or_path,
- use_fast=False,
- task="entity_span_classification",
- max_entity_length=args.max_entity_length,
- max_mention_length=args.max_mention_length,
- )
-
- if args.model_name_or_path:
- model = LukeForEntitySpanClassification.from_pretrained(
- args.model_name_or_path,
- from_tf=bool(".ckpt" in args.model_name_or_path),
- config=config,
- )
- else:
- logger.info("Training new model from scratch")
- model = LukeForEntitySpanClassification.from_config(config)
-
- model.resize_token_embeddings(len(tokenizer))
-
- # Preprocessing the datasets.
- # First we tokenize all the texts.
- padding = "max_length" if args.pad_to_max_length else False
-
- def compute_sentence_boundaries_for_luke(examples):
- sentence_boundaries = []
-
- for tokens in examples[text_column_name]:
- sentence_boundaries.append([0, len(tokens)])
-
- examples["sentence_boundaries"] = sentence_boundaries
-
- return examples
-
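-    # Example (illustrative): a batch row with tokens ["John", "lives", "in", "Paris"] gets
-    # sentence_boundaries [0, 4], i.e. the whole example is treated as a single sentence
-    # covering tokens 0..3.
-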
- def compute_entity_spans_for_luke(examples):
- all_entity_spans = []
- texts = []
- all_labels_entity_spans = []
- all_original_entity_spans = []
-
- for labels, tokens, sentence_boundaries in zip(
- examples[label_column_name], examples[text_column_name], examples["sentence_boundaries"]
- ):
- subword_lengths = [len(tokenizer.tokenize(token)) for token in tokens]
- total_subword_length = sum(subword_lengths)
- _, context_end = sentence_boundaries
-
- if total_subword_length > args.max_length - 2:
- cur_length = sum(subword_lengths[:context_end])
- idx = context_end - 1
-
- while cur_length > args.max_length - 2:
- cur_length -= subword_lengths[idx]
- context_end -= 1
- idx -= 1
-
- text = ""
- sentence_words = tokens[:context_end]
- sentence_subword_lengths = subword_lengths[:context_end]
- word_start_char_positions = []
- word_end_char_positions = []
- labels_positions = {}
-
- for word, label in zip(sentence_words, labels):
- if word[0] == "'" or (len(word) == 1 and is_punctuation(word)):
- text = text.rstrip()
-
- word_start_char_positions.append(len(text))
- text += word
- word_end_char_positions.append(len(text))
- text += " "
- labels_positions[(word_start_char_positions[-1], word_end_char_positions[-1])] = label
-
- text = text.rstrip()
- texts.append(text)
- entity_spans = []
- labels_entity_spans = []
- original_entity_spans = []
-
- for word_start in range(len(sentence_words)):
- for word_end in range(word_start, len(sentence_words)):
- if (
- sum(sentence_subword_lengths[word_start:word_end]) <= tokenizer.max_mention_length
- and len(entity_spans) < tokenizer.max_entity_length
- ):
- entity_spans.append((word_start_char_positions[word_start], word_end_char_positions[word_end]))
- original_entity_spans.append((word_start, word_end + 1))
- if (
- word_start_char_positions[word_start],
- word_end_char_positions[word_end],
- ) in labels_positions:
- labels_entity_spans.append(
- labels_positions[
- (word_start_char_positions[word_start], word_end_char_positions[word_end])
- ]
- )
- else:
- labels_entity_spans.append(0)
-
- all_entity_spans.append(entity_spans)
- all_labels_entity_spans.append(labels_entity_spans)
- all_original_entity_spans.append(original_entity_spans)
-
- examples["entity_spans"] = all_entity_spans
- examples["text"] = texts
- examples["labels_entity_spans"] = all_labels_entity_spans
- examples["original_entity_spans"] = all_original_entity_spans
-
- return examples
-
- def tokenize_and_align_labels(examples):
- entity_spans = []
-
- for v in examples["entity_spans"]:
- entity_spans.append(list(map(tuple, v)))
-
- tokenized_inputs = tokenizer(
- examples["text"],
- entity_spans=entity_spans,
- max_length=args.max_length,
- padding=padding,
- truncation=True,
- )
-
- if padding == "max_length":
- tokenized_inputs["labels"] = padding_tensor(
- examples["labels_entity_spans"], -100, tokenizer.padding_side, tokenizer.max_entity_length
- )
- tokenized_inputs["original_entity_spans"] = padding_tensor(
- examples["original_entity_spans"], (-1, -1), tokenizer.padding_side, tokenizer.max_entity_length
- )
- tokenized_inputs[label_column_name] = padding_tensor(
- examples[label_column_name], -1, tokenizer.padding_side, tokenizer.max_entity_length
- )
- else:
- tokenized_inputs["labels"] = [ex[: tokenizer.max_entity_length] for ex in examples["labels_entity_spans"]]
- tokenized_inputs["original_entity_spans"] = [
- ex[: tokenizer.max_entity_length] for ex in examples["original_entity_spans"]
- ]
- tokenized_inputs[label_column_name] = [
- ex[: tokenizer.max_entity_length] for ex in examples[label_column_name]
- ]
-
- return tokenized_inputs
-
- with accelerator.main_process_first():
- raw_datasets = raw_datasets.map(
- compute_sentence_boundaries_for_luke,
- batched=True,
- desc="Adding sentence boundaries",
- )
- raw_datasets = raw_datasets.map(
- compute_entity_spans_for_luke,
- batched=True,
-            desc="Adding entity spans",
- )
-
- processed_raw_datasets = raw_datasets.map(
- tokenize_and_align_labels,
- batched=True,
- remove_columns=raw_datasets["train"].column_names,
- desc="Running tokenizer on dataset",
- )
-
- train_dataset = processed_raw_datasets["train"]
- eval_dataset = processed_raw_datasets["validation"]
-
- # Log a few random samples from the training set:
- for index in random.sample(range(len(train_dataset)), 3):
- logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
-
- # DataLoaders creation:
- if args.pad_to_max_length:
-        # If padding was already done to max length, we use the default data collator that will just convert everything
- # to tensors.
- data_collator = default_data_collator
- else:
- # Otherwise, `DataCollatorForTokenClassification` will apply dynamic padding for us (by padding to the maximum length of
- # the samples passed). When using mixed precision, we add `pad_to_multiple_of=8` to pad all tensors to multiple
- # of 8s, which will enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- data_collator = DataCollatorForLukeTokenClassification(
- tokenizer, pad_to_multiple_of=(8 if accelerator.use_fp16 else None)
- )
-
- train_dataloader = DataLoader(
- train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size
- )
- eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size)
-
- # Optimizer
- # Split weights in two groups, one with weight decay and the other not.
- no_decay = ["bias", "LayerNorm.weight"]
- optimizer_grouped_parameters = [
- {
- "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
- "weight_decay": args.weight_decay,
- },
- {
- "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
- "weight_decay": 0.0,
- },
- ]
- optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
-
- # Use the device given by the `accelerator` object.
- device = accelerator.device
- model.to(device)
-
- # Prepare everything with our `accelerator`.
- model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
- model, optimizer, train_dataloader, eval_dataloader
- )
-
-    # Note -> the training dataloader needs to be prepared before we grab its length below (because its length will
-    # be shorter in a multiprocess setting)
-
- # Scheduler and math around the number of training steps.
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
- if args.max_train_steps is None:
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
- else:
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
- lr_scheduler = get_scheduler(
- name=args.lr_scheduler_type,
- optimizer=optimizer,
- num_warmup_steps=args.num_warmup_steps,
- num_training_steps=args.max_train_steps,
- )
-
- # Metrics
- metric = load_metric("seqeval")
-
- def get_luke_labels(outputs, ner_tags, original_entity_spans):
- true_predictions = []
- true_labels = []
-
- for output, original_spans, tags in zip(outputs.logits, original_entity_spans, ner_tags):
- true_tags = [val for val in tags if val != -1]
- true_original_spans = [val for val in original_spans if val != (-1, -1)]
- max_indices = torch.argmax(output, axis=1)
- max_logits = torch.max(output, axis=1).values
- predictions = []
-
- for logit, index, span in zip(max_logits, max_indices, true_original_spans):
- if index != 0:
- predictions.append((logit, span, label_list[index]))
-
- predicted_sequence = [label_list[0]] * len(true_tags)
-
- for _, span, label in sorted(predictions, key=lambda o: o[0], reverse=True):
- if all([o == label_list[0] for o in predicted_sequence[span[0] : span[1]]]):
- predicted_sequence[span[0]] = label
- if span[1] - span[0] > 1:
- predicted_sequence[span[0] + 1 : span[1]] = [label] * (span[1] - span[0] - 1)
-
- true_predictions.append(predicted_sequence)
- true_labels.append([label_list[tag_id] for tag_id in true_tags])
-
- return true_predictions, true_labels
-
- def compute_metrics():
- results = metric.compute()
- if args.return_entity_level_metrics:
- # Unpack nested dictionaries
- final_results = {}
- for key, value in results.items():
- if isinstance(value, dict):
- for n, v in value.items():
- final_results[f"{key}_{n}"] = v
- else:
- final_results[key] = value
- return final_results
- else:
- return {
- "precision": results["overall_precision"],
- "recall": results["overall_recall"],
- "f1": results["overall_f1"],
- "accuracy": results["overall_accuracy"],
- }
-
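-    # Illustration only (made-up numbers): without --return_entity_level_metrics the result
-    # looks like {"precision": 0.91, "recall": 0.89, "f1": 0.90, "accuracy": 0.97}; with the
-    # flag set, per-entity keys such as "PER_f1" are flattened into the same dict.
-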
- # Train!
- total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
- logger.info("***** Running training *****")
- logger.info(f" Num examples = {len(train_dataset)}")
- logger.info(f" Num Epochs = {args.num_train_epochs}")
- logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}")
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
- logger.info(f" Total optimization steps = {args.max_train_steps}")
- # Only show the progress bar once on each machine.
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
- completed_steps = 0
-
- for epoch in range(args.num_train_epochs):
- model.train()
- for step, batch in enumerate(train_dataloader):
- _ = batch.pop("original_entity_spans")
- outputs = model(**batch)
- loss = outputs.loss
- loss = loss / args.gradient_accumulation_steps
- accelerator.backward(loss)
- if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1:
- optimizer.step()
- lr_scheduler.step()
- optimizer.zero_grad()
- progress_bar.update(1)
- completed_steps += 1
-
- if completed_steps >= args.max_train_steps:
- break
-
- model.eval()
- for step, batch in enumerate(eval_dataloader):
- original_entity_spans = batch.pop("original_entity_spans")
- with torch.no_grad():
- outputs = model(**batch)
-
- preds, refs = get_luke_labels(outputs, batch[label_column_name], original_entity_spans)
-
- metric.add_batch(
- predictions=preds,
- references=refs,
-        ) # predictions and references are expected to be a nested list of labels, not label_ids
-
- eval_metric = compute_metrics()
- accelerator.print(f"epoch {epoch}:", eval_metric)
-
- if args.push_to_hub and epoch < args.num_train_epochs - 1:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- repo.push_to_hub(
- commit_message=f"Training in progress epoch {epoch}", blocking=False, auto_lfs_prune=True
- )
-
- if args.output_dir is not None:
- accelerator.wait_for_everyone()
- unwrapped_model = accelerator.unwrap_model(model)
- unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)
- if accelerator.is_main_process:
- tokenizer.save_pretrained(args.output_dir)
- if args.push_to_hub:
- repo.push_to_hub(commit_message="End of training", auto_lfs_prune=True)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/chinhon/translation_eng2ch/app.py b/spaces/chinhon/translation_eng2ch/app.py
deleted file mode 100644
index f2b0232aaf8fd3473dddb1555e340ff5b75c2560..0000000000000000000000000000000000000000
--- a/spaces/chinhon/translation_eng2ch/app.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import gradio as gr
-import nltk
-import numpy as np
-import re
-import warnings
-
-from nltk.tokenize import sent_tokenize
-from transformers import (
- MarianTokenizer,
- MarianMTModel,
-)
-
-nltk.download('punkt')
-
-# define function for text cleaning
-def clean_text(text):
- text = text.encode("ascii", errors="ignore").decode(
- "ascii"
-    ) # remove non-ASCII characters (including Chinese)
- text = re.sub(r"\n", " ", text)
- text = re.sub(r"\n\n", " ", text)
- text = re.sub(r"\t", " ", text)
- text = re.sub(r"http\S+", "", text)
- text = re.sub(r"ADVERTISEMENT", " ", text)
- text = re.sub(
- r"Download our app or subscribe to our Telegram channel for the latest updates on the coronavirus outbreak: https://cna.asia/telegram",
- " ",
- text,
- )
- text = re.sub(
- r"Download our app or subscribe to our Telegram channel for the latest updates on the COVID-19 outbreak: https://cna.asia/telegram",
- " ",
- text,
- )
- text = text.strip(" ")
- text = re.sub(
- " +", " ", text
- ).strip() # get rid of multiple spaces and replace with a single
- return text
-
-
-# define function for translation
-modchoice = "Helsinki-NLP/opus-mt-en-zh"
-
-
-def translate(text):
-
- input_text = clean_text(text)
-
- tokenizer = MarianTokenizer.from_pretrained(modchoice)
-
- model = MarianMTModel.from_pretrained(modchoice)
-
- if input_text is None or text == "":
- return ("Error",)
-
- translated = model.generate(
- **tokenizer.prepare_seq2seq_batch(
- sent_tokenize(input_text),
- truncation=True,
- padding="longest",
- return_tensors="pt"
- )
- )
-
- tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
-
- return " ".join(tgt_text)
-
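-# Illustrative usage only (exact output depends on the opus-mt-en-zh checkpoint, so it is
-# not reproduced here): translate("The weather is nice today.") cleans the text, splits it
-# into sentences with sent_tokenize, and returns the joined Chinese translation.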
-
-gradio_ui = gr.Interface(
- fn=translate,
- title="English-to-Chinese translation",
- description="Translate English text into Chinese using MarianMT's opus-mt-en-zh model.",
- inputs=gr.inputs.Textbox(
- lines=20, label="Paste English text here"
- ),
- outputs=gr.outputs.Textbox(label="Chinese translation"),
- theme="huggingface",
-)
-
-gradio_ui.launch(enable_queue=True)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cymem/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cymem/__init__.py
deleted file mode 100644
index a55014485a1e94a14df8dfaf1bce1c2921d047c3..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cymem/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .about import *
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/core.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/core.py
deleted file mode 100644
index fb7f0e6ba9ee543d503d9e1cf5e1a61c39648086..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dataclasses_json/core.py
+++ /dev/null
@@ -1,383 +0,0 @@
-import copy
-import json
-import warnings
-from collections import defaultdict, namedtuple
-# noinspection PyProtectedMember
-from dataclasses import (MISSING,
- _is_dataclass_instance,
- fields,
- is_dataclass # type: ignore
- )
-from datetime import datetime, timezone
-from decimal import Decimal
-from enum import Enum
-from typing import (Any, Collection, Mapping, Union, get_type_hints,
- Tuple, TypeVar)
-from uuid import UUID
-
-from typing_inspect import is_union_type # type: ignore
-
-from dataclasses_json import cfg
-from dataclasses_json.utils import (_get_type_cons, _get_type_origin,
- _handle_undefined_parameters_safe,
- _is_collection, _is_mapping, _is_new_type,
- _is_optional, _isinstance_safe,
- _get_type_arg_param,
- _get_type_args,
- _NO_ARGS,
- _issubclass_safe)
-
-Json = Union[dict, list, str, int, float, bool, None]
-
-confs = ['encoder', 'decoder', 'mm_field', 'letter_case', 'exclude']
-FieldOverride = namedtuple('FieldOverride', confs)
-
-
-class _ExtendedEncoder(json.JSONEncoder):
- def default(self, o) -> Json:
- result: Json
- if _isinstance_safe(o, Collection):
- if _isinstance_safe(o, Mapping):
- result = dict(o)
- else:
- result = list(o)
- elif _isinstance_safe(o, datetime):
- result = o.timestamp()
- elif _isinstance_safe(o, UUID):
- result = str(o)
- elif _isinstance_safe(o, Enum):
- result = o.value
- elif _isinstance_safe(o, Decimal):
- result = str(o)
- else:
- result = json.JSONEncoder.default(self, o)
- return result
-
-
-def _user_overrides_or_exts(cls):
- global_metadata = defaultdict(dict)
- encoders = cfg.global_config.encoders
- decoders = cfg.global_config.decoders
- mm_fields = cfg.global_config.mm_fields
- for field in fields(cls):
- if field.type in encoders:
- global_metadata[field.name]['encoder'] = encoders[field.type]
- if field.type in decoders:
- global_metadata[field.name]['decoder'] = decoders[field.type]
- if field.type in mm_fields:
- global_metadata[field.name]['mm_fields'] = mm_fields[field.type]
- try:
- cls_config = (cls.dataclass_json_config
- if cls.dataclass_json_config is not None else {})
- except AttributeError:
- cls_config = {}
-
- overrides = {}
- for field in fields(cls):
- field_config = {}
- # first apply global overrides or extensions
- field_metadata = global_metadata[field.name]
- if 'encoder' in field_metadata:
- field_config['encoder'] = field_metadata['encoder']
- if 'decoder' in field_metadata:
- field_config['decoder'] = field_metadata['decoder']
- if 'mm_field' in field_metadata:
- field_config['mm_field'] = field_metadata['mm_field']
- # then apply class-level overrides or extensions
- field_config.update(cls_config)
- # last apply field-level overrides or extensions
- field_config.update(field.metadata.get('dataclasses_json', {}))
- overrides[field.name] = FieldOverride(*map(field_config.get, confs))
- return overrides
-
-
-def _encode_json_type(value, default=_ExtendedEncoder().default):
- if isinstance(value, Json.__args__): # type: ignore
- if isinstance(value, list):
- return [_encode_json_type(i) for i in value]
- elif isinstance(value, dict):
- return {k: _encode_json_type(v) for k, v in value.items()}
- else:
- return value
- return default(value)
-
-
-def _encode_overrides(kvs, overrides, encode_json=False):
- override_kvs = {}
- for k, v in kvs.items():
- if k in overrides:
- exclude = overrides[k].exclude
- # If the exclude predicate returns true, the key should be
- # excluded from encoding, so skip the rest of the loop
- if exclude and exclude(v):
- continue
- letter_case = overrides[k].letter_case
- original_key = k
- k = letter_case(k) if letter_case is not None else k
-
- encoder = overrides[original_key].encoder
- v = encoder(v) if encoder is not None else v
-
- if encode_json:
- v = _encode_json_type(v)
- override_kvs[k] = v
- return override_kvs
-
-
-def _decode_letter_case_overrides(field_names, overrides):
- """Override letter case of field names for encode/decode"""
- names = {}
- for field_name in field_names:
- field_override = overrides.get(field_name)
- if field_override is not None:
- letter_case = field_override.letter_case
- if letter_case is not None:
- names[letter_case(field_name)] = field_name
- return names
-
-
-def _decode_dataclass(cls, kvs, infer_missing):
- if _isinstance_safe(kvs, cls):
- return kvs
- overrides = _user_overrides_or_exts(cls)
- kvs = {} if kvs is None and infer_missing else kvs
- field_names = [field.name for field in fields(cls)]
- decode_names = _decode_letter_case_overrides(field_names, overrides)
- kvs = {decode_names.get(k, k): v for k, v in kvs.items()}
- missing_fields = {field for field in fields(cls) if field.name not in kvs}
-
- for field in missing_fields:
- if field.default is not MISSING:
- kvs[field.name] = field.default
- elif field.default_factory is not MISSING:
- kvs[field.name] = field.default_factory()
- elif infer_missing:
- kvs[field.name] = None
-
- # Perform undefined parameter action
- kvs = _handle_undefined_parameters_safe(cls, kvs, usage="from")
-
- init_kwargs = {}
- types = get_type_hints(cls)
- for field in fields(cls):
- # The field should be skipped from being added
- # to init_kwargs as it's not intended as a constructor argument.
- if not field.init:
- continue
-
- field_value = kvs[field.name]
- field_type = types[field.name]
- if field_value is None:
- if not _is_optional(field_type):
- warning = (
- f"value of non-optional type {field.name} detected "
- f"when decoding {cls.__name__}"
- )
- if infer_missing:
- warnings.warn(
- f"Missing {warning} and was defaulted to None by "
- f"infer_missing=True. "
- f"Set infer_missing=False (the default) to prevent "
- f"this behavior.", RuntimeWarning
- )
- else:
- warnings.warn(
- f"`NoneType` object {warning}.", RuntimeWarning
- )
- init_kwargs[field.name] = field_value
- continue
-
- while True:
- if not _is_new_type(field_type):
- break
-
- field_type = field_type.__supertype__
-
- if (field.name in overrides
- and overrides[field.name].decoder is not None):
- # FIXME hack
- if field_type is type(field_value):
- init_kwargs[field.name] = field_value
- else:
- init_kwargs[field.name] = overrides[field.name].decoder(
- field_value)
- elif is_dataclass(field_type):
- # FIXME this is a band-aid to deal with the value already being
- # serialized when handling nested marshmallow schema
- # proper fix is to investigate the marshmallow schema generation
- # code
- if is_dataclass(field_value):
- value = field_value
- else:
- value = _decode_dataclass(field_type, field_value,
- infer_missing)
- init_kwargs[field.name] = value
- elif _is_supported_generic(field_type) and field_type != str:
- init_kwargs[field.name] = _decode_generic(field_type,
- field_value,
- infer_missing)
- else:
- init_kwargs[field.name] = _support_extended_types(field_type,
- field_value)
-
- return cls(**init_kwargs)
-
-
-def _support_extended_types(field_type, field_value):
- if _issubclass_safe(field_type, datetime):
- # FIXME this is a hack to deal with mm already decoding
- # the issue is we want to leverage mm fields' missing argument
- # but need this for the object creation hook
- if isinstance(field_value, datetime):
- res = field_value
- else:
- tz = datetime.now(timezone.utc).astimezone().tzinfo
- res = datetime.fromtimestamp(field_value, tz=tz)
- elif _issubclass_safe(field_type, Decimal):
- res = (field_value
- if isinstance(field_value, Decimal)
- else Decimal(field_value))
- elif _issubclass_safe(field_type, UUID):
- res = (field_value
- if isinstance(field_value, UUID)
- else UUID(field_value))
- elif _issubclass_safe(field_type, (int, float, str, bool)):
- res = (field_value
- if isinstance(field_value, field_type)
- else field_type(field_value))
- else:
- res = field_value
- return res
-
-
-def _is_supported_generic(type_):
- if type_ is _NO_ARGS:
- return False
- not_str = not _issubclass_safe(type_, str)
- is_enum = _issubclass_safe(type_, Enum)
- return (not_str and _is_collection(type_)) or _is_optional(
- type_) or is_union_type(type_) or is_enum
-
-
-def _decode_generic(type_, value, infer_missing):
- if value is None:
- res = value
- elif _issubclass_safe(type_, Enum):
- # Convert to an Enum using the type as a constructor.
- # Assumes a direct match is found.
- res = type_(value)
- # FIXME this is a hack to fix a deeper underlying issue. A refactor is due.
- elif _is_collection(type_):
- if _is_mapping(type_):
- k_type, v_type = _get_type_args(type_, (Any, Any))
- # a mapping type has `.keys()` and `.values()`
- # (see collections.abc)
- ks = _decode_dict_keys(k_type, value.keys(), infer_missing)
- vs = _decode_items(v_type, value.values(), infer_missing)
- xs = zip(ks, vs)
- else:
- xs = _decode_items(_get_type_arg_param(type_, 0),
- value, infer_missing)
-
- # get the constructor if using corresponding generic type in `typing`
- # otherwise fallback on constructing using type_ itself
- try:
- res = _get_type_cons(type_)(xs)
- except (TypeError, AttributeError):
- res = type_(xs)
- else: # Optional or Union
- _args = _get_type_args(type_)
- if _args is _NO_ARGS:
- # Any, just accept
- res = value
- elif _is_optional(type_) and len(_args) == 2: # Optional
- type_arg = _get_type_arg_param(type_, 0)
- if is_dataclass(type_arg) or is_dataclass(value):
- res = _decode_dataclass(type_arg, value, infer_missing)
- elif _is_supported_generic(type_arg):
- res = _decode_generic(type_arg, value, infer_missing)
- else:
- res = _support_extended_types(type_arg, value)
- else: # Union (already decoded or unsupported 'from_json' used)
- res = value
- return res
-
-
-def _decode_dict_keys(key_type, xs, infer_missing):
- """
- Because JSON object keys must be strs, we need the extra step of decoding
- them back into the user's chosen python type
- """
- decode_function = key_type
- # handle NoneType keys... it's weird to type a Dict as NoneType keys
- # but it's valid...
- # Issue #341 and PR #346:
- # This is a special case for Python 3.7 and Python 3.8.
-    # For some reason, "unbound" dicts are counted
- # as having key type parameter to be TypeVar('KT')
- if key_type is None or key_type == Any or isinstance(key_type, TypeVar):
- decode_function = key_type = (lambda x: x)
- # handle a nested python dict that has tuples for keys. E.g. for
- # Dict[Tuple[int], int], key_type will be typing.Tuple[int], but
- # decode_function should be tuple, so map() doesn't break.
- #
- # Note: _get_type_origin() will return typing.Tuple for python
- # 3.6 and tuple for 3.7 and higher.
- elif _get_type_origin(key_type) in {tuple, Tuple}:
- decode_function = tuple
- key_type = key_type
-
- return map(decode_function, _decode_items(key_type, xs, infer_missing))
-
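-# Illustration (not part of the module): for a field annotated Dict[int, str], JSON supplies
-# {"1": "a"} with a string key; _decode_dict_keys maps "1" back through int, so the decoded
-# mapping is {1: "a"}.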
-
-def _decode_items(type_arg, xs, infer_missing):
- """
- This is a tricky situation where we need to check both the annotated
- type info (which is usually a type from `typing`) and check the
- value's type directly using `type()`.
-
- If the type_arg is a generic we can use the annotated type, but if the
- type_arg is a typevar we need to extract the reified type information
- hence the check of `is_dataclass(vs)`
- """
- if is_dataclass(type_arg) or is_dataclass(xs):
- items = (_decode_dataclass(type_arg, x, infer_missing)
- for x in xs)
- elif _is_supported_generic(type_arg):
- items = (_decode_generic(type_arg, x, infer_missing) for x in xs)
- else:
- items = xs
- return items
-
-
-def _asdict(obj, encode_json=False):
- """
- A re-implementation of `asdict` (based on the original in the `dataclasses`
- source) to support arbitrary Collection and Mapping types.
- """
- if _is_dataclass_instance(obj):
- result = []
- overrides = _user_overrides_or_exts(obj)
- for field in fields(obj):
- if overrides[field.name].encoder:
- value = getattr(obj, field.name)
- else:
- value = _asdict(
- getattr(obj, field.name),
- encode_json=encode_json
- )
- result.append((field.name, value))
-
- result = _handle_undefined_parameters_safe(cls=obj, kvs=dict(result),
- usage="to")
- return _encode_overrides(dict(result), _user_overrides_or_exts(obj),
- encode_json=encode_json)
- elif isinstance(obj, Mapping):
- return dict((_asdict(k, encode_json=encode_json),
- _asdict(v, encode_json=encode_json)) for k, v in
- obj.items())
- elif isinstance(obj, Collection) and not isinstance(obj, str) \
- and not isinstance(obj, bytes):
- return list(_asdict(v, encode_json=encode_json) for v in obj)
- else:
- return copy.deepcopy(obj)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/M_E_T_A_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/M_E_T_A_.py
deleted file mode 100644
index 6631e2f30c3b24b952ee9a9c57c7355ba09a0885..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/M_E_T_A_.py
+++ /dev/null
@@ -1,346 +0,0 @@
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import byteord, safeEval
-from . import DefaultTable
-import pdb
-import struct
-
-
-METAHeaderFormat = """
- > # big endian
- tableVersionMajor: H
- tableVersionMinor: H
- metaEntriesVersionMajor: H
- metaEntriesVersionMinor: H
- unicodeVersion: L
- metaFlags: H
- nMetaRecs: H
-"""
-# This record is followed by nMetaRecs of METAGlyphRecordFormat.
-# This in turn is followed by as many METAStringRecordFormat entries
-# as specified by the METAGlyphRecordFormat entries.
-# This is followed by the strings specified in the METAStringRecordFormat.
-METAGlyphRecordFormat = """
- > # big endian
- glyphID: H
- nMetaEntry: H
-"""
-# This record is followed by a variable data length field:
-# USHORT or ULONG hdrOffset
-# Offset from start of META table to the beginning
-# of this glyph's array of ns Metadata string entries.
-# Size determined by metaFlags field
-# METAGlyphRecordFormat entries must be sorted by glyph ID
-
-METAStringRecordFormat = """
- > # big endian
- labelID: H
- stringLen: H
-"""
-# This record is followed by a variable data length field:
-# USHORT or ULONG stringOffset
-# METAStringRecordFormat entries must be sorted in order of labelID
-# There may be more than one entry with the same labelID
-# There may be more than one string with the same content.
-
-# Strings shall be Unicode UTF-8 encoded, and null-terminated.
-
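-# Minimal sketch (illustration only, not part of fontTools): one glyph record header laid
-# out as described above, with made-up values and a 16-bit hdrOffset (metaFlags bit 0
-# clear); a ULONG offset would use ">L" for the trailing field instead of ">H".
-_EXAMPLE_GLYPH_RECORD = struct.pack(">HH", 17, 1) + struct.pack(">H", 44)
-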
-METALabelDict = {
- 0: "MojikumiX4051", # An integer in the range 1-20
- 1: "UNIUnifiedBaseChars",
- 2: "BaseFontName",
- 3: "Language",
- 4: "CreationDate",
- 5: "FoundryName",
- 6: "FoundryCopyright",
- 7: "OwnerURI",
- 8: "WritingScript",
- 10: "StrokeCount",
- 11: "IndexingRadical",
-}
-
-
-def getLabelString(labelID):
- try:
- label = METALabelDict[labelID]
- except KeyError:
- label = "Unknown label"
- return str(label)
-
-
-class table_M_E_T_A_(DefaultTable.DefaultTable):
-
- dependencies = []
-
- def decompile(self, data, ttFont):
- dummy, newData = sstruct.unpack2(METAHeaderFormat, data, self)
- self.glyphRecords = []
- for i in range(self.nMetaRecs):
- glyphRecord, newData = sstruct.unpack2(
- METAGlyphRecordFormat, newData, GlyphRecord()
- )
- if self.metaFlags == 0:
- [glyphRecord.offset] = struct.unpack(">H", newData[:2])
- newData = newData[2:]
- elif self.metaFlags == 1:
-                [glyphRecord.offset] = struct.unpack(">L", newData[:4])  # ULONG offset (4 bytes) when metaFlags == 1
- newData = newData[4:]
- else:
- assert 0, (
- "The metaFlags field in the META table header has a value other than 0 or 1 :"
- + str(self.metaFlags)
- )
- glyphRecord.stringRecs = []
- newData = data[glyphRecord.offset :]
- for j in range(glyphRecord.nMetaEntry):
- stringRec, newData = sstruct.unpack2(
- METAStringRecordFormat, newData, StringRecord()
- )
- if self.metaFlags == 0:
- [stringRec.offset] = struct.unpack(">H", newData[:2])
- newData = newData[2:]
- else:
-                    [stringRec.offset] = struct.unpack(">L", newData[:4])  # ULONG offset (4 bytes) when metaFlags is nonzero
- newData = newData[4:]
- stringRec.string = data[
- stringRec.offset : stringRec.offset + stringRec.stringLen
- ]
- glyphRecord.stringRecs.append(stringRec)
- self.glyphRecords.append(glyphRecord)
-
- def compile(self, ttFont):
- offsetOK = 0
- self.nMetaRecs = len(self.glyphRecords)
- count = 0
- while offsetOK != 1:
- count = count + 1
- if count > 4:
- pdb.set_trace()
- metaData = sstruct.pack(METAHeaderFormat, self)
- stringRecsOffset = len(metaData) + self.nMetaRecs * (
- 6 + 2 * (self.metaFlags & 1)
- )
- stringRecSize = 6 + 2 * (self.metaFlags & 1)
- for glyphRec in self.glyphRecords:
- glyphRec.offset = stringRecsOffset
- if (glyphRec.offset > 65535) and ((self.metaFlags & 1) == 0):
- self.metaFlags = self.metaFlags + 1
- offsetOK = -1
- break
- metaData = metaData + glyphRec.compile(self)
- stringRecsOffset = stringRecsOffset + (
- glyphRec.nMetaEntry * stringRecSize
- )
- # this will be the String Record offset for the next GlyphRecord.
- if offsetOK == -1:
- offsetOK = 0
- continue
-
-            # metaData now contains the header and all of the GlyphRecords. Its length should be
- # the offset to the first StringRecord.
- stringOffset = stringRecsOffset
- for glyphRec in self.glyphRecords:
- assert glyphRec.offset == len(
- metaData
- ), "Glyph record offset did not compile correctly! for rec:" + str(
- glyphRec
- )
- for stringRec in glyphRec.stringRecs:
- stringRec.offset = stringOffset
- if (stringRec.offset > 65535) and ((self.metaFlags & 1) == 0):
- self.metaFlags = self.metaFlags + 1
- offsetOK = -1
- break
- metaData = metaData + stringRec.compile(self)
- stringOffset = stringOffset + stringRec.stringLen
- if offsetOK == -1:
- offsetOK = 0
- continue
-
- if ((self.metaFlags & 1) == 1) and (stringOffset < 65536):
- self.metaFlags = self.metaFlags - 1
- continue
- else:
- offsetOK = 1
-
- # metaData now contains the header and all of the GlyphRecords and all of the String Records.
- # Its length should be the offset to the first string datum.
- for glyphRec in self.glyphRecords:
- for stringRec in glyphRec.stringRecs:
- assert stringRec.offset == len(
- metaData
- ), "String offset did not compile correctly! for string:" + str(
- stringRec.string
- )
- metaData = metaData + stringRec.string
-
- return metaData
-
- def toXML(self, writer, ttFont):
- writer.comment(
- "Lengths and number of entries in this table will be recalculated by the compiler"
- )
- writer.newline()
- formatstring, names, fixes = sstruct.getformat(METAHeaderFormat)
- for name in names:
- value = getattr(self, name)
- writer.simpletag(name, value=value)
- writer.newline()
- for glyphRec in self.glyphRecords:
- glyphRec.toXML(writer, ttFont)
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "GlyphRecord":
- if not hasattr(self, "glyphRecords"):
- self.glyphRecords = []
- glyphRec = GlyphRecord()
- self.glyphRecords.append(glyphRec)
- for element in content:
- if isinstance(element, str):
- continue
- name, attrs, content = element
- glyphRec.fromXML(name, attrs, content, ttFont)
- glyphRec.offset = -1
- glyphRec.nMetaEntry = len(glyphRec.stringRecs)
- else:
- setattr(self, name, safeEval(attrs["value"]))
-
-
-class GlyphRecord(object):
- def __init__(self):
- self.glyphID = -1
- self.nMetaEntry = -1
- self.offset = -1
- self.stringRecs = []
-
- def toXML(self, writer, ttFont):
- writer.begintag("GlyphRecord")
- writer.newline()
- writer.simpletag("glyphID", value=self.glyphID)
- writer.newline()
- writer.simpletag("nMetaEntry", value=self.nMetaEntry)
- writer.newline()
- for stringRec in self.stringRecs:
- stringRec.toXML(writer, ttFont)
- writer.endtag("GlyphRecord")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- if name == "StringRecord":
- stringRec = StringRecord()
- self.stringRecs.append(stringRec)
- for element in content:
- if isinstance(element, str):
- continue
- stringRec.fromXML(name, attrs, content, ttFont)
- stringRec.stringLen = len(stringRec.string)
- else:
- setattr(self, name, safeEval(attrs["value"]))
-
- def compile(self, parentTable):
- data = sstruct.pack(METAGlyphRecordFormat, self)
- if parentTable.metaFlags == 0:
- datum = struct.pack(">H", self.offset)
- elif parentTable.metaFlags == 1:
- datum = struct.pack(">L", self.offset)
- data = data + datum
- return data
-
- def __repr__(self):
- return (
- "GlyphRecord[ glyphID: "
- + str(self.glyphID)
- + ", nMetaEntry: "
- + str(self.nMetaEntry)
- + ", offset: "
- + str(self.offset)
- + " ]"
- )
-
-
-# XXX The following two functions are really broken around UTF-8 vs Unicode
-
-
-def mapXMLToUTF8(string):
- uString = str()
- strLen = len(string)
- i = 0
- while i < strLen:
- prefixLen = 0
-        if string[i : i + 3] == "&#x":
-            prefixLen = 3
-        elif string[i : i + 7] == "&amp;#x":
- prefixLen = 7
- if prefixLen:
- i = i + prefixLen
- j = i
- while string[i] != ";":
- i = i + 1
- valStr = string[j:i]
-
- uString = uString + chr(eval("0x" + valStr))
- else:
- uString = uString + chr(byteord(string[i]))
- i = i + 1
-
- return uString.encode("utf_8")
-
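-# Illustration (not in fontTools): mapXMLToUTF8("A&#x4e2d;") returns b"A\xe4\xb8\xad"
-# (UTF-8 for "A" followed by U+4E2D), and mapUTF8toXML(b"A\xe4\xb8\xad") returns "A&#x4e2d;".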
-
-def mapUTF8toXML(string):
- uString = string.decode("utf_8")
- string = ""
- for uChar in uString:
- i = ord(uChar)
- if (i < 0x80) and (i > 0x1F):
- string = string + uChar
- else:
-            string = string + "&#x" + hex(i)[2:] + ";"
- return string
-
-
-class StringRecord(object):
- def toXML(self, writer, ttFont):
- writer.begintag("StringRecord")
- writer.newline()
- writer.simpletag("labelID", value=self.labelID)
- writer.comment(getLabelString(self.labelID))
- writer.newline()
- writer.newline()
- writer.simpletag("string", value=mapUTF8toXML(self.string))
- writer.newline()
- writer.endtag("StringRecord")
- writer.newline()
-
- def fromXML(self, name, attrs, content, ttFont):
- for element in content:
- if isinstance(element, str):
- continue
- name, attrs, content = element
- value = attrs["value"]
- if name == "string":
- self.string = mapXMLToUTF8(value)
- else:
- setattr(self, name, safeEval(value))
-
- def compile(self, parentTable):
- data = sstruct.pack(METAStringRecordFormat, self)
- if parentTable.metaFlags == 0:
- datum = struct.pack(">H", self.offset)
- elif parentTable.metaFlags == 1:
- datum = struct.pack(">L", self.offset)
- data = data + datum
- return data
-
- def __repr__(self):
- return (
- "StringRecord [ labelID: "
- + str(self.labelID)
- + " aka "
- + getLabelString(self.labelID)
- + ", offset: "
- + str(self.offset)
- + ", length: "
- + str(self.stringLen)
- + ", string: "
- + self.string
- + " ]"
- )
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/BlockTitle-8596cf63.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/BlockTitle-8596cf63.js
deleted file mode 100644
index 8e02dd7401fc7513a8ed6ef1e2674f469d0a703c..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/BlockTitle-8596cf63.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as h,e as k,s as g,a9 as w,N as $,O as B,m as I,K as d,U as _,p as c,ab as N,ac as S,ad as j,z as r,u as q,v as m,y as v,A as p,k as z,o as A,x as C,P as K,R as O}from"./index-f877dfd5.js";import{I as P}from"./Info-f92267f9.js";import"./Button-11a87b79.js";function b(a){let e,l;return e=new P({props:{$$slots:{default:[R]},$$scope:{ctx:a}}}),{c(){z(e.$$.fragment)},m(n,o){A(e,n,o),l=!0},p(n,o){const u={};o&10&&(u.$$scope={dirty:o,ctx:n}),e.$set(u)},i(n){l||(r(e.$$.fragment,n),l=!0)},o(n){m(e.$$.fragment,n),l=!1},d(n){C(e,n)}}}function R(a){let e;return{c(){e=K(a[1])},m(l,n){c(l,e,n)},p(l,n){n&2&&O(e,l[1])},d(l){l&&p(e)}}}function T(a){let e,l,n,o;const u=a[2].default,f=w(u,a,a[3],null);let s=a[1]&&b(a);return{c(){e=$("span"),f&&f.c(),l=B(),s&&s.c(),n=I(),d(e,"data-testid","block-info"),d(e,"class","svelte-1gfkn6j"),_(e,"sr-only",!a[0]),_(e,"hide",!a[0]),_(e,"has-info",a[1]!=null)},m(t,i){c(t,e,i),f&&f.m(e,null),c(t,l,i),s&&s.m(t,i),c(t,n,i),o=!0},p(t,[i]){f&&f.p&&(!o||i&8)&&N(f,u,t,t[3],o?j(u,t[3],i,null):S(t[3]),null),(!o||i&1)&&_(e,"sr-only",!t[0]),(!o||i&1)&&_(e,"hide",!t[0]),(!o||i&2)&&_(e,"has-info",t[1]!=null),t[1]?s?(s.p(t,i),i&2&&r(s,1)):(s=b(t),s.c(),r(s,1),s.m(n.parentNode,n)):s&&(q(),m(s,1,1,()=>{s=null}),v())},i(t){o||(r(f,t),r(s),o=!0)},o(t){m(f,t),m(s),o=!1},d(t){t&&(p(e),p(l),p(n)),f&&f.d(t),s&&s.d(t)}}}function U(a,e,l){let{$$slots:n={},$$scope:o}=e,{show_label:u=!0}=e,{info:f=void 0}=e;return a.$$set=s=>{"show_label"in s&&l(0,u=s.show_label),"info"in s&&l(1,f=s.info),"$$scope"in s&&l(3,o=s.$$scope)},[u,f,n,o]}class G extends h{constructor(e){super(),k(this,e,U,T,g,{show_label:0,info:1})}}export{G as B};
-//# sourceMappingURL=BlockTitle-8596cf63.js.map
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Column-2853eb31.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Column-2853eb31.css
deleted file mode 100644
index 8657e4c7112cc9a8232f875b00f9cf9aaac5e9f6..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Column-2853eb31.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-vt1mxs{display:flex;position:relative;flex-direction:column}div.svelte-vt1mxs>*,div.svelte-vt1mxs>.form>*{width:var(--size-full)}.gap.svelte-vt1mxs{gap:var(--layout-gap)}.hide.svelte-vt1mxs{display:none}.compact.svelte-vt1mxs>*,.compact.svelte-vt1mxs .box{border-radius:0}.compact.svelte-vt1mxs,.panel.svelte-vt1mxs{border:solid var(--panel-border-width) var(--panel-border-color);border-radius:var(--container-radius);background:var(--panel-background-fill);padding:var(--spacing-lg)}
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js
deleted file mode 100644
index ea59a3c30d1a396de1e3dcd8e62be35a7e273f73..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js
+++ /dev/null
@@ -1,2 +0,0 @@
-function l(e,n,a){if(e==null)return null;if(typeof e=="string")return{name:"file_data",data:e};if(Array.isArray(e)){const s=[];for(const t of e)t===null?s.push(null):s.push(l(t,n,a));return s}else e.is_file&&(a==null?e.data=n+"/file="+e.name:e.data="/proxy="+a+"file="+e.name);return e}const r=e=>{const n=new FileReader;return n.readAsDataURL(e),new Promise(a=>{n.onloadend=()=>{a(n.result)}})};export{r as b,l as n};
-//# sourceMappingURL=ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js.map
diff --git a/spaces/cihyFjudo/fairness-paper-search/Applemacsoft Drm Converter Keygen Music How to Crack iTunes DRM and Enjoy Your Movies.md b/spaces/cihyFjudo/fairness-paper-search/Applemacsoft Drm Converter Keygen Music How to Crack iTunes DRM and Enjoy Your Movies.md
deleted file mode 100644
index d0b3125d0df4eb22cf31f16260667b3dfc3cf5e5..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Applemacsoft Drm Converter Keygen Music How to Crack iTunes DRM and Enjoy Your Movies.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
In such cases, is it possible to remove DRM restrictions from iTunes movies, TV shows, and music videos so that you can play your iTunes purchased and even rented items on any devices offline without compatibility limitation? This article is written to deal with the problem. I'll cover a lot of information you need to know about removing DRM from iTunes videos. I'll also run you through a 5-step process for stripping DRM from iTunes movies with the best M4V converter as well.
Combined with TunesKit DRM M4V Converter for Mac, DRM Audiobook Converter for Mac, iBook Copy for Mac, and Apple Music Converter for Mac, this 4-in-one DRMmedia converter bundle is able to assist you bypass DRM lock from iTunes M4V movies, TV shows, music videos, audiobooks, iBooks as well as Apple Music M4P tracks on Mac OS X and macOS 10.12 with ease.
-
Combined with TunesKit DRM M4V Converter for Windows and iTunes DRM M4V Converter for Mac, this DRM M4V converter bundle will help you remove DRM protection from encrypted iTunes M4V movies, TV shows and music videos losslessly to MP4, MOV, AVI, WMV, MP3, etc on both Windows and Mac platforms.
-
To crack the Apple Music DRM lock, an optimal DRM removal tool is indispensable. While throughout most of the DRM media converter on the market, nothing much softwares that are capable of bypassing Apple Music DRM protections, except for MacX MediaTrans. It works like a charm to remove DRM from Apple Music, iTunes, auto convert Apple Music M4P to MP3, AAC for free playback on Android, Google, Windows mobiles, VLC players or other non-Apple devices. And if you don't want to decrypt the Apple Music tracks or albums, you can use this M4P DRM converter as a music transfer App to transfer purchases from iPhone to Mac and vice versa with original quality reserved.
-
To crack DRM from iTunes protected music, you can seek help from NoteBurner iTunes Audio Converter , it is a quite professional DRM audio converter, which can remove or crack DRM from iTunes music, and convert any audio which can be played in iTunes, such as iTunes music, audiobooks, Apple Music files to MP3, AAC, FLAC, AIFF, WAV, or ALAC format.
-
-
Free Apple Music Converter by ThunderSoft is a music converter tool for Windows that helps convert DRM-protected Apple music into audio formats that could be played on non-Apple audio players such as Zune, PSP and also mobile devices. The music files can be directly imported from iTunes.
-
The UkeySoft Spotify Music Converter application works on both Windows and Mac computer, with this Spotify to MP3 converter, you can download and convert Spotify music to MP3, M4A, WAV, FLAC or any other format, whether Free or Premium subscription. UkeySoft Spotify Music Converter is a very simple and easy-to-use the software. It has an intuitive and clean UI as well. It is perhaps the best and most effective converter software we have ever used. Here are all the features the application provides.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/How RegCure Pro 3.3.30 Crack Can Make Your PC Run Like New Again.md b/spaces/cihyFjudo/fairness-paper-search/How RegCure Pro 3.3.30 Crack Can Make Your PC Run Like New Again.md
deleted file mode 100644
index 64cc256f60a279bf4866974408bbb6cb2a52ab2b..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/How RegCure Pro 3.3.30 Crack Can Make Your PC Run Like New Again.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Toad Oracle 64 bit Free Download Crack.12 How to Perform Daily Tasks Efficiently and Accurately with Toad.md b/spaces/cihyFjudo/fairness-paper-search/Toad Oracle 64 bit Free Download Crack.12 How to Perform Daily Tasks Efficiently and Accurately with Toad.md
deleted file mode 100644
index b09a2a378918ce05360979eccf7c87369f31a033..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Toad Oracle 64 bit Free Download Crack.12 How to Perform Daily Tasks Efficiently and Accurately with Toad.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
Congratulations! You have successfully completed your Toad for Oracle download and installation. The Toad for Oracle download process involved selecting the appropriate Edition, according to your needs, obtaining a license key, choosing an installer type, downloading the installer, and using the installer to install Toad for Oracle 13. This article demonstrates the complete procedure for Toad for Oracle download using the free Trial version.
Freeware programs can be downloaded used free of charge and without any time limitations. Freeware products can be used free of charge for both personal and professional (commercial use).
-
This license is commonly used for video games and it allows users to download and play the game for free. Basically, a product is offered Free to Play (Freemium) and the user can decide if he wants to pay the money (Premium) for additional features, services, virtual or physical goods that expand the functionality of the game. In some cases, ads may be show to the users.
-
TOAD for Oracle is a Developer Tools application like PyCharm, RazorSQL, and Node.js from Quest Software Inc.. It has a simple and basic user interface, and most importantly, it is free to download. TOAD for Oracle is an efficient software that is recommended by many Windows PC users.
-
TOAD for Oracle is one of the most popular Developer Tools alongside JustDecompile, Artifactory, and Balsamiq. This app has its advantages compared to other Developer Tools applications. TOAD for Oracle is lightweight and easy to use, simple for beginners and powerful for professionals. TOAD for Oracle application is free to download and offers easy-to-install, easy-to-use, secure, and reliable Developer Tools applications.
-
-
Q: How do I access the free TOAD for Oracle download for Windows PC? A: It is easy! Just click the free TOAD for Oracle download button in the above of this page. Clicking the download button will start the installer to download TOAD for Oracle free for a PC/laptop.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/qtPen.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/qtPen.py
deleted file mode 100644
index eb13d03d2f611de4ce0b29ce3995f85e8f9e491a..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/pens/qtPen.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from fontTools.pens.basePen import BasePen
-
-
-__all__ = ["QtPen"]
-
-
-class QtPen(BasePen):
- def __init__(self, glyphSet, path=None):
- BasePen.__init__(self, glyphSet)
- if path is None:
- from PyQt5.QtGui import QPainterPath
-
- path = QPainterPath()
- self.path = path
-
- def _moveTo(self, p):
- self.path.moveTo(*p)
-
- def _lineTo(self, p):
- self.path.lineTo(*p)
-
- def _curveToOne(self, p1, p2, p3):
- self.path.cubicTo(*p1, *p2, *p3)
-
- def _qCurveToOne(self, p1, p2):
- self.path.quadTo(*p1, *p2)
-
- def _closePath(self):
- self.path.closeSubpath()
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr_template.c
deleted file mode 100644
index cdca402f04c10114052e15674a6fabf2bee2d5e2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacsbr_template.c
+++ /dev/null
@@ -1,1604 +0,0 @@
-/*
- * AAC Spectral Band Replication decoding functions
- * Copyright (c) 2008-2009 Robert Swain ( rob opendot cl )
- * Copyright (c) 2009-2010 Alex Converse
- *
- * Fixed point code
- * Copyright (c) 2013
- * MIPS Technologies, Inc., California.
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AAC Spectral Band Replication decoding functions
- * @author Robert Swain ( rob opendot cl )
- * @author Stanislav Ocovaj ( stanislav.ocovaj@imgtec.com )
- * @author Zoran Basaric ( zoran.basaric@imgtec.com )
- */
-
-#include "libavutil/qsort.h"
-
-static av_cold void aacsbr_tableinit(void)
-{
- int n;
-
- for (n = 0; n < 320; n++)
- sbr_qmf_window_ds[n] = sbr_qmf_window_us[2*n];
-}
-
-av_cold void AAC_RENAME(ff_aac_sbr_init)(void)
-{
- static const struct {
- const void *sbr_codes, *sbr_bits;
- const unsigned int table_size, elem_size;
- } sbr_tmp[] = {
- SBR_VLC_ROW(t_huffman_env_1_5dB),
- SBR_VLC_ROW(f_huffman_env_1_5dB),
- SBR_VLC_ROW(t_huffman_env_bal_1_5dB),
- SBR_VLC_ROW(f_huffman_env_bal_1_5dB),
- SBR_VLC_ROW(t_huffman_env_3_0dB),
- SBR_VLC_ROW(f_huffman_env_3_0dB),
- SBR_VLC_ROW(t_huffman_env_bal_3_0dB),
- SBR_VLC_ROW(f_huffman_env_bal_3_0dB),
- SBR_VLC_ROW(t_huffman_noise_3_0dB),
- SBR_VLC_ROW(t_huffman_noise_bal_3_0dB),
- };
-
- // SBR VLC table initialization
- SBR_INIT_VLC_STATIC(0, 1098);
- SBR_INIT_VLC_STATIC(1, 1092);
- SBR_INIT_VLC_STATIC(2, 768);
- SBR_INIT_VLC_STATIC(3, 1026);
- SBR_INIT_VLC_STATIC(4, 1058);
- SBR_INIT_VLC_STATIC(5, 1052);
- SBR_INIT_VLC_STATIC(6, 544);
- SBR_INIT_VLC_STATIC(7, 544);
- SBR_INIT_VLC_STATIC(8, 592);
- SBR_INIT_VLC_STATIC(9, 512);
-
- aacsbr_tableinit();
-
- AAC_RENAME(ff_ps_init)();
-}
-
-/** Places SBR in pure upsampling mode. */
-static void sbr_turnoff(SpectralBandReplication *sbr) {
- sbr->start = 0;
- sbr->ready_for_dequant = 0;
- // Init defaults used in pure upsampling mode
- sbr->kx[1] = 32; //Typo in spec, kx' inits to 32
- sbr->m[1] = 0;
- // Reset values for first SBR header
- sbr->data[0].e_a[1] = sbr->data[1].e_a[1] = -1;
- memset(&sbr->spectrum_params, -1, sizeof(SpectrumParameters));
-}
-
-av_cold int AAC_RENAME(ff_aac_sbr_ctx_init)(AACContext *ac, SpectralBandReplication *sbr, int id_aac)
-{
- int ret;
- float scale;
-
- if (sbr->mdct)
- return 0;
-
- sbr->kx[0] = sbr->kx[1];
- sbr->id_aac = id_aac;
- sbr_turnoff(sbr);
- sbr->data[0].synthesis_filterbank_samples_offset = SBR_SYNTHESIS_BUF_SIZE - (1280 - 128);
- sbr->data[1].synthesis_filterbank_samples_offset = SBR_SYNTHESIS_BUF_SIZE - (1280 - 128);
- /* SBR requires samples to be scaled to +/-32768.0 to work correctly.
- * mdct scale factors are adjusted to scale up from +/-1.0 at analysis
- * and scale back down at synthesis. */
-
- scale = USE_FIXED ? 1 : 1.0 / (64 * 32768);
- ret = av_tx_init(&sbr->mdct, &sbr->mdct_fn,
- USE_FIXED ? AV_TX_INT32_MDCT : AV_TX_FLOAT_MDCT,
- 1, 64, &scale, 0);
- if (ret < 0)
- return ret;
-
- scale = USE_FIXED ? -1.0 : -2.0 * 32768;
- ret = av_tx_init(&sbr->mdct_ana, &sbr->mdct_ana_fn,
- USE_FIXED ? AV_TX_INT32_MDCT : AV_TX_FLOAT_MDCT,
- 1, 64, &scale, 0);
- if (ret < 0)
- return ret;
-
- AAC_RENAME(ff_ps_ctx_init)(&sbr->ps);
- AAC_RENAME(ff_sbrdsp_init)(&sbr->dsp);
- aacsbr_func_ptr_init(&sbr->c);
-
- return 0;
-}
-
-av_cold void AAC_RENAME(ff_aac_sbr_ctx_close)(SpectralBandReplication *sbr)
-{
- av_tx_uninit(&sbr->mdct);
- av_tx_uninit(&sbr->mdct_ana);
-}
-
-static int qsort_comparison_function_int16(const void *a, const void *b)
-{
- return *(const int16_t *)a - *(const int16_t *)b;
-}
-
-static inline int in_table_int16(const int16_t *table, int last_el, int16_t needle)
-{
- int i;
- for (i = 0; i <= last_el; i++)
- if (table[i] == needle)
- return 1;
- return 0;
-}
-
-/// Limiter Frequency Band Table (14496-3 sp04 p198)
-static void sbr_make_f_tablelim(SpectralBandReplication *sbr)
-{
- int k;
- if (sbr->bs_limiter_bands > 0) {
- static const INTFLOAT bands_warped[3] = { Q23(1.32715174233856803909f), //2^(0.49/1.2)
- Q23(1.18509277094158210129f), //2^(0.49/2)
- Q23(1.11987160404675912501f) }; //2^(0.49/3)
- const INTFLOAT lim_bands_per_octave_warped = bands_warped[sbr->bs_limiter_bands - 1];
- int16_t patch_borders[7];
- uint16_t *in = sbr->f_tablelim + 1, *out = sbr->f_tablelim;
-
- patch_borders[0] = sbr->kx[1];
- for (k = 1; k <= sbr->num_patches; k++)
- patch_borders[k] = patch_borders[k-1] + sbr->patch_num_subbands[k-1];
-
- memcpy(sbr->f_tablelim, sbr->f_tablelow,
- (sbr->n[0] + 1) * sizeof(sbr->f_tablelow[0]));
- if (sbr->num_patches > 1)
- memcpy(sbr->f_tablelim + sbr->n[0] + 1, patch_borders + 1,
- (sbr->num_patches - 1) * sizeof(patch_borders[0]));
-
- AV_QSORT(sbr->f_tablelim, sbr->num_patches + sbr->n[0],
- uint16_t,
- qsort_comparison_function_int16);
-
- sbr->n_lim = sbr->n[0] + sbr->num_patches - 1;
- while (out < sbr->f_tablelim + sbr->n_lim) {
-#if USE_FIXED
- if ((*in << 23) >= *out * lim_bands_per_octave_warped) {
-#else
- if (*in >= *out * lim_bands_per_octave_warped) {
-#endif /* USE_FIXED */
- *++out = *in++;
- } else if (*in == *out ||
- !in_table_int16(patch_borders, sbr->num_patches, *in)) {
- in++;
- sbr->n_lim--;
- } else if (!in_table_int16(patch_borders, sbr->num_patches, *out)) {
- *out = *in++;
- sbr->n_lim--;
- } else {
- *++out = *in++;
- }
- }
- } else {
- sbr->f_tablelim[0] = sbr->f_tablelow[0];
- sbr->f_tablelim[1] = sbr->f_tablelow[sbr->n[0]];
- sbr->n_lim = 1;
- }
-}
-
-static unsigned int read_sbr_header(SpectralBandReplication *sbr, GetBitContext *gb)
-{
- unsigned int cnt = get_bits_count(gb);
- uint8_t bs_header_extra_1;
- uint8_t bs_header_extra_2;
- int old_bs_limiter_bands = sbr->bs_limiter_bands;
- SpectrumParameters old_spectrum_params;
-
- sbr->start = 1;
- sbr->ready_for_dequant = 0;
-
- // Save last spectrum parameters variables to compare to new ones
- memcpy(&old_spectrum_params, &sbr->spectrum_params, sizeof(SpectrumParameters));
-
- sbr->bs_amp_res_header = get_bits1(gb);
- sbr->spectrum_params.bs_start_freq = get_bits(gb, 4);
- sbr->spectrum_params.bs_stop_freq = get_bits(gb, 4);
- sbr->spectrum_params.bs_xover_band = get_bits(gb, 3);
- skip_bits(gb, 2); // bs_reserved
-
- bs_header_extra_1 = get_bits1(gb);
- bs_header_extra_2 = get_bits1(gb);
-
- if (bs_header_extra_1) {
- sbr->spectrum_params.bs_freq_scale = get_bits(gb, 2);
- sbr->spectrum_params.bs_alter_scale = get_bits1(gb);
- sbr->spectrum_params.bs_noise_bands = get_bits(gb, 2);
- } else {
- sbr->spectrum_params.bs_freq_scale = 2;
- sbr->spectrum_params.bs_alter_scale = 1;
- sbr->spectrum_params.bs_noise_bands = 2;
- }
-
- // Check if spectrum parameters changed
- if (memcmp(&old_spectrum_params, &sbr->spectrum_params, sizeof(SpectrumParameters)))
- sbr->reset = 1;
-
- if (bs_header_extra_2) {
- sbr->bs_limiter_bands = get_bits(gb, 2);
- sbr->bs_limiter_gains = get_bits(gb, 2);
- sbr->bs_interpol_freq = get_bits1(gb);
- sbr->bs_smoothing_mode = get_bits1(gb);
- } else {
- sbr->bs_limiter_bands = 2;
- sbr->bs_limiter_gains = 2;
- sbr->bs_interpol_freq = 1;
- sbr->bs_smoothing_mode = 1;
- }
-
- if (sbr->bs_limiter_bands != old_bs_limiter_bands && !sbr->reset)
- sbr_make_f_tablelim(sbr);
-
- return get_bits_count(gb) - cnt;
-}
-
-static int array_min_int16(const int16_t *array, int nel)
-{
- int i, min = array[0];
- for (i = 1; i < nel; i++)
- min = FFMIN(array[i], min);
- return min;
-}
-
-static int check_n_master(AVCodecContext *avctx, int n_master, int bs_xover_band)
-{
- // Requirements (14496-3 sp04 p205)
- if (n_master <= 0) {
- av_log(avctx, AV_LOG_ERROR, "Invalid n_master: %d\n", n_master);
- return -1;
- }
- if (bs_xover_band >= n_master) {
- av_log(avctx, AV_LOG_ERROR,
- "Invalid bitstream, crossover band index beyond array bounds: %d\n",
- bs_xover_band);
- return -1;
- }
- return 0;
-}
-
-/// Master Frequency Band Table (14496-3 sp04 p194)
-static int sbr_make_f_master(AACContext *ac, SpectralBandReplication *sbr,
- SpectrumParameters *spectrum)
-{
- unsigned int temp, max_qmf_subbands = 0;
- unsigned int start_min, stop_min;
- int k;
- const int8_t *sbr_offset_ptr;
- int16_t stop_dk[13];
-
- switch (sbr->sample_rate) {
- case 16000:
- sbr_offset_ptr = sbr_offset[0];
- break;
- case 22050:
- sbr_offset_ptr = sbr_offset[1];
- break;
- case 24000:
- sbr_offset_ptr = sbr_offset[2];
- break;
- case 32000:
- sbr_offset_ptr = sbr_offset[3];
- break;
- case 44100: case 48000: case 64000:
- sbr_offset_ptr = sbr_offset[4];
- break;
- case 88200: case 96000: case 128000: case 176400: case 192000:
- sbr_offset_ptr = sbr_offset[5];
- break;
- default:
- av_log(ac->avctx, AV_LOG_ERROR,
- "Unsupported sample rate for SBR: %d\n", sbr->sample_rate);
- return -1;
- }
-
- if (sbr->sample_rate < 32000) {
- temp = 3000;
- } else if (sbr->sample_rate < 64000) {
- temp = 4000;
- } else
- temp = 5000;
-
- start_min = ((temp << 7) + (sbr->sample_rate >> 1)) / sbr->sample_rate;
- stop_min = ((temp << 8) + (sbr->sample_rate >> 1)) / sbr->sample_rate;
-
- sbr->k[0] = start_min + sbr_offset_ptr[spectrum->bs_start_freq];
-
- if (spectrum->bs_stop_freq < 14) {
- sbr->k[2] = stop_min;
- make_bands(stop_dk, stop_min, 64, 13);
- AV_QSORT(stop_dk, 13, int16_t, qsort_comparison_function_int16);
- for (k = 0; k < spectrum->bs_stop_freq; k++)
- sbr->k[2] += stop_dk[k];
- } else if (spectrum->bs_stop_freq == 14) {
- sbr->k[2] = 2*sbr->k[0];
- } else if (spectrum->bs_stop_freq == 15) {
- sbr->k[2] = 3*sbr->k[0];
- } else {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bs_stop_freq: %d\n", spectrum->bs_stop_freq);
- return -1;
- }
- sbr->k[2] = FFMIN(64, sbr->k[2]);
-
- // Requirements (14496-3 sp04 p205)
- if (sbr->sample_rate <= 32000) {
- max_qmf_subbands = 48;
- } else if (sbr->sample_rate == 44100) {
- max_qmf_subbands = 35;
- } else if (sbr->sample_rate >= 48000)
- max_qmf_subbands = 32;
- else
- av_assert0(0);
-
- if (sbr->k[2] - sbr->k[0] > max_qmf_subbands) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bitstream, too many QMF subbands: %d\n", sbr->k[2] - sbr->k[0]);
- return -1;
- }
-
- if (!spectrum->bs_freq_scale) {
- int dk, k2diff;
-
- dk = spectrum->bs_alter_scale + 1;
- sbr->n_master = ((sbr->k[2] - sbr->k[0] + (dk&2)) >> dk) << 1;
- if (check_n_master(ac->avctx, sbr->n_master, sbr->spectrum_params.bs_xover_band))
- return -1;
-
- for (k = 1; k <= sbr->n_master; k++)
- sbr->f_master[k] = dk;
-
- k2diff = sbr->k[2] - sbr->k[0] - sbr->n_master * dk;
- if (k2diff < 0) {
- sbr->f_master[1]--;
- sbr->f_master[2]-= (k2diff < -1);
- } else if (k2diff) {
- sbr->f_master[sbr->n_master]++;
- }
-
- sbr->f_master[0] = sbr->k[0];
- for (k = 1; k <= sbr->n_master; k++)
- sbr->f_master[k] += sbr->f_master[k - 1];
-
- } else {
- int half_bands = 7 - spectrum->bs_freq_scale; // bs_freq_scale = {1,2,3}
- int two_regions, num_bands_0;
- int vdk0_max, vdk1_min;
- int16_t vk0[49];
-#if USE_FIXED
- int tmp, nz = 0;
-#endif /* USE_FIXED */
-
- if (49 * sbr->k[2] > 110 * sbr->k[0]) {
- two_regions = 1;
- sbr->k[1] = 2 * sbr->k[0];
- } else {
- two_regions = 0;
- sbr->k[1] = sbr->k[2];
- }
-
-#if USE_FIXED
- tmp = (sbr->k[1] << 23) / sbr->k[0];
- while (tmp < 0x40000000) {
- tmp <<= 1;
- nz++;
- }
- tmp = fixed_log(tmp - 0x80000000);
- tmp = (int)(((int64_t)tmp * CONST_RECIP_LN2 + 0x20000000) >> 30);
- tmp = (((tmp + 0x80) >> 8) + ((8 - nz) << 23)) * half_bands;
- num_bands_0 = ((tmp + 0x400000) >> 23) * 2;
-#else
- num_bands_0 = lrintf(half_bands * log2f(sbr->k[1] / (float)sbr->k[0])) * 2;
-#endif /* USE_FIXED */
-
- if (num_bands_0 <= 0) { // Requirements (14496-3 sp04 p205)
- av_log(ac->avctx, AV_LOG_ERROR, "Invalid num_bands_0: %d\n", num_bands_0);
- return -1;
- }
-
- vk0[0] = 0;
-
- make_bands(vk0+1, sbr->k[0], sbr->k[1], num_bands_0);
-
- AV_QSORT(vk0 + 1, num_bands_0, int16_t, qsort_comparison_function_int16);
- vdk0_max = vk0[num_bands_0];
-
- vk0[0] = sbr->k[0];
- for (k = 1; k <= num_bands_0; k++) {
- if (vk0[k] <= 0) { // Requirements (14496-3 sp04 p205)
- av_log(ac->avctx, AV_LOG_ERROR, "Invalid vDk0[%d]: %d\n", k, vk0[k]);
- return -1;
- }
- vk0[k] += vk0[k-1];
- }
-
- if (two_regions) {
- int16_t vk1[49];
-#if USE_FIXED
- int num_bands_1;
-
- tmp = (sbr->k[2] << 23) / sbr->k[1];
- nz = 0;
- while (tmp < 0x40000000) {
- tmp <<= 1;
- nz++;
- }
- tmp = fixed_log(tmp - 0x80000000);
- tmp = (int)(((int64_t)tmp * CONST_RECIP_LN2 + 0x20000000) >> 30);
- tmp = (((tmp + 0x80) >> 8) + ((8 - nz) << 23)) * half_bands;
- if (spectrum->bs_alter_scale)
- tmp = (int)(((int64_t)tmp * CONST_076923 + 0x40000000) >> 31);
- num_bands_1 = ((tmp + 0x400000) >> 23) * 2;
-#else
- float invwarp = spectrum->bs_alter_scale ? 0.76923076923076923077f
- : 1.0f; // bs_alter_scale = {0,1}
- int num_bands_1 = lrintf(half_bands * invwarp *
- log2f(sbr->k[2] / (float)sbr->k[1])) * 2;
-#endif /* USE_FIXED */
- make_bands(vk1+1, sbr->k[1], sbr->k[2], num_bands_1);
-
- vdk1_min = array_min_int16(vk1 + 1, num_bands_1);
-
- if (vdk1_min < vdk0_max) {
- int change;
- AV_QSORT(vk1 + 1, num_bands_1, int16_t, qsort_comparison_function_int16);
- change = FFMIN(vdk0_max - vk1[1], (vk1[num_bands_1] - vk1[1]) >> 1);
- vk1[1] += change;
- vk1[num_bands_1] -= change;
- }
-
- AV_QSORT(vk1 + 1, num_bands_1, int16_t, qsort_comparison_function_int16);
-
- vk1[0] = sbr->k[1];
- for (k = 1; k <= num_bands_1; k++) {
- if (vk1[k] <= 0) { // Requirements (14496-3 sp04 p205)
- av_log(ac->avctx, AV_LOG_ERROR, "Invalid vDk1[%d]: %d\n", k, vk1[k]);
- return -1;
- }
- vk1[k] += vk1[k-1];
- }
-
- sbr->n_master = num_bands_0 + num_bands_1;
- if (check_n_master(ac->avctx, sbr->n_master, sbr->spectrum_params.bs_xover_band))
- return -1;
- memcpy(&sbr->f_master[0], vk0,
- (num_bands_0 + 1) * sizeof(sbr->f_master[0]));
- memcpy(&sbr->f_master[num_bands_0 + 1], vk1 + 1,
- num_bands_1 * sizeof(sbr->f_master[0]));
-
- } else {
- sbr->n_master = num_bands_0;
- if (check_n_master(ac->avctx, sbr->n_master, sbr->spectrum_params.bs_xover_band))
- return -1;
- memcpy(sbr->f_master, vk0, (num_bands_0 + 1) * sizeof(sbr->f_master[0]));
- }
- }
-
- return 0;
-}
-
-/// High Frequency Generation - Patch Construction (14496-3 sp04 p216 fig. 4.46)
-static int sbr_hf_calc_npatches(AACContext *ac, SpectralBandReplication *sbr)
-{
- int i, k, last_k = -1, last_msb = -1, sb = 0;
- int msb = sbr->k[0];
- int usb = sbr->kx[1];
- int goal_sb = ((1000 << 11) + (sbr->sample_rate >> 1)) / sbr->sample_rate;
-
- sbr->num_patches = 0;
-
- if (goal_sb < sbr->kx[1] + sbr->m[1]) {
- for (k = 0; sbr->f_master[k] < goal_sb; k++) ;
- } else
- k = sbr->n_master;
-
- do {
- int odd = 0;
- if (k == last_k && msb == last_msb) {
- av_log(ac->avctx, AV_LOG_ERROR, "patch construction failed\n");
- return AVERROR_INVALIDDATA;
- }
- last_k = k;
- last_msb = msb;
- for (i = k; i == k || sb > (sbr->k[0] - 1 + msb - odd); i--) {
- sb = sbr->f_master[i];
- odd = (sb + sbr->k[0]) & 1;
- }
-
- // Requirements (14496-3 sp04 p205) set the maximum number of patches to 5.
- // After this check the final number of patches can still be six, which is
- // illegal; however, the Coding Technologies decoder check stream has a final
- // count of 6 patches.
- if (sbr->num_patches > 5) {
- av_log(ac->avctx, AV_LOG_ERROR, "Too many patches: %d\n", sbr->num_patches);
- return -1;
- }
-
- sbr->patch_num_subbands[sbr->num_patches] = FFMAX(sb - usb, 0);
- sbr->patch_start_subband[sbr->num_patches] = sbr->k[0] - odd - sbr->patch_num_subbands[sbr->num_patches];
-
- if (sbr->patch_num_subbands[sbr->num_patches] > 0) {
- usb = sb;
- msb = sb;
- sbr->num_patches++;
- } else
- msb = sbr->kx[1];
-
- if (sbr->f_master[k] - sb < 3)
- k = sbr->n_master;
- } while (sb != sbr->kx[1] + sbr->m[1]);
-
- if (sbr->num_patches > 1 &&
- sbr->patch_num_subbands[sbr->num_patches - 1] < 3)
- sbr->num_patches--;
-
- return 0;
-}
-
-/// Derived Frequency Band Tables (14496-3 sp04 p197)
-static int sbr_make_f_derived(AACContext *ac, SpectralBandReplication *sbr)
-{
- int k, temp;
-#if USE_FIXED
- int nz = 0;
-#endif /* USE_FIXED */
-
- sbr->n[1] = sbr->n_master - sbr->spectrum_params.bs_xover_band;
- sbr->n[0] = (sbr->n[1] + 1) >> 1;
-
- memcpy(sbr->f_tablehigh, &sbr->f_master[sbr->spectrum_params.bs_xover_band],
- (sbr->n[1] + 1) * sizeof(sbr->f_master[0]));
- sbr->m[1] = sbr->f_tablehigh[sbr->n[1]] - sbr->f_tablehigh[0];
- sbr->kx[1] = sbr->f_tablehigh[0];
-
- // Requirements (14496-3 sp04 p205)
- if (sbr->kx[1] + sbr->m[1] > 64) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Stop frequency border too high: %d\n", sbr->kx[1] + sbr->m[1]);
- return -1;
- }
- if (sbr->kx[1] > 32) {
- av_log(ac->avctx, AV_LOG_ERROR, "Start frequency border too high: %d\n", sbr->kx[1]);
- return -1;
- }
-
- sbr->f_tablelow[0] = sbr->f_tablehigh[0];
- temp = sbr->n[1] & 1;
- for (k = 1; k <= sbr->n[0]; k++)
- sbr->f_tablelow[k] = sbr->f_tablehigh[2 * k - temp];
-#if USE_FIXED
- temp = (sbr->k[2] << 23) / sbr->kx[1];
- while (temp < 0x40000000) {
- temp <<= 1;
- nz++;
- }
- temp = fixed_log(temp - 0x80000000);
- temp = (int)(((int64_t)temp * CONST_RECIP_LN2 + 0x20000000) >> 30);
- temp = (((temp + 0x80) >> 8) + ((8 - nz) << 23)) * sbr->spectrum_params.bs_noise_bands;
-
- sbr->n_q = (temp + 0x400000) >> 23;
- if (sbr->n_q < 1)
- sbr->n_q = 1;
-#else
- sbr->n_q = FFMAX(1, lrintf(sbr->spectrum_params.bs_noise_bands *
- log2f(sbr->k[2] / (float)sbr->kx[1]))); // 0 <= bs_noise_bands <= 3
-#endif /* USE_FIXED */
-
- if (sbr->n_q > 5) {
- av_log(ac->avctx, AV_LOG_ERROR, "Too many noise floor scale factors: %d\n", sbr->n_q);
- return -1;
- }
-
- sbr->f_tablenoise[0] = sbr->f_tablelow[0];
- temp = 0;
- for (k = 1; k <= sbr->n_q; k++) {
- temp += (sbr->n[0] - temp) / (sbr->n_q + 1 - k);
- sbr->f_tablenoise[k] = sbr->f_tablelow[temp];
- }
-
- if (sbr_hf_calc_npatches(ac, sbr) < 0)
- return -1;
-
- sbr_make_f_tablelim(sbr);
-
- sbr->data[0].f_indexnoise = 0;
- sbr->data[1].f_indexnoise = 0;
-
- return 0;
-}
-
-static av_always_inline void get_bits1_vector(GetBitContext *gb, uint8_t *vec,
- int elements)
-{
- int i;
- for (i = 0; i < elements; i++) {
- vec[i] = get_bits1(gb);
- }
-}
-
-/** ceil(log2(index+1)) */
-static const int8_t ceil_log2[] = {
- 0, 1, 2, 2, 3, 3,
-};
-
-static int read_sbr_grid(AACContext *ac, SpectralBandReplication *sbr,
- GetBitContext *gb, SBRData *ch_data)
-{
- int i;
- int bs_pointer = 0;
- // frameLengthFlag ? 15 : 16; 960 sample length frames unsupported; this value is numTimeSlots
- int abs_bord_trail = 16;
- int num_rel_lead, num_rel_trail;
- unsigned bs_num_env_old = ch_data->bs_num_env;
- int bs_frame_class, bs_num_env;
-
- ch_data->bs_freq_res[0] = ch_data->bs_freq_res[ch_data->bs_num_env];
- ch_data->bs_amp_res = sbr->bs_amp_res_header;
- ch_data->t_env_num_env_old = ch_data->t_env[bs_num_env_old];
-
- switch (bs_frame_class = get_bits(gb, 2)) {
- case FIXFIX:
- bs_num_env = 1 << get_bits(gb, 2);
- if (bs_num_env > 4) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bitstream, too many SBR envelopes in FIXFIX type SBR frame: %d\n",
- bs_num_env);
- return -1;
- }
- ch_data->bs_num_env = bs_num_env;
- num_rel_lead = ch_data->bs_num_env - 1;
- if (ch_data->bs_num_env == 1)
- ch_data->bs_amp_res = 0;
-
-
- ch_data->t_env[0] = 0;
- ch_data->t_env[ch_data->bs_num_env] = abs_bord_trail;
-
- abs_bord_trail = (abs_bord_trail + (ch_data->bs_num_env >> 1)) /
- ch_data->bs_num_env;
- for (i = 0; i < num_rel_lead; i++)
- ch_data->t_env[i + 1] = ch_data->t_env[i] + abs_bord_trail;
-
- ch_data->bs_freq_res[1] = get_bits1(gb);
- for (i = 1; i < ch_data->bs_num_env; i++)
- ch_data->bs_freq_res[i + 1] = ch_data->bs_freq_res[1];
- break;
- case FIXVAR:
- abs_bord_trail += get_bits(gb, 2);
- num_rel_trail = get_bits(gb, 2);
- ch_data->bs_num_env = num_rel_trail + 1;
- ch_data->t_env[0] = 0;
- ch_data->t_env[ch_data->bs_num_env] = abs_bord_trail;
-
- for (i = 0; i < num_rel_trail; i++)
- ch_data->t_env[ch_data->bs_num_env - 1 - i] =
- ch_data->t_env[ch_data->bs_num_env - i] - 2 * get_bits(gb, 2) - 2;
-
- bs_pointer = get_bits(gb, ceil_log2[ch_data->bs_num_env]);
-
- for (i = 0; i < ch_data->bs_num_env; i++)
- ch_data->bs_freq_res[ch_data->bs_num_env - i] = get_bits1(gb);
- break;
- case VARFIX:
- ch_data->t_env[0] = get_bits(gb, 2);
- num_rel_lead = get_bits(gb, 2);
- ch_data->bs_num_env = num_rel_lead + 1;
- ch_data->t_env[ch_data->bs_num_env] = abs_bord_trail;
-
- for (i = 0; i < num_rel_lead; i++)
- ch_data->t_env[i + 1] = ch_data->t_env[i] + 2 * get_bits(gb, 2) + 2;
-
- bs_pointer = get_bits(gb, ceil_log2[ch_data->bs_num_env]);
-
- get_bits1_vector(gb, ch_data->bs_freq_res + 1, ch_data->bs_num_env);
- break;
- case VARVAR:
- ch_data->t_env[0] = get_bits(gb, 2);
- abs_bord_trail += get_bits(gb, 2);
- num_rel_lead = get_bits(gb, 2);
- num_rel_trail = get_bits(gb, 2);
- bs_num_env = num_rel_lead + num_rel_trail + 1;
-
- if (bs_num_env > 5) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bitstream, too many SBR envelopes in VARVAR type SBR frame: %d\n",
- bs_num_env);
- return -1;
- }
- ch_data->bs_num_env = bs_num_env;
-
- ch_data->t_env[ch_data->bs_num_env] = abs_bord_trail;
-
- for (i = 0; i < num_rel_lead; i++)
- ch_data->t_env[i + 1] = ch_data->t_env[i] + 2 * get_bits(gb, 2) + 2;
- for (i = 0; i < num_rel_trail; i++)
- ch_data->t_env[ch_data->bs_num_env - 1 - i] =
- ch_data->t_env[ch_data->bs_num_env - i] - 2 * get_bits(gb, 2) - 2;
-
- bs_pointer = get_bits(gb, ceil_log2[ch_data->bs_num_env]);
-
- get_bits1_vector(gb, ch_data->bs_freq_res + 1, ch_data->bs_num_env);
- break;
- }
- ch_data->bs_frame_class = bs_frame_class;
-
- av_assert0(bs_pointer >= 0);
- if (bs_pointer > ch_data->bs_num_env + 1) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bitstream, bs_pointer points to a middle noise border outside the time borders table: %d\n",
- bs_pointer);
- return -1;
- }
-
- for (i = 1; i <= ch_data->bs_num_env; i++) {
- if (ch_data->t_env[i-1] >= ch_data->t_env[i]) {
- av_log(ac->avctx, AV_LOG_ERROR, "Not strictly monotone time borders\n");
- return -1;
- }
- }
-
- ch_data->bs_num_noise = (ch_data->bs_num_env > 1) + 1;
-
- ch_data->t_q[0] = ch_data->t_env[0];
- ch_data->t_q[ch_data->bs_num_noise] = ch_data->t_env[ch_data->bs_num_env];
- if (ch_data->bs_num_noise > 1) {
- int idx;
- if (ch_data->bs_frame_class == FIXFIX) {
- idx = ch_data->bs_num_env >> 1;
- } else if (ch_data->bs_frame_class & 1) { // FIXVAR or VARVAR
- idx = ch_data->bs_num_env - FFMAX(bs_pointer - 1, 1);
- } else { // VARFIX
- if (!bs_pointer)
- idx = 1;
- else if (bs_pointer == 1)
- idx = ch_data->bs_num_env - 1;
- else // bs_pointer > 1
- idx = bs_pointer - 1;
- }
- ch_data->t_q[1] = ch_data->t_env[idx];
- }
-
- ch_data->e_a[0] = -(ch_data->e_a[1] != bs_num_env_old); // l_APrev
- ch_data->e_a[1] = -1;
- if ((ch_data->bs_frame_class & 1) && bs_pointer) { // FIXVAR or VARVAR and bs_pointer != 0
- ch_data->e_a[1] = ch_data->bs_num_env + 1 - bs_pointer;
- } else if ((ch_data->bs_frame_class == 2) && (bs_pointer > 1)) // VARFIX and bs_pointer > 1
- ch_data->e_a[1] = bs_pointer - 1;
-
- return 0;
-}
-
-static void copy_sbr_grid(SBRData *dst, const SBRData *src) {
- //These variables are saved from the previous frame rather than copied
- dst->bs_freq_res[0] = dst->bs_freq_res[dst->bs_num_env];
- dst->t_env_num_env_old = dst->t_env[dst->bs_num_env];
- dst->e_a[0] = -(dst->e_a[1] != dst->bs_num_env);
-
- //These variables are read from the bitstream and therefore copied
- memcpy(dst->bs_freq_res+1, src->bs_freq_res+1, sizeof(dst->bs_freq_res)-sizeof(*dst->bs_freq_res));
- memcpy(dst->t_env, src->t_env, sizeof(dst->t_env));
- memcpy(dst->t_q, src->t_q, sizeof(dst->t_q));
- dst->bs_num_env = src->bs_num_env;
- dst->bs_amp_res = src->bs_amp_res;
- dst->bs_num_noise = src->bs_num_noise;
- dst->bs_frame_class = src->bs_frame_class;
- dst->e_a[1] = src->e_a[1];
-}
-
-/// Read how the envelope and noise floor data is delta coded
-static void read_sbr_dtdf(SpectralBandReplication *sbr, GetBitContext *gb,
- SBRData *ch_data)
-{
- get_bits1_vector(gb, ch_data->bs_df_env, ch_data->bs_num_env);
- get_bits1_vector(gb, ch_data->bs_df_noise, ch_data->bs_num_noise);
-}
-
-/// Read inverse filtering data
-static void read_sbr_invf(SpectralBandReplication *sbr, GetBitContext *gb,
- SBRData *ch_data)
-{
- int i;
-
- memcpy(ch_data->bs_invf_mode[1], ch_data->bs_invf_mode[0], 5 * sizeof(uint8_t));
- for (i = 0; i < sbr->n_q; i++)
- ch_data->bs_invf_mode[0][i] = get_bits(gb, 2);
-}
-
-static int read_sbr_envelope(AACContext *ac, SpectralBandReplication *sbr, GetBitContext *gb,
- SBRData *ch_data, int ch)
-{
- int bits;
- int i, j, k;
- const VLCElem *t_huff, *f_huff;
- int t_lav, f_lav;
- const int delta = (ch == 1 && sbr->bs_coupling == 1) + 1;
- const int odd = sbr->n[1] & 1;
-
- if (sbr->bs_coupling && ch) {
- if (ch_data->bs_amp_res) {
- bits = 5;
- t_huff = vlc_sbr[T_HUFFMAN_ENV_BAL_3_0DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_BAL_3_0DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_BAL_3_0DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_BAL_3_0DB];
- } else {
- bits = 6;
- t_huff = vlc_sbr[T_HUFFMAN_ENV_BAL_1_5DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_BAL_1_5DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_BAL_1_5DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_BAL_1_5DB];
- }
- } else {
- if (ch_data->bs_amp_res) {
- bits = 6;
- t_huff = vlc_sbr[T_HUFFMAN_ENV_3_0DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_3_0DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_3_0DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_3_0DB];
- } else {
- bits = 7;
- t_huff = vlc_sbr[T_HUFFMAN_ENV_1_5DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_ENV_1_5DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_1_5DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_1_5DB];
- }
- }
-
- for (i = 0; i < ch_data->bs_num_env; i++) {
- if (ch_data->bs_df_env[i]) {
- // bs_freq_res[0] == bs_freq_res[bs_num_env] from prev frame
- if (ch_data->bs_freq_res[i + 1] == ch_data->bs_freq_res[i]) {
- for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) {
- ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][j] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav);
- if (ch_data->env_facs_q[i + 1][j] > 127U) {
- av_log(ac->avctx, AV_LOG_ERROR, "env_facs_q %d is invalid\n", ch_data->env_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- } else if (ch_data->bs_freq_res[i + 1]) {
- for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) {
- k = (j + odd) >> 1; // find k such that f_tablelow[k] <= f_tablehigh[j] < f_tablelow[k + 1]
- ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][k] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav);
- if (ch_data->env_facs_q[i + 1][j] > 127U) {
- av_log(ac->avctx, AV_LOG_ERROR, "env_facs_q %d is invalid\n", ch_data->env_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- } else {
- for (j = 0; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) {
- k = j ? 2*j - odd : 0; // find k such that f_tablehigh[k] == f_tablelow[j]
- ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i][k] + delta * (get_vlc2(gb, t_huff, 9, 3) - t_lav);
- if (ch_data->env_facs_q[i + 1][j] > 127U) {
- av_log(ac->avctx, AV_LOG_ERROR, "env_facs_q %d is invalid\n", ch_data->env_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- }
- } else {
- ch_data->env_facs_q[i + 1][0] = delta * get_bits(gb, bits); // bs_env_start_value_balance
- for (j = 1; j < sbr->n[ch_data->bs_freq_res[i + 1]]; j++) {
- ch_data->env_facs_q[i + 1][j] = ch_data->env_facs_q[i + 1][j - 1] + delta * (get_vlc2(gb, f_huff, 9, 3) - f_lav);
- if (ch_data->env_facs_q[i + 1][j] > 127U) {
- av_log(ac->avctx, AV_LOG_ERROR, "env_facs_q %d is invalid\n", ch_data->env_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- }
- }
-
- //assign 0th elements of env_facs_q from last elements
- memcpy(ch_data->env_facs_q[0], ch_data->env_facs_q[ch_data->bs_num_env],
- sizeof(ch_data->env_facs_q[0]));
-
- return 0;
-}
-
-static int read_sbr_noise(AACContext *ac, SpectralBandReplication *sbr, GetBitContext *gb,
- SBRData *ch_data, int ch)
-{
- int i, j;
- const VLCElem *t_huff, *f_huff;
- int t_lav, f_lav;
- int delta = (ch == 1 && sbr->bs_coupling == 1) + 1;
-
- if (sbr->bs_coupling && ch) {
- t_huff = vlc_sbr[T_HUFFMAN_NOISE_BAL_3_0DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_NOISE_BAL_3_0DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_BAL_3_0DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_BAL_3_0DB];
- } else {
- t_huff = vlc_sbr[T_HUFFMAN_NOISE_3_0DB].table;
- t_lav = vlc_sbr_lav[T_HUFFMAN_NOISE_3_0DB];
- f_huff = vlc_sbr[F_HUFFMAN_ENV_3_0DB].table;
- f_lav = vlc_sbr_lav[F_HUFFMAN_ENV_3_0DB];
- }
-
- for (i = 0; i < ch_data->bs_num_noise; i++) {
- if (ch_data->bs_df_noise[i]) {
- for (j = 0; j < sbr->n_q; j++) {
- ch_data->noise_facs_q[i + 1][j] = ch_data->noise_facs_q[i][j] + delta * (get_vlc2(gb, t_huff, 9, 2) - t_lav);
- if (ch_data->noise_facs_q[i + 1][j] > 30U) {
- av_log(ac->avctx, AV_LOG_ERROR, "noise_facs_q %d is invalid\n", ch_data->noise_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- } else {
- ch_data->noise_facs_q[i + 1][0] = delta * get_bits(gb, 5); // bs_noise_start_value_balance or bs_noise_start_value_level
- for (j = 1; j < sbr->n_q; j++) {
- ch_data->noise_facs_q[i + 1][j] = ch_data->noise_facs_q[i + 1][j - 1] + delta * (get_vlc2(gb, f_huff, 9, 3) - f_lav);
- if (ch_data->noise_facs_q[i + 1][j] > 30U) {
- av_log(ac->avctx, AV_LOG_ERROR, "noise_facs_q %d is invalid\n", ch_data->noise_facs_q[i + 1][j]);
- return AVERROR_INVALIDDATA;
- }
- }
- }
- }
-
- //assign 0th elements of noise_facs_q from last elements
- memcpy(ch_data->noise_facs_q[0], ch_data->noise_facs_q[ch_data->bs_num_noise],
- sizeof(ch_data->noise_facs_q[0]));
- return 0;
-}
-
-static void read_sbr_extension(AACContext *ac, SpectralBandReplication *sbr,
- GetBitContext *gb,
- int bs_extension_id, int *num_bits_left)
-{
- switch (bs_extension_id) {
- case EXTENSION_ID_PS:
- if (!ac->oc[1].m4ac.ps) {
- av_log(ac->avctx, AV_LOG_ERROR, "Parametric Stereo signaled to be not-present but was found in the bitstream.\n");
- skip_bits_long(gb, *num_bits_left); // bs_fill_bits
- *num_bits_left = 0;
- } else {
- *num_bits_left -= ff_ps_read_data(ac->avctx, gb, &sbr->ps.common, *num_bits_left);
- ac->avctx->profile = FF_PROFILE_AAC_HE_V2;
- // ensure the warning is not printed if PS extension is present
- ac->warned_he_aac_mono = 1;
- }
- break;
- default:
- // some files contain 0-padding
- if (bs_extension_id || *num_bits_left > 16 || show_bits(gb, *num_bits_left))
- avpriv_request_sample(ac->avctx, "Reserved SBR extensions");
- skip_bits_long(gb, *num_bits_left); // bs_fill_bits
- *num_bits_left = 0;
- break;
- }
-}
-
-static int read_sbr_single_channel_element(AACContext *ac,
- SpectralBandReplication *sbr,
- GetBitContext *gb)
-{
- int ret;
-
- if (get_bits1(gb)) // bs_data_extra
- skip_bits(gb, 4); // bs_reserved
-
- if (read_sbr_grid(ac, sbr, gb, &sbr->data[0]))
- return -1;
- read_sbr_dtdf(sbr, gb, &sbr->data[0]);
- read_sbr_invf(sbr, gb, &sbr->data[0]);
- if((ret = read_sbr_envelope(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
- if((ret = read_sbr_noise(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
-
- if ((sbr->data[0].bs_add_harmonic_flag = get_bits1(gb)))
- get_bits1_vector(gb, sbr->data[0].bs_add_harmonic, sbr->n[1]);
-
- return 0;
-}
-
-static int read_sbr_channel_pair_element(AACContext *ac,
- SpectralBandReplication *sbr,
- GetBitContext *gb)
-{
- int ret;
-
- if (get_bits1(gb)) // bs_data_extra
- skip_bits(gb, 8); // bs_reserved
-
- if ((sbr->bs_coupling = get_bits1(gb))) {
- if (read_sbr_grid(ac, sbr, gb, &sbr->data[0]))
- return -1;
- copy_sbr_grid(&sbr->data[1], &sbr->data[0]);
- read_sbr_dtdf(sbr, gb, &sbr->data[0]);
- read_sbr_dtdf(sbr, gb, &sbr->data[1]);
- read_sbr_invf(sbr, gb, &sbr->data[0]);
- memcpy(sbr->data[1].bs_invf_mode[1], sbr->data[1].bs_invf_mode[0], sizeof(sbr->data[1].bs_invf_mode[0]));
- memcpy(sbr->data[1].bs_invf_mode[0], sbr->data[0].bs_invf_mode[0], sizeof(sbr->data[1].bs_invf_mode[0]));
- if((ret = read_sbr_envelope(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
- if((ret = read_sbr_noise(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
- if((ret = read_sbr_envelope(ac, sbr, gb, &sbr->data[1], 1)) < 0)
- return ret;
- if((ret = read_sbr_noise(ac, sbr, gb, &sbr->data[1], 1)) < 0)
- return ret;
- } else {
- if (read_sbr_grid(ac, sbr, gb, &sbr->data[0]) ||
- read_sbr_grid(ac, sbr, gb, &sbr->data[1]))
- return -1;
- read_sbr_dtdf(sbr, gb, &sbr->data[0]);
- read_sbr_dtdf(sbr, gb, &sbr->data[1]);
- read_sbr_invf(sbr, gb, &sbr->data[0]);
- read_sbr_invf(sbr, gb, &sbr->data[1]);
- if((ret = read_sbr_envelope(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
- if((ret = read_sbr_envelope(ac, sbr, gb, &sbr->data[1], 1)) < 0)
- return ret;
- if((ret = read_sbr_noise(ac, sbr, gb, &sbr->data[0], 0)) < 0)
- return ret;
- if((ret = read_sbr_noise(ac, sbr, gb, &sbr->data[1], 1)) < 0)
- return ret;
- }
-
- if ((sbr->data[0].bs_add_harmonic_flag = get_bits1(gb)))
- get_bits1_vector(gb, sbr->data[0].bs_add_harmonic, sbr->n[1]);
- if ((sbr->data[1].bs_add_harmonic_flag = get_bits1(gb)))
- get_bits1_vector(gb, sbr->data[1].bs_add_harmonic, sbr->n[1]);
-
- return 0;
-}
-
-static unsigned int read_sbr_data(AACContext *ac, SpectralBandReplication *sbr,
- GetBitContext *gb, int id_aac)
-{
- unsigned int cnt = get_bits_count(gb);
-
- sbr->id_aac = id_aac;
- sbr->ready_for_dequant = 1;
-
- if (id_aac == TYPE_SCE || id_aac == TYPE_CCE) {
- if (read_sbr_single_channel_element(ac, sbr, gb)) {
- sbr_turnoff(sbr);
- return get_bits_count(gb) - cnt;
- }
- } else if (id_aac == TYPE_CPE) {
- if (read_sbr_channel_pair_element(ac, sbr, gb)) {
- sbr_turnoff(sbr);
- return get_bits_count(gb) - cnt;
- }
- } else {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Invalid bitstream - cannot apply SBR to element type %d\n", id_aac);
- sbr_turnoff(sbr);
- return get_bits_count(gb) - cnt;
- }
- if (get_bits1(gb)) { // bs_extended_data
- int num_bits_left = get_bits(gb, 4); // bs_extension_size
- if (num_bits_left == 15)
- num_bits_left += get_bits(gb, 8); // bs_esc_count
-
- num_bits_left <<= 3;
- while (num_bits_left > 7) {
- num_bits_left -= 2;
- read_sbr_extension(ac, sbr, gb, get_bits(gb, 2), &num_bits_left); // bs_extension_id
- }
- if (num_bits_left < 0) {
- av_log(ac->avctx, AV_LOG_ERROR, "SBR Extension over read.\n");
- }
- if (num_bits_left > 0)
- skip_bits(gb, num_bits_left);
- }
-
- return get_bits_count(gb) - cnt;
-}
-
-static void sbr_reset(AACContext *ac, SpectralBandReplication *sbr)
-{
- int err;
- err = sbr_make_f_master(ac, sbr, &sbr->spectrum_params);
- if (err >= 0)
- err = sbr_make_f_derived(ac, sbr);
- if (err < 0) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "SBR reset failed. Switching SBR to pure upsampling mode.\n");
- sbr_turnoff(sbr);
- }
-}
-
-/**
- * Decode Spectral Band Replication extension data; reference: table 4.55.
- *
- * @param crc flag indicating the presence of CRC checksum
- * @param cnt length of TYPE_FIL syntactic element in bytes
- *
- * @return Returns number of bytes consumed from the TYPE_FIL element.
- */
-int AAC_RENAME(ff_decode_sbr_extension)(AACContext *ac, SpectralBandReplication *sbr,
- GetBitContext *gb_host, int crc, int cnt, int id_aac)
-{
- unsigned int num_sbr_bits = 0, num_align_bits;
- unsigned bytes_read;
- GetBitContext gbc = *gb_host, *gb = &gbc;
- skip_bits_long(gb_host, cnt*8 - 4);
-
- sbr->reset = 0;
-
- if (!sbr->sample_rate)
- sbr->sample_rate = 2 * ac->oc[1].m4ac.sample_rate; //TODO use the nominal sample rate for arbitrary sample rate support
- if (!ac->oc[1].m4ac.ext_sample_rate)
- ac->oc[1].m4ac.ext_sample_rate = 2 * ac->oc[1].m4ac.sample_rate;
-
- if (crc) {
- skip_bits(gb, 10); // bs_sbr_crc_bits; TODO - implement CRC check
- num_sbr_bits += 10;
- }
-
- //Save some state from the previous frame.
- sbr->kx[0] = sbr->kx[1];
- sbr->m[0] = sbr->m[1];
- sbr->kx_and_m_pushed = 1;
-
- num_sbr_bits++;
- if (get_bits1(gb)) // bs_header_flag
- num_sbr_bits += read_sbr_header(sbr, gb);
-
- if (sbr->reset)
- sbr_reset(ac, sbr);
-
- if (sbr->start)
- num_sbr_bits += read_sbr_data(ac, sbr, gb, id_aac);
-
- num_align_bits = ((cnt << 3) - 4 - num_sbr_bits) & 7;
- bytes_read = ((num_sbr_bits + num_align_bits + 4) >> 3);
-
- if (bytes_read > cnt) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "Expected to read %d SBR bytes actually read %d.\n", cnt, bytes_read);
- sbr_turnoff(sbr);
- }
- return cnt;
-}
-
-/**
- * Analysis QMF Bank (14496-3 sp04 p206)
- *
- * @param x pointer to the beginning of the first sample window
- * @param W array of complex-valued samples split into subbands
- */
-#ifndef sbr_qmf_analysis
-#if USE_FIXED
-static void sbr_qmf_analysis(AVFixedDSPContext *dsp, AVTXContext *mdct,
- av_tx_fn mdct_fn,
-#else
-static void sbr_qmf_analysis(AVFloatDSPContext *dsp, AVTXContext *mdct,
- av_tx_fn mdct_fn,
-#endif /* USE_FIXED */
- SBRDSPContext *sbrdsp, const INTFLOAT *in, INTFLOAT *x,
- INTFLOAT z[320], INTFLOAT W[2][32][32][2], int buf_idx)
-{
- int i;
-#if USE_FIXED
- int j;
-#endif
- memcpy(x , x+1024, (320-32)*sizeof(x[0]));
- memcpy(x+288, in, 1024*sizeof(x[0]));
- for (i = 0; i < 32; i++) { // numTimeSlots*RATE = 16*2 as 960 sample frames
- // are not supported
- dsp->vector_fmul_reverse(z, sbr_qmf_window_ds, x, 320);
- sbrdsp->sum64x5(z);
- sbrdsp->qmf_pre_shuffle(z);
-#if USE_FIXED
- for (j = 64; j < 128; j++) {
- if (z[j] > 1<<24) {
- av_log(NULL, AV_LOG_WARNING,
- "sbr_qmf_analysis: value %09d too large, setting to %09d\n",
- z[j], 1<<24);
- z[j] = 1<<24;
- } else if (z[j] < -(1<<24)) {
- av_log(NULL, AV_LOG_WARNING,
- "sbr_qmf_analysis: value %09d too small, setting to %09d\n",
- z[j], -(1<<24));
- z[j] = -(1<<24);
- }
- }
-#endif
- mdct_fn(mdct, z, z + 64, sizeof(INTFLOAT));
- sbrdsp->qmf_post_shuffle(W[buf_idx][i], z);
- x += 32;
- }
-}
-#endif
-
-/**
- * Synthesis QMF Bank (14496-3 sp04 p206) and Downsampled Synthesis QMF Bank
- * (14496-3 sp04 p206)
- */
-#ifndef sbr_qmf_synthesis
-static void sbr_qmf_synthesis(AVTXContext *mdct, av_tx_fn mdct_fn,
-#if USE_FIXED
- SBRDSPContext *sbrdsp, AVFixedDSPContext *dsp,
-#else
- SBRDSPContext *sbrdsp, AVFloatDSPContext *dsp,
-#endif /* USE_FIXED */
- INTFLOAT *out, INTFLOAT X[2][38][64],
- INTFLOAT mdct_buf[2][64],
- INTFLOAT *v0, int *v_off, const unsigned int div)
-{
- int i, n;
- const INTFLOAT *sbr_qmf_window = div ? sbr_qmf_window_ds : sbr_qmf_window_us;
- const int step = 128 >> div;
- INTFLOAT *v;
- for (i = 0; i < 32; i++) {
- if (*v_off < step) {
- int saved_samples = (1280 - 128) >> div;
- memcpy(&v0[SBR_SYNTHESIS_BUF_SIZE - saved_samples], v0, saved_samples * sizeof(INTFLOAT));
- *v_off = SBR_SYNTHESIS_BUF_SIZE - saved_samples - step;
- } else {
- *v_off -= step;
- }
- v = v0 + *v_off;
- if (div) {
- for (n = 0; n < 32; n++) {
- X[0][i][ n] = -X[0][i][n];
- X[0][i][32+n] = X[1][i][31-n];
- }
- mdct_fn(mdct, mdct_buf[0], X[0][i], sizeof(INTFLOAT));
- sbrdsp->qmf_deint_neg(v, mdct_buf[0]);
- } else {
- sbrdsp->neg_odd_64(X[1][i]);
- mdct_fn(mdct, mdct_buf[0], X[0][i], sizeof(INTFLOAT));
- mdct_fn(mdct, mdct_buf[1], X[1][i], sizeof(INTFLOAT));
- sbrdsp->qmf_deint_bfly(v, mdct_buf[1], mdct_buf[0]);
- }
- dsp->vector_fmul (out, v , sbr_qmf_window , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 192 >> div), sbr_qmf_window + ( 64 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 256 >> div), sbr_qmf_window + (128 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 448 >> div), sbr_qmf_window + (192 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 512 >> div), sbr_qmf_window + (256 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 704 >> div), sbr_qmf_window + (320 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 768 >> div), sbr_qmf_window + (384 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + ( 960 >> div), sbr_qmf_window + (448 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + (1024 >> div), sbr_qmf_window + (512 >> div), out , 64 >> div);
- dsp->vector_fmul_add(out, v + (1216 >> div), sbr_qmf_window + (576 >> div), out , 64 >> div);
- out += 64 >> div;
- }
-}
-#endif
-
-/// Generate the subband filtered lowband
-static int sbr_lf_gen(AACContext *ac, SpectralBandReplication *sbr,
- INTFLOAT X_low[32][40][2], const INTFLOAT W[2][32][32][2],
- int buf_idx)
-{
- int i, k;
- const int t_HFGen = 8;
- const int i_f = 32;
- memset(X_low, 0, 32*sizeof(*X_low));
- for (k = 0; k < sbr->kx[1]; k++) {
- for (i = t_HFGen; i < i_f + t_HFGen; i++) {
- X_low[k][i][0] = W[buf_idx][i - t_HFGen][k][0];
- X_low[k][i][1] = W[buf_idx][i - t_HFGen][k][1];
- }
- }
- buf_idx = 1-buf_idx;
- for (k = 0; k < sbr->kx[0]; k++) {
- for (i = 0; i < t_HFGen; i++) {
- X_low[k][i][0] = W[buf_idx][i + i_f - t_HFGen][k][0];
- X_low[k][i][1] = W[buf_idx][i + i_f - t_HFGen][k][1];
- }
- }
- return 0;
-}
-
-/// High Frequency Generator (14496-3 sp04 p215)
-static int sbr_hf_gen(AACContext *ac, SpectralBandReplication *sbr,
- INTFLOAT X_high[64][40][2], const INTFLOAT X_low[32][40][2],
- const INTFLOAT (*alpha0)[2], const INTFLOAT (*alpha1)[2],
- const INTFLOAT bw_array[5], const uint8_t *t_env,
- int bs_num_env)
-{
- int j, x;
- int g = 0;
- int k = sbr->kx[1];
- for (j = 0; j < sbr->num_patches; j++) {
- for (x = 0; x < sbr->patch_num_subbands[j]; x++, k++) {
- const int p = sbr->patch_start_subband[j] + x;
- while (g <= sbr->n_q && k >= sbr->f_tablenoise[g])
- g++;
- g--;
-
- if (g < 0) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "ERROR : no subband found for frequency %d\n", k);
- return -1;
- }
-
- sbr->dsp.hf_gen(X_high[k] + ENVELOPE_ADJUSTMENT_OFFSET,
- X_low[p] + ENVELOPE_ADJUSTMENT_OFFSET,
- alpha0[p], alpha1[p], bw_array[g],
- 2 * t_env[0], 2 * t_env[bs_num_env]);
- }
- }
- if (k < sbr->m[1] + sbr->kx[1])
- memset(X_high + k, 0, (sbr->m[1] + sbr->kx[1] - k) * sizeof(*X_high));
-
- return 0;
-}
-
-/// Generate the subband filtered lowband
-static int sbr_x_gen(SpectralBandReplication *sbr, INTFLOAT X[2][38][64],
- const INTFLOAT Y0[38][64][2], const INTFLOAT Y1[38][64][2],
- const INTFLOAT X_low[32][40][2], int ch)
-{
- int k, i;
- const int i_f = 32;
- const int i_Temp = FFMAX(2*sbr->data[ch].t_env_num_env_old - i_f, 0);
- memset(X, 0, 2*sizeof(*X));
- for (k = 0; k < sbr->kx[0]; k++) {
- for (i = 0; i < i_Temp; i++) {
- X[0][i][k] = X_low[k][i + ENVELOPE_ADJUSTMENT_OFFSET][0];
- X[1][i][k] = X_low[k][i + ENVELOPE_ADJUSTMENT_OFFSET][1];
- }
- }
- for (; k < sbr->kx[0] + sbr->m[0]; k++) {
- for (i = 0; i < i_Temp; i++) {
- X[0][i][k] = Y0[i + i_f][k][0];
- X[1][i][k] = Y0[i + i_f][k][1];
- }
- }
-
- for (k = 0; k < sbr->kx[1]; k++) {
- for (i = i_Temp; i < 38; i++) {
- X[0][i][k] = X_low[k][i + ENVELOPE_ADJUSTMENT_OFFSET][0];
- X[1][i][k] = X_low[k][i + ENVELOPE_ADJUSTMENT_OFFSET][1];
- }
- }
- for (; k < sbr->kx[1] + sbr->m[1]; k++) {
- for (i = i_Temp; i < i_f; i++) {
- X[0][i][k] = Y1[i][k][0];
- X[1][i][k] = Y1[i][k][1];
- }
- }
- return 0;
-}
-
-/** High Frequency Adjustment (14496-3 sp04 p217) and Mapping
- * (14496-3 sp04 p217)
- */
-static int sbr_mapping(AACContext *ac, SpectralBandReplication *sbr,
- SBRData *ch_data, int e_a[2])
-{
- int e, i, m;
-
- memset(ch_data->s_indexmapped[1], 0, 7*sizeof(ch_data->s_indexmapped[1]));
- for (e = 0; e < ch_data->bs_num_env; e++) {
- const unsigned int ilim = sbr->n[ch_data->bs_freq_res[e + 1]];
- uint16_t *table = ch_data->bs_freq_res[e + 1] ? sbr->f_tablehigh : sbr->f_tablelow;
- int k;
-
- if (sbr->kx[1] != table[0]) {
- av_log(ac->avctx, AV_LOG_ERROR, "kx != f_table{high,low}[0]. "
- "Derived frequency tables were not regenerated.\n");
- sbr_turnoff(sbr);
- return AVERROR_BUG;
- }
- for (i = 0; i < ilim; i++)
- for (m = table[i]; m < table[i + 1]; m++)
- sbr->e_origmapped[e][m - sbr->kx[1]] = ch_data->env_facs[e+1][i];
-
- // ch_data->bs_num_noise > 1 => 2 noise floors
- k = (ch_data->bs_num_noise > 1) && (ch_data->t_env[e] >= ch_data->t_q[1]);
- for (i = 0; i < sbr->n_q; i++)
- for (m = sbr->f_tablenoise[i]; m < sbr->f_tablenoise[i + 1]; m++)
- sbr->q_mapped[e][m - sbr->kx[1]] = ch_data->noise_facs[k+1][i];
-
- for (i = 0; i < sbr->n[1]; i++) {
- if (ch_data->bs_add_harmonic_flag) {
- const unsigned int m_midpoint =
- (sbr->f_tablehigh[i] + sbr->f_tablehigh[i + 1]) >> 1;
-
- ch_data->s_indexmapped[e + 1][m_midpoint - sbr->kx[1]] = ch_data->bs_add_harmonic[i] *
- (e >= e_a[1] || (ch_data->s_indexmapped[0][m_midpoint - sbr->kx[1]] == 1));
- }
- }
-
- for (i = 0; i < ilim; i++) {
- int additional_sinusoid_present = 0;
- for (m = table[i]; m < table[i + 1]; m++) {
- if (ch_data->s_indexmapped[e + 1][m - sbr->kx[1]]) {
- additional_sinusoid_present = 1;
- break;
- }
- }
- memset(&sbr->s_mapped[e][table[i] - sbr->kx[1]], additional_sinusoid_present,
- (table[i + 1] - table[i]) * sizeof(sbr->s_mapped[e][0]));
- }
- }
-
- memcpy(ch_data->s_indexmapped[0], ch_data->s_indexmapped[ch_data->bs_num_env], sizeof(ch_data->s_indexmapped[0]));
- return 0;
-}
-
-/// Estimation of current envelope (14496-3 sp04 p218)
-static void sbr_env_estimate(AAC_FLOAT (*e_curr)[48], INTFLOAT X_high[64][40][2],
- SpectralBandReplication *sbr, SBRData *ch_data)
-{
- int e, m;
- int kx1 = sbr->kx[1];
-
- if (sbr->bs_interpol_freq) {
- for (e = 0; e < ch_data->bs_num_env; e++) {
-#if USE_FIXED
- const SoftFloat recip_env_size = av_int2sf(0x20000000 / (ch_data->t_env[e + 1] - ch_data->t_env[e]), 30);
-#else
- const float recip_env_size = 0.5f / (ch_data->t_env[e + 1] - ch_data->t_env[e]);
-#endif /* USE_FIXED */
- int ilb = ch_data->t_env[e] * 2 + ENVELOPE_ADJUSTMENT_OFFSET;
- int iub = ch_data->t_env[e + 1] * 2 + ENVELOPE_ADJUSTMENT_OFFSET;
-
- for (m = 0; m < sbr->m[1]; m++) {
- AAC_FLOAT sum = sbr->dsp.sum_square(X_high[m+kx1] + ilb, iub - ilb);
-#if USE_FIXED
- e_curr[e][m] = av_mul_sf(sum, recip_env_size);
-#else
- e_curr[e][m] = sum * recip_env_size;
-#endif /* USE_FIXED */
- }
- }
- } else {
- int k, p;
-
- for (e = 0; e < ch_data->bs_num_env; e++) {
- const int env_size = 2 * (ch_data->t_env[e + 1] - ch_data->t_env[e]);
- int ilb = ch_data->t_env[e] * 2 + ENVELOPE_ADJUSTMENT_OFFSET;
- int iub = ch_data->t_env[e + 1] * 2 + ENVELOPE_ADJUSTMENT_OFFSET;
- const uint16_t *table = ch_data->bs_freq_res[e + 1] ? sbr->f_tablehigh : sbr->f_tablelow;
-
- for (p = 0; p < sbr->n[ch_data->bs_freq_res[e + 1]]; p++) {
-#if USE_FIXED
- SoftFloat sum = FLOAT_0;
- const SoftFloat den = av_int2sf(0x20000000 / (env_size * (table[p + 1] - table[p])), 29);
- for (k = table[p]; k < table[p + 1]; k++) {
- sum = av_add_sf(sum, sbr->dsp.sum_square(X_high[k] + ilb, iub - ilb));
- }
- sum = av_mul_sf(sum, den);
-#else
- float sum = 0.0f;
- const int den = env_size * (table[p + 1] - table[p]);
-
- for (k = table[p]; k < table[p + 1]; k++) {
- sum += sbr->dsp.sum_square(X_high[k] + ilb, iub - ilb);
- }
- sum /= den;
-#endif /* USE_FIXED */
- for (k = table[p]; k < table[p + 1]; k++) {
- e_curr[e][k - kx1] = sum;
- }
- }
- }
- }
-}
-
-void AAC_RENAME(ff_sbr_apply)(AACContext *ac, SpectralBandReplication *sbr, int id_aac,
- INTFLOAT* L, INTFLOAT* R)
-{
- int downsampled = ac->oc[1].m4ac.ext_sample_rate < sbr->sample_rate;
- int ch;
- int nch = (id_aac == TYPE_CPE) ? 2 : 1;
- int err;
-
- if (id_aac != sbr->id_aac) {
- av_log(ac->avctx, id_aac == TYPE_LFE ? AV_LOG_VERBOSE : AV_LOG_WARNING,
- "element type mismatch %d != %d\n", id_aac, sbr->id_aac);
- sbr_turnoff(sbr);
- }
-
- if (sbr->start && !sbr->ready_for_dequant) {
- av_log(ac->avctx, AV_LOG_ERROR,
- "No quantized data read for sbr_dequant.\n");
- sbr_turnoff(sbr);
- }
-
- if (!sbr->kx_and_m_pushed) {
- sbr->kx[0] = sbr->kx[1];
- sbr->m[0] = sbr->m[1];
- } else {
- sbr->kx_and_m_pushed = 0;
- }
-
- if (sbr->start) {
- sbr_dequant(sbr, id_aac);
- sbr->ready_for_dequant = 0;
- }
- for (ch = 0; ch < nch; ch++) {
- /* decode channel */
- sbr_qmf_analysis(ac->fdsp, sbr->mdct_ana, sbr->mdct_ana_fn, &sbr->dsp,
- ch ? R : L, sbr->data[ch].analysis_filterbank_samples,
- (INTFLOAT*)sbr->qmf_filter_scratch,
- sbr->data[ch].W, sbr->data[ch].Ypos);
- sbr->c.sbr_lf_gen(ac, sbr, sbr->X_low,
- (const INTFLOAT (*)[32][32][2]) sbr->data[ch].W,
- sbr->data[ch].Ypos);
- sbr->data[ch].Ypos ^= 1;
- if (sbr->start) {
- sbr->c.sbr_hf_inverse_filter(&sbr->dsp, sbr->alpha0, sbr->alpha1,
- (const INTFLOAT (*)[40][2]) sbr->X_low, sbr->k[0]);
- sbr_chirp(sbr, &sbr->data[ch]);
- av_assert0(sbr->data[ch].bs_num_env > 0);
- sbr_hf_gen(ac, sbr, sbr->X_high,
- (const INTFLOAT (*)[40][2]) sbr->X_low,
- (const INTFLOAT (*)[2]) sbr->alpha0,
- (const INTFLOAT (*)[2]) sbr->alpha1,
- sbr->data[ch].bw_array, sbr->data[ch].t_env,
- sbr->data[ch].bs_num_env);
-
- // hf_adj
- err = sbr_mapping(ac, sbr, &sbr->data[ch], sbr->data[ch].e_a);
- if (!err) {
- sbr_env_estimate(sbr->e_curr, sbr->X_high, sbr, &sbr->data[ch]);
- sbr_gain_calc(ac, sbr, &sbr->data[ch], sbr->data[ch].e_a);
- sbr->c.sbr_hf_assemble(sbr->data[ch].Y[sbr->data[ch].Ypos],
- (const INTFLOAT (*)[40][2]) sbr->X_high,
- sbr, &sbr->data[ch],
- sbr->data[ch].e_a);
- }
- }
-
- /* synthesis */
- sbr->c.sbr_x_gen(sbr, sbr->X[ch],
- (const INTFLOAT (*)[64][2]) sbr->data[ch].Y[1-sbr->data[ch].Ypos],
- (const INTFLOAT (*)[64][2]) sbr->data[ch].Y[ sbr->data[ch].Ypos],
- (const INTFLOAT (*)[40][2]) sbr->X_low, ch);
- }
-
- if (ac->oc[1].m4ac.ps == 1) {
- if (sbr->ps.common.start) {
- AAC_RENAME(ff_ps_apply)(ac->avctx, &sbr->ps, sbr->X[0], sbr->X[1], sbr->kx[1] + sbr->m[1]);
- } else {
- memcpy(sbr->X[1], sbr->X[0], sizeof(sbr->X[0]));
- }
- nch = 2;
- }
-
- sbr_qmf_synthesis(sbr->mdct, sbr->mdct_fn, &sbr->dsp, ac->fdsp,
- L, sbr->X[0], sbr->qmf_filter_scratch,
- sbr->data[0].synthesis_filterbank_samples,
- &sbr->data[0].synthesis_filterbank_samples_offset,
- downsampled);
- if (nch == 2)
- sbr_qmf_synthesis(sbr->mdct, sbr->mdct_fn, &sbr->dsp, ac->fdsp,
- R, sbr->X[1], sbr->qmf_filter_scratch,
- sbr->data[1].synthesis_filterbank_samples,
- &sbr->data[1].synthesis_filterbank_samples_offset,
- downsampled);
-}
-
-static void aacsbr_func_ptr_init(AACSBRContext *c)
-{
- c->sbr_lf_gen = sbr_lf_gen;
- c->sbr_hf_assemble = sbr_hf_assemble;
- c->sbr_x_gen = sbr_x_gen;
- c->sbr_hf_inverse_filter = sbr_hf_inverse_filter;
-
-#if !USE_FIXED
-#if ARCH_MIPS
- ff_aacsbr_func_ptr_init_mips(c);
-#endif
-#endif
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec.h
deleted file mode 100644
index 3b1995bcfefae2e984b64c3ea621e0e29b9f1ab0..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/codec.h
+++ /dev/null
@@ -1,375 +0,0 @@
-/*
- * AVCodec public API
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_CODEC_H
-#define AVCODEC_CODEC_H
-
-#include <stdint.h>
-
-#include "libavutil/avutil.h"
-#include "libavutil/hwcontext.h"
-#include "libavutil/log.h"
-#include "libavutil/pixfmt.h"
-#include "libavutil/rational.h"
-#include "libavutil/samplefmt.h"
-
-#include "libavcodec/codec_id.h"
-#include "libavcodec/version_major.h"
-
-/**
- * @addtogroup lavc_core
- * @{
- */
-
-/**
- * Decoder can use draw_horiz_band callback.
- */
-#define AV_CODEC_CAP_DRAW_HORIZ_BAND (1 << 0)
-/**
- * Codec uses get_buffer() or get_encode_buffer() for allocating buffers and
- * supports custom allocators.
- * If not set, it might not use get_buffer() or get_encode_buffer() at all, or
- * use operations that assume the buffer was allocated by
- * avcodec_default_get_buffer2 or avcodec_default_get_encode_buffer.
- */
-#define AV_CODEC_CAP_DR1 (1 << 1)
-/**
- * Encoder or decoder requires flushing with NULL input at the end in order to
- * give the complete and correct output.
- *
- * NOTE: If this flag is not set, the codec is guaranteed to never be fed
- * with NULL data. The user can still send NULL data to the public encode
- * or decode function, but libavcodec will not pass it along to the codec
- * unless this flag is set.
- *
- * Decoders:
- * The decoder has a non-zero delay and needs to be fed with avpkt->data=NULL,
- * avpkt->size=0 at the end to get the delayed data until the decoder no longer
- * returns frames.
- *
- * Encoders:
- * The encoder needs to be fed with NULL data at the end of encoding until the
- * encoder no longer returns data.
- *
- * NOTE: For encoders implementing the AVCodec.encode2() function, setting this
- * flag also means that the encoder must set the pts and duration for
- * each output packet. If this flag is not set, the pts and duration will
- * be determined by libavcodec from the input frame.
- */
-#define AV_CODEC_CAP_DELAY (1 << 5)
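(Illustrative sketch, not part of codec.h.) The drain behaviour documented for AV_CODEC_CAP_DELAY is normally exercised through the public send/receive API; a minimal sketch, assuming an already-opened AVCodecContext and an allocated AVFrame, with error handling trimmed:

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

static void drain_decoder(AVCodecContext *ctx, AVFrame *frame)
{
    avcodec_send_packet(ctx, NULL);                 /* enter draining mode */
    while (avcodec_receive_frame(ctx, frame) == 0) {
        /* ... consume the delayed frame ... */
        av_frame_unref(frame);
    }
    /* the loop ends with AVERROR_EOF once all delayed output is flushed */
}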
-/**
- * Codec can be fed a final frame with a smaller size.
- * This can be used to prevent truncation of the last audio samples.
- */
-#define AV_CODEC_CAP_SMALL_LAST_FRAME (1 << 6)
-
-/**
- * Codec can output multiple frames per AVPacket
- * Normally demuxers return one frame at a time, demuxers which do not do
- * are connected to a parser to split what they return into proper frames.
- * This flag is reserved to the very rare category of codecs which have a
- * bitstream that cannot be split into frames without timeconsuming
- * operations like full decoding. Demuxers carrying such bitstreams thus
- * may return multiple frames in a packet. This has many disadvantages like
- * prohibiting stream copy in many cases thus it should only be considered
- * as a last resort.
- */
-#define AV_CODEC_CAP_SUBFRAMES (1 << 8)
-/**
- * Codec is experimental and is thus avoided in favor of non-experimental
- * encoders.
- */
-#define AV_CODEC_CAP_EXPERIMENTAL (1 << 9)
-/**
- * Codec should fill in channel configuration and samplerate instead of container
- */
-#define AV_CODEC_CAP_CHANNEL_CONF (1 << 10)
-/**
- * Codec supports frame-level multithreading.
- */
-#define AV_CODEC_CAP_FRAME_THREADS (1 << 12)
-/**
- * Codec supports slice-based (or partition-based) multithreading.
- */
-#define AV_CODEC_CAP_SLICE_THREADS (1 << 13)
-/**
- * Codec supports changed parameters at any point.
- */
-#define AV_CODEC_CAP_PARAM_CHANGE (1 << 14)
-/**
- * Codec supports multithreading through a method other than slice- or
- * frame-level multithreading. Typically this marks wrappers around
- * multithreading-capable external libraries.
- */
-#define AV_CODEC_CAP_OTHER_THREADS (1 << 15)
-/**
- * Audio encoder supports receiving a different number of samples in each call.
- */
-#define AV_CODEC_CAP_VARIABLE_FRAME_SIZE (1 << 16)
-/**
- * Decoder is not a preferred choice for probing.
- * This indicates that the decoder is not a good choice for probing.
- * It could for example be an expensive to spin up hardware decoder,
- * or it could simply not provide a lot of useful information about
- * the stream.
- * A decoder marked with this flag should only be used as last resort
- * choice for probing.
- */
-#define AV_CODEC_CAP_AVOID_PROBING (1 << 17)
-
-/**
- * Codec is backed by a hardware implementation. Typically used to
- * identify a non-hwaccel hardware decoder. For information about hwaccels, use
- * avcodec_get_hw_config() instead.
- */
-#define AV_CODEC_CAP_HARDWARE (1 << 18)
-
-/**
- * Codec is potentially backed by a hardware implementation, but not
- * necessarily. This is used instead of AV_CODEC_CAP_HARDWARE, if the
- * implementation provides some sort of internal fallback.
- */
-#define AV_CODEC_CAP_HYBRID (1 << 19)
-
-/**
- * This encoder can reorder user opaque values from input AVFrames and return
- * them with corresponding output packets.
- * @see AV_CODEC_FLAG_COPY_OPAQUE
- */
-#define AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE (1 << 20)
-
-/**
- * This encoder can be flushed using avcodec_flush_buffers(). If this flag is
- * not set, the encoder must be closed and reopened to ensure that no frames
- * remain pending.
- */
-#define AV_CODEC_CAP_ENCODER_FLUSH (1 << 21)
-
-/**
- * The encoder is able to output reconstructed frame data, i.e. raw frames that
- * would be produced by decoding the encoded bitstream.
- *
- * Reconstructed frame output is enabled by the AV_CODEC_FLAG_RECON_FRAME flag.
- */
-#define AV_CODEC_CAP_ENCODER_RECON_FRAME (1 << 22)
-
-/**
- * AVProfile.
- */
-typedef struct AVProfile {
- int profile;
- const char *name; ///< short name for the profile
-} AVProfile;
-
-/**
- * AVCodec.
- */
-typedef struct AVCodec {
- /**
- * Name of the codec implementation.
- * The name is globally unique among encoders and among decoders (but an
- * encoder and a decoder can share the same name).
- * This is the primary way to find a codec from the user perspective.
- */
- const char *name;
- /**
- * Descriptive name for the codec, meant to be more human readable than name.
- * You should use the NULL_IF_CONFIG_SMALL() macro to define it.
- */
- const char *long_name;
- enum AVMediaType type;
- enum AVCodecID id;
- /**
- * Codec capabilities.
- * see AV_CODEC_CAP_*
- */
- int capabilities;
- uint8_t max_lowres; ///< maximum value for lowres supported by the decoder
- const AVRational *supported_framerates; ///< array of supported framerates, or NULL if any, array is terminated by {0,0}
- const enum AVPixelFormat *pix_fmts; ///< array of supported pixel formats, or NULL if unknown, array is terminated by -1
- const int *supported_samplerates; ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0
- const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1
-#if FF_API_OLD_CHANNEL_LAYOUT
- /**
- * @deprecated use ch_layouts instead
- */
- attribute_deprecated
-    const uint64_t *channel_layouts; ///< array of supported channel layouts, or NULL if unknown; the array is terminated by 0
-#endif
- const AVClass *priv_class; ///< AVClass for the private context
- const AVProfile *profiles; ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}
-
- /**
- * Group name of the codec implementation.
- * This is a short symbolic name of the wrapper backing this codec. A
- * wrapper uses some kind of external implementation for the codec, such
- * as an external library, or a codec implementation provided by the OS or
- * the hardware.
- * If this field is NULL, this is a builtin, libavcodec native codec.
- * If non-NULL, this will be the suffix in AVCodec.name in most cases
- * (usually AVCodec.name will be of the form "<codec_name>_<wrapper_name>").
- */
- const char *wrapper_name;
-
- /**
- * Array of supported channel layouts, terminated with a zeroed layout.
- */
- const AVChannelLayout *ch_layouts;
-} AVCodec;
-
-/**
- * Iterate over all registered codecs.
- *
- * @param opaque a pointer where libavcodec will store the iteration state. Must
- * point to NULL to start the iteration.
- *
- * @return the next registered codec or NULL when the iteration is
- * finished
- */
-const AVCodec *av_codec_iterate(void **opaque);
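/*
 * Editor's note: a small illustrative sketch of the intended iteration
 * pattern; it is not part of the original header and assumes <stdio.h> is
 * available in the caller.
 */
static void list_registered_codecs(void)
{
    void *iter = NULL;
    const AVCodec *codec;
    /* opaque must start as NULL; libavcodec stores the iteration state in it. */
    while ((codec = av_codec_iterate(&iter)))
        printf("%s: %s\n",
               av_codec_is_encoder(codec) ? "encoder" : "decoder",
               codec->name);
}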
-
-/**
- * Find a registered decoder with a matching codec ID.
- *
- * @param id AVCodecID of the requested decoder
- * @return A decoder if one was found, NULL otherwise.
- */
-const AVCodec *avcodec_find_decoder(enum AVCodecID id);
-
-/**
- * Find a registered decoder with the specified name.
- *
- * @param name name of the requested decoder
- * @return A decoder if one was found, NULL otherwise.
- */
-const AVCodec *avcodec_find_decoder_by_name(const char *name);
-
-/**
- * Find a registered encoder with a matching codec ID.
- *
- * @param id AVCodecID of the requested encoder
- * @return An encoder if one was found, NULL otherwise.
- */
-const AVCodec *avcodec_find_encoder(enum AVCodecID id);
-
-/**
- * Find a registered encoder with the specified name.
- *
- * @param name name of the requested encoder
- * @return An encoder if one was found, NULL otherwise.
- */
-const AVCodec *avcodec_find_encoder_by_name(const char *name);
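/*
 * Editor's note: an illustrative sketch of the lookup-then-open pattern built
 * on these finders. It relies on avcodec_alloc_context3(), avcodec_open2()
 * and avcodec_free_context() from avcodec.h; "h264" is an arbitrary example
 * name and error handling is deliberately minimal.
 */
static AVCodecContext *open_decoder_by_name(const char *name)
{
    const AVCodec *codec = avcodec_find_decoder_by_name(name); /* e.g. "h264" */
    if (!codec)
        return NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (ctx && avcodec_open2(ctx, codec, NULL) < 0)
        avcodec_free_context(&ctx);
    return ctx;
}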
-/**
- * @return a non-zero number if codec is an encoder, zero otherwise
- */
-int av_codec_is_encoder(const AVCodec *codec);
-
-/**
- * @return a non-zero number if codec is a decoder, zero otherwise
- */
-int av_codec_is_decoder(const AVCodec *codec);
-
-/**
- * Return a name for the specified profile, if available.
- *
- * @param codec the codec that is searched for the given profile
- * @param profile the profile value for which a name is requested
- * @return A name for the profile if found, NULL otherwise.
- */
-const char *av_get_profile_name(const AVCodec *codec, int profile);
-
-enum {
- /**
- * The codec supports this format via the hw_device_ctx interface.
- *
- * When selecting this format, AVCodecContext.hw_device_ctx should
- * have been set to a device of the specified type before calling
- * avcodec_open2().
- */
- AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX = 0x01,
- /**
- * The codec supports this format via the hw_frames_ctx interface.
- *
- * When selecting this format for a decoder,
- * AVCodecContext.hw_frames_ctx should be set to a suitable frames
- * context inside the get_format() callback. The frames context
- * must have been created on a device of the specified type.
- *
- * When selecting this format for an encoder,
- * AVCodecContext.hw_frames_ctx should be set to the context which
- * will be used for the input frames before calling avcodec_open2().
- */
- AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX = 0x02,
- /**
- * The codec supports this format by some internal method.
- *
- * This format can be selected without any additional configuration -
- * no device or frames context is required.
- */
- AV_CODEC_HW_CONFIG_METHOD_INTERNAL = 0x04,
- /**
- * The codec supports this format by some ad-hoc method.
- *
- * Additional settings and/or function calls are required. See the
- * codec-specific documentation for details. (Methods requiring
- * this sort of configuration are deprecated and others should be
- * used in preference.)
- */
- AV_CODEC_HW_CONFIG_METHOD_AD_HOC = 0x08,
-};
-
-typedef struct AVCodecHWConfig {
- /**
- * For decoders, a hardware pixel format which that decoder may be
- * able to decode to if suitable hardware is available.
- *
- * For encoders, a pixel format which the encoder may be able to
- * accept. If set to AV_PIX_FMT_NONE, this applies to all pixel
- * formats supported by the codec.
- */
- enum AVPixelFormat pix_fmt;
- /**
- * Bit set of AV_CODEC_HW_CONFIG_METHOD_* flags, describing the possible
- * setup methods which can be used with this configuration.
- */
- int methods;
- /**
- * The device type associated with the configuration.
- *
- * Must be set for AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX and
- * AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX, otherwise unused.
- */
- enum AVHWDeviceType device_type;
-} AVCodecHWConfig;
-
-/**
- * Retrieve supported hardware configurations for a codec.
- *
- * Values of index from zero to some maximum return the indexed configuration
- * descriptor; all other values return NULL. If the codec does not support
- * any hardware configurations then it will always return NULL.
- */
-const AVCodecHWConfig *avcodec_get_hw_config(const AVCodec *codec, int index);
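/*
 * Editor's note: a brief enumeration sketch, not part of the original header.
 * av_hwdevice_get_type_name() comes from libavutil/hwcontext.h and <stdio.h>
 * is assumed to be available; this is illustrative only.
 */
static void print_hw_configs(const AVCodec *codec)
{
    for (int i = 0;; i++) {
        const AVCodecHWConfig *config = avcodec_get_hw_config(codec, i);
        if (!config)
            break; /* no more configurations */
        if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
            printf("%s supports hw device type %s\n", codec->name,
                   av_hwdevice_get_type_name(config->device_type));
    }
}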
-
-/**
- * @}
- */
-
-#endif /* AVCODEC_CODEC_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Summertime Saga APK for Windows The Best Way to Play the Adult Adventure Game.md b/spaces/congsaPfin/Manga-OCR/logs/Summertime Saga APK for Windows The Best Way to Play the Adult Adventure Game.md
deleted file mode 100644
index 2bbe8663ed04c2625ec7b514228a041afe055a4c..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Summertime Saga APK for Windows The Best Way to Play the Adult Adventure Game.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-
Summertime Saga APK Windows: How to Download and Play This Popular Dating Sim on Your PC
-
If you are looking for a fun and engaging dating simulation game with a twist, you might want to check out Summertime Saga. This game is not your typical romance story. It is full of humor, mystery, drama, and adult content. In this article, we will show you how to download and play Summertime Saga APK Windows on your PC using an emulator.
-
What is Summertime Saga?
-
Summertime Saga is a point-and-click graphical adventure game developed by Kompas Productions. It is inspired by classics of this genre like Leisure Suit Larry and Monkey Island, but with a modern setting and graphics. The game is set in a small suburban town where you play as a young man who is trying to cope with the sudden death of his father. Along the way, you will meet and interact with various characters, each with their own personality, backstory, and secrets. You will also have to deal with school, work, money, hobbies, and romance.
The game features over 65 characters to meet and interact with, over 30 locations to explore, over 20 mini-games to play, and over 70 hours of gameplay. The game also has a lot of adult content, including nudity, sexual scenes, fetishes, violence, drugs, and profanity. The game is rated 18+ for mature audiences only.
-
Why Play Summertime Saga on Windows?
-
Summertime Saga is available for Android devices, but you might want to play it on your Windows PC for several reasons. Here are some of them:
-
-
Playing on a PC gives you a better gaming experience. You can enjoy the game's high-quality graphics, sound, and animation on a larger screen and with better performance. You can also use a keyboard and mouse to control the game, which might be more comfortable and convenient than tapping on a small touchscreen.
-
Playing on a PC gives you more options and flexibility. You can customize the game's settings, such as the resolution, the language, the sound volume, and the text speed. You can also save and load your progress at any point, and even create multiple save files to explore different paths and outcomes. You can also access the game's debug menu, which allows you to cheat and unlock everything in the game.
-
Playing on a PC gives you more security and privacy. You don't have to worry about losing your data or your device getting damaged or stolen. You can also play the game discreetly without anyone seeing what you are doing on your phone.
-
-
However, Summertime Saga is not officially available for Windows. The game is only released as an APK file, which is an Android application package. To run an APK file on your PC, you need to use an emulator.
-
How to Download Summertime Saga APK Windows?
-
An emulator is software that mimics the Android operating system on your PC. It lets you run Android apps and games on your Windows computer as if they were native applications. There are many emulators available online, but one of the most popular and reliable is BlueStacks.
-
BlueStacks is a free and easy-to-use emulator that has millions of users worldwide. It has a user-friendly interface, a fast performance, and a wide range of features. It also supports Summertime Saga APK Windows and other Android games and apps.
-
-
To download and play Summertime Saga APK Windows on your PC using BlueStacks, follow these steps:
-
-
Download and install BlueStacks on your PC from its official website: https://www.bluestacks.com/. The installation process is simple and straightforward. Just follow the instructions on the screen.
-
Download Summertime Saga APK Windows from its official website: https://summertimesaga.com/download. The latest version of the game is 0.20.11 as of June 2023. The file size is about 1 GB.
-
Launch BlueStacks on your PC and sign in with your Google account. If you don't have one, you can create one for free.
-
Drag and drop the Summertime Saga APK file onto the BlueStacks home screen. Alternatively, you can click on the "Install APK" button at the bottom right corner of the screen and select the Summertime Saga APK file from your computer.
-
Wait for BlueStacks to install Summertime Saga APK Windows on your PC. This might take a few minutes depending on your internet speed and your PC's specifications.
-
Once the installation is complete, you will see the Summertime Saga icon on the BlueStacks home screen. Click on it to launch and play Summertime Saga APK Windows on your PC.
-
-
How to Play Summertime Saga on Windows?
-
Summertime Saga is a point-and-click graphical adventure game that follows a branching storyline with multiple endings. You can choose how to interact with different characters and situations, and shape your own destiny in the game.
-
The game has a simple and intuitive interface that consists of three main elements:
-
-
The main screen, where you can see the graphics, the dialogue, and the choices.
-
The menu bar, where you can access the settings, the save/load function, the skip function, the inventory, the map, the stats, and the phone.
-
The mouse cursor, which changes shape depending on what you can do or interact with in the game.
-
-
To play Summertime Saga on Windows using BlueStacks, you can use either your mouse or your keyboard to control the game. Here are some basic controls:
-
-
To move around in the game world, click on the arrows at the edges of the screen or use the arrow keys on your keyboard.
-
To interact with objects or characters in the game world, click on them or press the spacebar or enter key on your keyboard.
-
To advance or skip dialogue in the game, click anywhere on the screen or press any key on your keyboard.
-
To make choices in the game, click on the options that appear on the screen or use the number keys on your keyboard.
-
To access the menu bar, move your mouse cursor to the top of the screen or press the escape key on your keyboard.
-
To pause or resume the game, press the P key on your keyboard.
-
-
Summertime Saga is a game that requires a lot of exploration, experimentation, and patience. You will have to talk to different characters, find clues, solve puzzles, complete tasks, and make decisions that will affect your relationships and the outcome of the game. You will also have to manage your time, money, energy, and stats in the game.
-
Here are some tips and tricks on how to play Summertime Saga on Windows:
-
-
Save your game often. The game has a lot of branching paths and different endings, so you might want to save your progress before making important choices or doing risky actions. You can save up to 10 files in the game.
-
Use the skip function. The game has a lot of dialogue and scenes that you might want to skip if you have already seen them before or if you are not interested in them. You can use the skip function to fast-forward through them. You can also adjust the skip settings in the menu bar.
-
Check your phone. Your phone is an important tool in the game. It allows you to communicate with other characters, check your messages, take photos, browse the internet, and play mini-games. You can access your phone by clicking on its icon in the menu bar or pressing the F1 key on your keyboard.
-
Use the map. The map is another useful tool in the game. It allows you to travel to different locations in the game world. You can access the map by clicking on its icon in the menu bar or pressing the M key on your keyboard. You can also see which characters are available at each location by hovering over them with your mouse cursor.
-
Upgrade your stats. Your stats are your attributes that affect your performance and interactions in the game. They include intelligence, charisma, strength, dexterity, and luck. You can upgrade your stats by doing various activities in the game, such as studying, working out, playing games, or reading books. You can check your stats by clicking on their icons in the menu bar or pressing the S key on your keyboard.
-
-
Conclusion
-
Summertime Saga is a fun and engaging dating simulation game that offers a lot of content and variety for players of all tastes and preferences. It is a game that you can play for hours and hours without getting bored or running out of things to do. It is also a game that you can enjoy more on your Windows PC using an emulator like BlueStacks.
-
If you are interested in playing Summertime Saga APK Windows on your PC, you can download it from its official website and follow our guide on how to install and play it using BlueStacks. You will not regret it!
-
To give you an idea of how Summertime Saga compares with other similar games, here is a table that shows some of their features and differences:
- | Game | Genre | Platform | Price | Adult Content | Length |
- |------|-------|----------|-------|---------------|--------|
- | Summertime Saga | Dating sim/graphical adventure | Android/Windows (via emulator) | Free | Yes | Over 70 hours |
- | Dream Daddy | Dating sim/visual novel | Windows/Mac/Linux/iOS/Android/Switch/PS4 | $14.99 | No | About 10 hours |
- | HuniePop | Dating sim/puzzle | Windows/Mac/Linux | $9.99 | Yes | About 8 hours |
- | Monster Prom | Dating sim/multiplayer | Windows/Mac/Linux/Switch/Xbox One/PS4 | $11.99 | No | About 6 hours |
- | Doki Doki Literature Club | Dating sim/psychological horror | Windows/Mac/Linux/Switch/Xbox One/PS4/iOS/Android | Free ($14.99 for Plus version) | Yes (in Plus version) | About 4 hours |
FAQs about Summertime Saga APK Windows
-
-
Q: Is Summertime Saga APK Windows safe to download and play? A: Yes, Summertime Saga APK Windows is safe to download and play as long as you get it from its official website and use a trusted emulator like BlueStacks. However, you should be careful about where you play it and who you share it with, as it contains adult content that might not be suitable for everyone.
-
Q: How often is Summertime Saga APK Windows updated? A: Summertime Saga APK Windows is updated regularly by the developers, who release new versions every few months. The latest version of the game is 0.20.11 as of June 2023, which added new characters, locations, events, and features. You can check the official website for the latest news and updates on the game.
-
Q: How can I support the development of Summertime Saga APK Windows? A: Summertime Saga APK Windows is a free game that is funded by donations from fans and patrons. If you enjoy the game and want to support its development, you can donate to the developers via PayPal or Patreon. You can also follow them on social media and share your feedback and suggestions with them.
-
Q: How can I mod Summertime Saga APK Windows? A: Summertime Saga APK Windows is a game that supports modding, which means that you can create and install custom content and features for the game. You can use the game's built-in mod manager to download and install mods from the official website or from other sources. You can also use the game's source code and tools to create your own mods and share them with other players.
-
Q: Where can I find more information and help about Summertime Saga APK Windows? A: Summertime Saga APK Windows is a game that has a large and active community of fans and players. You can find more information and help about the game on its official website, wiki, forum, discord, reddit, and YouTube channel. You can also ask questions and get answers from other players on these platforms.
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Assetto Corsa Pc Crack 17.md b/spaces/contluForse/HuggingGPT/assets/Assetto Corsa Pc Crack 17.md
deleted file mode 100644
index 04f47558c333ccad30cb9af68eaff8a53ae8f8bd..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Assetto Corsa Pc Crack 17.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-
Assetto Corsa PC Crack 17: How to Download and Play the Ultimate Racing Simulator
-
-
If you are a fan of racing games, you might have heard of Assetto Corsa, a realistic and immersive driving simulator that features advanced physics, graphics and gameplay. Assetto Corsa is developed by Kunos Simulazioni, an Italian studio that has a long history of creating racing simulations for professional and amateur drivers. Assetto Corsa offers a variety of modes, cars and tracks to suit your preferences and skills. You can race against AI opponents, online players or yourself in time trials. You can also customize your cars with different setups, liveries and mods. Assetto Corsa is a game that will challenge you and reward you with a satisfying driving experience.
However, Assetto Corsa is not a cheap game. It costs $29.99 on Steam, and that does not include the DLCs that add more content and features to the game. The DLCs are sold separately or in bundles, and they can cost up to $69.99 in total. That means you might have to spend almost $100 to enjoy the full potential of Assetto Corsa. That is a lot of money for some people, especially if you are not sure if you will like the game or not.
-
-
Fortunately, there is a way to play Assetto Corsa for free on your PC. You can download a cracked version of the game that includes all the DLCs and updates. A cracked version is a modified version of the game that bypasses the DRM protection and allows you to play without paying or activating the game. You can find cracked versions of Assetto Corsa on various websites that offer torrent downloads or direct links. However, not all cracked versions are safe and reliable. Some might contain viruses, malware or errors that can harm your PC or ruin your gaming experience.
-
-
That is why we have prepared this guide for you. We will show you how to download and play Assetto Corsa PC Crack 17, which is one of the best cracked versions available online. Assetto Corsa PC Crack 17 is based on the RELOADED ISO release of the game, updated to version 1.16.3 and including all 10 DLCs. The crack is replaced with a 3DM one, which has been tested and confirmed working by many users. The download size is only 6.8 GB, significantly smaller than the original 13.2 GB. The installation is easy and fast, and you can change the language in the game options.
-
-
-
How to Download Assetto Corsa PC Crack 17
-
-
To download Assetto Corsa PC Crack 17, you will need a torrent client such as uTorrent or BitTorrent. A torrent client is a software that allows you to download files from other users who are sharing them on a peer-to-peer network. You will also need a VPN service such as NordVPN or ExpressVPN to protect your privacy and security while downloading torrents.
-
-
Here are the steps to download Assetto Corsa PC Crack 17:
-
-
-
Download and install a torrent client and a VPN service on your PC.
-
Go to this link: https://fitgirl-repacks.site/assetto-corsa/
-
Scroll down to the bottom of the page and click on one of the download links under "DOWNLOAD (torrents, magnets, direct links)". You can choose any link you want, but we recommend using magnet links as they are more convenient and faster.
-
A new tab will open with a magnet link that looks like this: magnet:?xt=urn:btih:...
-
Copy the magnet link and paste it into your torrent client.
-
Start your VPN service and connect to a server in a country where torrenting is legal.
-
Wait for the torrent to finish downloading.
-
-
-
How to Install and Play Assetto Corsa PC Crack 17
-
-
Once you have downloaded Assetto Corsa PC Crack 17, you can install and play it on your PC. Here are the steps to install and play Assetto Corsa PC Crack 17:
-
-
-
Open the folder where you downloaded Assetto Corsa PC Crack 17.
-
Run setup.exe as administrator.
-
Select your installation directory and language.
-
Follow the instructions on the screen.
-
Wait for the installation to complete.
-
Run AssettoCorsa.exe from your installation directory or from your desktop shortcut.
-
Enjoy playing Assetto Corsa PC Crack 17!
-
-
-
Note: If you encounter any problems while playing Assetto Corsa PC Crack 17, such as crashes or errors, you can try these solutions:
-
-
-
Go to game options and enable "32-bit mode" for racing.
-
Disable your antivirus or firewall while playing.
-
Update your graphics drivers.
-
Run the game as administrator.
-
-
-
Conclusion
-
-
Assetto Corsa is one of the best racing simulators ever made, but it can be expensive to buy it with all its DLCs. That is why we have shown you how to download and play Assetto Corsa PC Crack 17 for free on your PC. Assetto Corsa PC Crack 17 is a high-quality cracked version that includes all the updates and DLCs of the game. It is easy to download and install, and it works perfectly on most PCs. However, we still recommend buying the game if you like it and want to support the developers.
-
-
We hope this guide was helpful for you. If you have any questions or feedback, please leave them in the comments below. Thank you for reading!
-
What is Assetto Corsa PC Crack 17?
-
-
Assetto Corsa PC Crack 17 is a cracked version of Assetto Corsa, a racing simulator game for PC. A cracked version is a version that has been modified to bypass the DRM protection and allow you to play without paying or activating the game. Assetto Corsa PC Crack 17 is based on the RELOADED ISO release of the game, which is updated to version 1.16.3 and includes all 10 DLCs. The DLCs are additional content and features that enhance the game, such as new cars, tracks, modes and events. Assetto Corsa PC Crack 17 also has a 3DM crack, which is a tool that allows you to run the game without any problems.
-
-
Assetto Corsa PC Crack 17 is one of the best cracked versions of Assetto Corsa available online. It has many advantages over other cracked versions, such as:
-
-
-
It has a smaller download size than the original game.
-
It has all the updates and DLCs of the game.
-
It has a working crack that does not cause crashes or errors.
-
It has an optional Russian localization setup.
-
It has an after-install integrity check that ensures everything is installed properly.
-
-
-
Assetto Corsa PC Crack 17 is a great way to enjoy Assetto Corsa for free on your PC. However, it is not a legal or official version of the game. It is a pirated version that violates the copyright and license of the game. Therefore, we do not recommend or endorse using Assetto Corsa PC Crack 17. We advise you to buy the game from Steam or other authorized platforms if you like it and want to support the developers.
-
-
Why Should You Play Assetto Corsa PC Crack 17?
-
-
If you are still interested in playing Assetto Corsa PC Crack 17, you might be wondering why you should choose this game over other racing games. Assetto Corsa is not just another racing game. It is a racing simulator that aims to provide a realistic and immersive driving experience. Assetto Corsa has many features and aspects that make it stand out from other racing games, such as:
-
-
-
It has an advanced DirectX 11 graphics engine that recreates an immersive environment, dynamic lighting and realistic materials and surfaces.
-
It has an advanced physics engine that provides a very realistic driving experience, including features of real cars never seen in other racing simulators: tyre flat spots, heat cycles with graining and blistering, advanced aerodynamic simulation with movable aerodynamic parts controlled in real time by telemetry input channels, and hybrid systems with KERS and energy-recovery simulation.
-
It has exclusive licensed cars reproduced with the best accuracy possible, thanks to the official cooperation of car manufacturers.
-
It has a variety of modes, cars and tracks to suit your preferences and skills. You can race against AI opponents, online players or yourself in time trials. You can also customize your cars with different setups, liveries and mods.
-
It has a modding community that creates and shares new content and features for the game.
-
-
-
Assetto Corsa PC Crack 17 is a game that will challenge you and reward you with a satisfying driving experience. It is a game that will make you feel like you are driving a real car on a real track. It is a game that will test your skills and improve your performance. It is a game that will give you hours of fun and entertainment.
-
-
How to Get Started with Assetto Corsa PC Crack 17?
-
-
If you have decided to play Assetto Corsa PC Crack 17, you will need to download and install it on your PC first. You can follow our guide above on how to download and install Assetto Corsa PC Crack 17. Once you have installed the game, you can run it from your installation directory or from your desktop shortcut. You will see the main menu of the game, where you can choose your options and start playing.
-
-
Before you start playing, you might want to adjust some settings to optimize your gaming experience. You can go to Options > General > Video Settings to change your resolution, fullscreen mode, anti-aliasing, shadows, reflections and other graphics options. You can also go to Options > Controls > Controller Settings to configure your input device, whether it is a keyboard, mouse, gamepad or wheel. You can also go to Options > Audio Settings to adjust your volume levels and sound effects.
-
-
Once you have set up your preferences, you can start playing Assetto Corsa PC Crack 17. You can choose from different modes such as Practice, Quick Race, Special Events or Career Mode. You can also join or create online sessions with other players around the world. You can select your car from over 100 models available in the game, ranging from road cars to race cars to concept cars. You can also select your track from over 20 locations available in the game, including famous circuits such as Silverstone, Spa-Francorchamps or Nürburgring.
-
-
When you start racing, you will notice how realistic and immersive Assetto Corsa PC Crack 17 is. You will feel every bump on the road, every turn of the wheel, every shift of the gear. You will see every detail on your car and on your surroundings. You will hear every sound of your engine and of your opponents. You will have to use your skills and strategy to win each race and improve your performance.
-
-
Conclusion
-
-
Assetto Corsa PC Crack 17 is one of the best racing simulators ever made for PC. It offers a realistic and immersive driving experience that will challenge you and reward you with satisfaction. It features advanced graphics, physics and gameplay that make it stand out from other racing games. It also includes all the updates and DLCs of the game that add more content and features to enhance your enjoyment.
-
-
However, Assetto Corsa PC Crack 17 is not a legal or official version of the game. It is a cracked version that violates the copyright and license of the game. Therefore, we do not recommend or endorse using Assetto Corsa PC Crack 17. We advise you to buy the game from Steam or other authorized platforms if you like it and want to support the developers.
-
-
We hope this article was helpful for you. If you have any questions or feedback, please leave them in the comments below. Thank you for reading!
-
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Download Mardaani Movies In Hindi Hd.md b/spaces/contluForse/HuggingGPT/assets/Download Mardaani Movies In Hindi Hd.md
deleted file mode 100644
index 64388f485f386c950234970b23df4f5c5f06befd..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Download Mardaani Movies In Hindi Hd.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
- . . and . . . . . and discover why and how Shivani Shivaji Roy got divorced. Watch Mardaani movie online on Desi Cinemas. Subscribe to Dailymotion for more.
-
-Watch Mardaani full movie online. The movie Mardaani can be watched in high definition on Dailymotion below. Meet Shivani Shivaji Roy, . . . and . . . . . and discover why and how Shivani Shivaji Roy got divorced. Subscribe to Dailymotion for more.
-
-
-
-
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/generalized_attention.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/generalized_attention.py
deleted file mode 100644
index 988d9adf2f289ef223bd1c680a5ae1d3387f0269..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/generalized_attention.py
+++ /dev/null
@@ -1,412 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..utils import kaiming_init
-from .registry import PLUGIN_LAYERS
-
-
-@PLUGIN_LAYERS.register_module()
-class GeneralizedAttention(nn.Module):
- """GeneralizedAttention module.
-
- See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks'
-    (https://arxiv.org/abs/1904.05873) for details.
-
- Args:
- in_channels (int): Channels of the input feature map.
- spatial_range (int): The spatial range. -1 indicates no spatial range
- constraint. Default: -1.
- num_heads (int): The head number of empirical_attention module.
- Default: 9.
- position_embedding_dim (int): The position embedding dimension.
- Default: -1.
- position_magnitude (int): A multiplier acting on coord difference.
- Default: 1.
- kv_stride (int): The feature stride acting on key/value feature map.
- Default: 2.
- q_stride (int): The feature stride acting on query feature map.
- Default: 1.
- attention_type (str): A binary indicator string for indicating which
- items in generalized empirical_attention module are used.
- Default: '1111'.
-
- - '1000' indicates 'query and key content' (appr - appr) item,
- - '0100' indicates 'query content and relative position'
- (appr - position) item,
- - '0010' indicates 'key content only' (bias - appr) item,
- - '0001' indicates 'relative position only' (bias - position) item.
- """
-
- _abbr_ = 'gen_attention_block'
-
- def __init__(self,
- in_channels,
- spatial_range=-1,
- num_heads=9,
- position_embedding_dim=-1,
- position_magnitude=1,
- kv_stride=2,
- q_stride=1,
- attention_type='1111'):
-
- super(GeneralizedAttention, self).__init__()
-
- # hard range means local range for non-local operation
- self.position_embedding_dim = (
- position_embedding_dim
- if position_embedding_dim > 0 else in_channels)
-
- self.position_magnitude = position_magnitude
- self.num_heads = num_heads
- self.in_channels = in_channels
- self.spatial_range = spatial_range
- self.kv_stride = kv_stride
- self.q_stride = q_stride
- self.attention_type = [bool(int(_)) for _ in attention_type]
- self.qk_embed_dim = in_channels // num_heads
- out_c = self.qk_embed_dim * num_heads
-
- if self.attention_type[0] or self.attention_type[1]:
- self.query_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_c,
- kernel_size=1,
- bias=False)
- self.query_conv.kaiming_init = True
-
- if self.attention_type[0] or self.attention_type[2]:
- self.key_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=out_c,
- kernel_size=1,
- bias=False)
- self.key_conv.kaiming_init = True
-
- self.v_dim = in_channels // num_heads
- self.value_conv = nn.Conv2d(
- in_channels=in_channels,
- out_channels=self.v_dim * num_heads,
- kernel_size=1,
- bias=False)
- self.value_conv.kaiming_init = True
-
- if self.attention_type[1] or self.attention_type[3]:
- self.appr_geom_fc_x = nn.Linear(
- self.position_embedding_dim // 2, out_c, bias=False)
- self.appr_geom_fc_x.kaiming_init = True
-
- self.appr_geom_fc_y = nn.Linear(
- self.position_embedding_dim // 2, out_c, bias=False)
- self.appr_geom_fc_y.kaiming_init = True
-
- if self.attention_type[2]:
- stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
- appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv
- self.appr_bias = nn.Parameter(appr_bias_value)
-
- if self.attention_type[3]:
- stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2)
- geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv
- self.geom_bias = nn.Parameter(geom_bias_value)
-
- self.proj_conv = nn.Conv2d(
- in_channels=self.v_dim * num_heads,
- out_channels=in_channels,
- kernel_size=1,
- bias=True)
- self.proj_conv.kaiming_init = True
- self.gamma = nn.Parameter(torch.zeros(1))
-
- if self.spatial_range >= 0:
- # only works when non local is after 3*3 conv
- if in_channels == 256:
- max_len = 84
- elif in_channels == 512:
- max_len = 42
-
- max_len_kv = int((max_len - 1.0) / self.kv_stride + 1)
-            # np.int was removed in NumPy >= 1.24; the builtin int is the
-            # drop-in replacement here.
-            local_constraint_map = np.ones(
-                (max_len, max_len, max_len_kv, max_len_kv), dtype=int)
- for iy in range(max_len):
- for ix in range(max_len):
- local_constraint_map[
- iy, ix,
- max((iy - self.spatial_range) //
- self.kv_stride, 0):min((iy + self.spatial_range +
- 1) // self.kv_stride +
- 1, max_len),
- max((ix - self.spatial_range) //
- self.kv_stride, 0):min((ix + self.spatial_range +
- 1) // self.kv_stride +
- 1, max_len)] = 0
-
- self.local_constraint_map = nn.Parameter(
- torch.from_numpy(local_constraint_map).byte(),
- requires_grad=False)
-
- if self.q_stride > 1:
- self.q_downsample = nn.AvgPool2d(
- kernel_size=1, stride=self.q_stride)
- else:
- self.q_downsample = None
-
- if self.kv_stride > 1:
- self.kv_downsample = nn.AvgPool2d(
- kernel_size=1, stride=self.kv_stride)
- else:
- self.kv_downsample = None
-
- self.init_weights()
-
- def get_position_embedding(self,
- h,
- w,
- h_kv,
- w_kv,
- q_stride,
- kv_stride,
- device,
- dtype,
- feat_dim,
- wave_length=1000):
- # the default type of Tensor is float32, leading to type mismatch
- # in fp16 mode. Cast it to support fp16 mode.
- h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype)
- h_idxs = h_idxs.view((h, 1)) * q_stride
-
- w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype)
- w_idxs = w_idxs.view((w, 1)) * q_stride
-
- h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to(
- device=device, dtype=dtype)
- h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride
-
- w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to(
- device=device, dtype=dtype)
- w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride
-
- # (h, h_kv, 1)
- h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0)
- h_diff *= self.position_magnitude
-
- # (w, w_kv, 1)
- w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0)
- w_diff *= self.position_magnitude
-
- feat_range = torch.arange(0, feat_dim / 4).to(
- device=device, dtype=dtype)
-
- dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype)
- dim_mat = dim_mat**((4. / feat_dim) * feat_range)
- dim_mat = dim_mat.view((1, 1, -1))
-
- embedding_x = torch.cat(
- ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2)
-
- embedding_y = torch.cat(
- ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2)
-
- return embedding_x, embedding_y
-
- def forward(self, x_input):
- num_heads = self.num_heads
-
- # use empirical_attention
- if self.q_downsample is not None:
- x_q = self.q_downsample(x_input)
- else:
- x_q = x_input
- n, _, h, w = x_q.shape
-
- if self.kv_downsample is not None:
- x_kv = self.kv_downsample(x_input)
- else:
- x_kv = x_input
- _, _, h_kv, w_kv = x_kv.shape
-
- if self.attention_type[0] or self.attention_type[1]:
- proj_query = self.query_conv(x_q).view(
- (n, num_heads, self.qk_embed_dim, h * w))
- proj_query = proj_query.permute(0, 1, 3, 2)
-
- if self.attention_type[0] or self.attention_type[2]:
- proj_key = self.key_conv(x_kv).view(
- (n, num_heads, self.qk_embed_dim, h_kv * w_kv))
-
- if self.attention_type[1] or self.attention_type[3]:
- position_embed_x, position_embed_y = self.get_position_embedding(
- h, w, h_kv, w_kv, self.q_stride, self.kv_stride,
- x_input.device, x_input.dtype, self.position_embedding_dim)
- # (n, num_heads, w, w_kv, dim)
- position_feat_x = self.appr_geom_fc_x(position_embed_x).\
- view(1, w, w_kv, num_heads, self.qk_embed_dim).\
- permute(0, 3, 1, 2, 4).\
- repeat(n, 1, 1, 1, 1)
-
- # (n, num_heads, h, h_kv, dim)
- position_feat_y = self.appr_geom_fc_y(position_embed_y).\
- view(1, h, h_kv, num_heads, self.qk_embed_dim).\
- permute(0, 3, 1, 2, 4).\
- repeat(n, 1, 1, 1, 1)
-
- position_feat_x /= math.sqrt(2)
- position_feat_y /= math.sqrt(2)
-
- # accelerate for saliency only
- if (np.sum(self.attention_type) == 1) and self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim).\
- repeat(n, 1, 1, 1)
-
- energy = torch.matmul(appr_bias, proj_key).\
- view(n, num_heads, 1, h_kv * w_kv)
-
- h = 1
- w = 1
- else:
- # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for
- if not self.attention_type[0]:
- energy = torch.zeros(
- n,
- num_heads,
- h,
- w,
- h_kv,
- w_kv,
- dtype=x_input.dtype,
- device=x_input.device)
-
- # attention_type[0]: appr - appr
- # attention_type[1]: appr - position
- # attention_type[2]: bias - appr
- # attention_type[3]: bias - position
- if self.attention_type[0] or self.attention_type[2]:
- if self.attention_type[0] and self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim)
- energy = torch.matmul(proj_query + appr_bias, proj_key).\
- view(n, num_heads, h, w, h_kv, w_kv)
-
- elif self.attention_type[0]:
- energy = torch.matmul(proj_query, proj_key).\
- view(n, num_heads, h, w, h_kv, w_kv)
-
- elif self.attention_type[2]:
- appr_bias = self.appr_bias.\
- view(1, num_heads, 1, self.qk_embed_dim).\
- repeat(n, 1, 1, 1)
-
- energy += torch.matmul(appr_bias, proj_key).\
- view(n, num_heads, 1, 1, h_kv, w_kv)
-
- if self.attention_type[1] or self.attention_type[3]:
- if self.attention_type[1] and self.attention_type[3]:
- geom_bias = self.geom_bias.\
- view(1, num_heads, 1, self.qk_embed_dim)
-
- proj_query_reshape = (proj_query + geom_bias).\
- view(n, num_heads, h, w, self.qk_embed_dim)
-
- energy_x = torch.matmul(
- proj_query_reshape.permute(0, 1, 3, 2, 4),
- position_feat_x.permute(0, 1, 2, 4, 3))
- energy_x = energy_x.\
- permute(0, 1, 3, 2, 4).unsqueeze(4)
-
- energy_y = torch.matmul(
- proj_query_reshape,
- position_feat_y.permute(0, 1, 2, 4, 3))
- energy_y = energy_y.unsqueeze(5)
-
- energy += energy_x + energy_y
-
- elif self.attention_type[1]:
- proj_query_reshape = proj_query.\
- view(n, num_heads, h, w, self.qk_embed_dim)
- proj_query_reshape = proj_query_reshape.\
- permute(0, 1, 3, 2, 4)
- position_feat_x_reshape = position_feat_x.\
- permute(0, 1, 2, 4, 3)
- position_feat_y_reshape = position_feat_y.\
- permute(0, 1, 2, 4, 3)
-
- energy_x = torch.matmul(proj_query_reshape,
- position_feat_x_reshape)
- energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4)
-
- energy_y = torch.matmul(proj_query_reshape,
- position_feat_y_reshape)
- energy_y = energy_y.unsqueeze(5)
-
- energy += energy_x + energy_y
-
- elif self.attention_type[3]:
- geom_bias = self.geom_bias.\
- view(1, num_heads, self.qk_embed_dim, 1).\
- repeat(n, 1, 1, 1)
-
- position_feat_x_reshape = position_feat_x.\
- view(n, num_heads, w*w_kv, self.qk_embed_dim)
-
- position_feat_y_reshape = position_feat_y.\
- view(n, num_heads, h * h_kv, self.qk_embed_dim)
-
- energy_x = torch.matmul(position_feat_x_reshape, geom_bias)
- energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv)
-
- energy_y = torch.matmul(position_feat_y_reshape, geom_bias)
- energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1)
-
- energy += energy_x + energy_y
-
- energy = energy.view(n, num_heads, h * w, h_kv * w_kv)
-
- if self.spatial_range >= 0:
- cur_local_constraint_map = \
- self.local_constraint_map[:h, :w, :h_kv, :w_kv].\
- contiguous().\
- view(1, 1, h*w, h_kv*w_kv)
-
- energy = energy.masked_fill_(cur_local_constraint_map,
- float('-inf'))
-
- attention = F.softmax(energy, 3)
-
- proj_value = self.value_conv(x_kv)
- proj_value_reshape = proj_value.\
- view((n, num_heads, self.v_dim, h_kv * w_kv)).\
- permute(0, 1, 3, 2)
-
- out = torch.matmul(attention, proj_value_reshape).\
- permute(0, 1, 3, 2).\
- contiguous().\
- view(n, self.v_dim * self.num_heads, h, w)
-
- out = self.proj_conv(out)
-
- # output is downsampled, upsample back to input size
- if self.q_downsample is not None:
- out = F.interpolate(
- out,
- size=x_input.shape[2:],
- mode='bilinear',
- align_corners=False)
-
- out = self.gamma * out + x_input
- return out
-
- def init_weights(self):
- for m in self.modules():
- if hasattr(m, 'kaiming_init') and m.kaiming_init:
- kaiming_init(
- m,
- mode='fan_in',
- nonlinearity='leaky_relu',
- bias=0,
- distribution='uniform',
- a=1)
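
# Editor's note: a minimal usage sketch, not part of the original module. The
# input shape and hyper-parameters below are arbitrary assumptions chosen so
# that a single CPU forward pass runs quickly.
if __name__ == "__main__":
    block = GeneralizedAttention(in_channels=64, num_heads=8, kv_stride=2)
    x = torch.randn(2, 64, 32, 32)  # (batch, channels, height, width)
    out = block(x)                  # residual output, same shape as the input
    print(out.shape)                # torch.Size([2, 64, 32, 32])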
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/padding.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/padding.py
deleted file mode 100644
index e4ac6b28a1789bd551c613a7d3e7b622433ac7ec..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/bricks/padding.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-
-from .registry import PADDING_LAYERS
-
-PADDING_LAYERS.register_module('zero', module=nn.ZeroPad2d)
-PADDING_LAYERS.register_module('reflect', module=nn.ReflectionPad2d)
-PADDING_LAYERS.register_module('replicate', module=nn.ReplicationPad2d)
-
-
-def build_padding_layer(cfg, *args, **kwargs):
- """Build padding layer.
-
- Args:
-        cfg (dict): The padding layer config, which should contain:
- - type (str): Layer type.
- - layer args: Args needed to instantiate a padding layer.
-
- Returns:
- nn.Module: Created padding layer.
- """
- if not isinstance(cfg, dict):
- raise TypeError('cfg must be a dict')
- if 'type' not in cfg:
- raise KeyError('the cfg dict must contain the key "type"')
-
- cfg_ = cfg.copy()
- padding_type = cfg_.pop('type')
- if padding_type not in PADDING_LAYERS:
- raise KeyError(f'Unrecognized padding type {padding_type}.')
- else:
- padding_layer = PADDING_LAYERS.get(padding_type)
-
- layer = padding_layer(*args, **kwargs, **cfg_)
-
- return layer
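
# Editor's note: a small usage sketch, not part of the original module. It
# builds a reflection-padding layer from a config dict; the padding size is
# forwarded through *args.
if __name__ == "__main__":
    pad = build_padding_layer(dict(type='reflect'), 1)
    print(pad)  # ReflectionPad2d((1, 1, 1, 1))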
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py
deleted file mode 100644
index 710c81bee298e9e6b21a93742d09e720024ceeff..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/dataset_mapper.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/dataset_mapper.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import copy
-import logging
-import numpy as np
-from typing import List, Optional, Union
-import torch
-
-from annotator.oneformer.detectron2.config import configurable
-
-from annotator.oneformer.detectron2.data import detection_utils as utils
-from annotator.oneformer.detectron2.data import transforms as T
-from annotator.oneformer.oneformer.data.tokenizer import SimpleTokenizer, Tokenize
-
-__all__ = ["DatasetMapper"]
-
-
-class DatasetMapper:
- """
- A callable which takes a dataset dict in Detectron2 Dataset format,
- and map it into a format used by the model.
-
-    This is the default callable used to map your dataset dict into training data.
-    You may need to follow it to implement your own version for customized logic,
-    such as a different way to read or transform images.
- See :doc:`/tutorials/data_loading` for details.
-
- The callable currently does the following:
-
-    1. Read the image from "file_name"
-    2. Apply cropping/geometric transforms to the image and annotations
-    3. Convert data and annotations to Tensor and :class:`Instances`
- """
-
- @configurable
- def __init__(
- self,
- is_train: bool,
- *,
- augmentations: List[Union[T.Augmentation, T.Transform]],
- image_format: str,
- task_seq_len: int,
- task: str = "panoptic",
- use_instance_mask: bool = False,
- use_keypoint: bool = False,
- instance_mask_format: str = "polygon",
- keypoint_hflip_indices: Optional[np.ndarray] = None,
- precomputed_proposal_topk: Optional[int] = None,
- recompute_boxes: bool = False,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- is_train: whether it's used in training or inference
- augmentations: a list of augmentations or deterministic transforms to apply
- image_format: an image format supported by :func:`detection_utils.read_image`.
- use_instance_mask: whether to process instance segmentation annotations, if available
- use_keypoint: whether to process keypoint annotations if available
- instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation
- masks into this format.
- keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices`
- precomputed_proposal_topk: if given, will load pre-computed
- proposals from dataset_dict and keep the top k proposals for each image.
- recompute_boxes: whether to overwrite bounding box annotations
- by computing tight bounding boxes from instance mask annotations.
- """
- if recompute_boxes:
- assert use_instance_mask, "recompute_boxes requires instance masks"
- # fmt: off
- self.is_train = is_train
- self.augmentations = T.AugmentationList(augmentations)
- self.image_format = image_format
- self.use_instance_mask = use_instance_mask
- self.instance_mask_format = instance_mask_format
- self.use_keypoint = use_keypoint
- self.keypoint_hflip_indices = keypoint_hflip_indices
- self.proposal_topk = precomputed_proposal_topk
- self.recompute_boxes = recompute_boxes
- self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len)
- self.task = task
- assert self.task in ["panoptic", "semantic", "instance"]
-
- # fmt: on
- logger = logging.getLogger(__name__)
- mode = "training" if is_train else "inference"
- logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}")
-
- @classmethod
- def from_config(cls, cfg, is_train: bool = True):
- augs = utils.build_augmentation(cfg, is_train)
- if cfg.INPUT.CROP.ENABLED and is_train:
- augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE))
- recompute_boxes = cfg.MODEL.MASK_ON
- else:
- recompute_boxes = False
-
- ret = {
- "is_train": is_train,
- "augmentations": augs,
- "image_format": cfg.INPUT.FORMAT,
- "use_instance_mask": cfg.MODEL.MASK_ON,
- "instance_mask_format": cfg.INPUT.MASK_FORMAT,
- "use_keypoint": cfg.MODEL.KEYPOINT_ON,
- "task_seq_len": cfg.INPUT.TASK_SEQ_LEN,
- "recompute_boxes": recompute_boxes,
- "task": cfg.MODEL.TEST.TASK,
- }
-
- if cfg.MODEL.KEYPOINT_ON:
- ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN)
-
- if cfg.MODEL.LOAD_PROPOSALS:
- ret["precomputed_proposal_topk"] = (
- cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN
- if is_train
- else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST
- )
- return ret
-
- def _transform_annotations(self, dataset_dict, transforms, image_shape):
- # USER: Modify this if you want to keep them for some reason.
- for anno in dataset_dict["annotations"]:
- if not self.use_instance_mask:
- anno.pop("segmentation", None)
- if not self.use_keypoint:
- anno.pop("keypoints", None)
-
- # USER: Implement additional transformations if you have other types of data
- annos = [
- utils.transform_instance_annotations(
- obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices
- )
- for obj in dataset_dict.pop("annotations")
- if obj.get("iscrowd", 0) == 0
- ]
- instances = utils.annotations_to_instances(
- annos, image_shape, mask_format=self.instance_mask_format
- )
-
- # After transforms such as cropping are applied, the bounding box may no longer
- # tightly bound the object. As an example, imagine a triangle object
- # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight
- # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to
- # the intersection of original bounding box and the cropping box.
- if self.recompute_boxes:
- instances.gt_boxes = instances.gt_masks.get_bounding_boxes()
- dataset_dict["instances"] = utils.filter_empty_instances(instances)
-
- def __call__(self, dataset_dict):
- """
- Args:
- dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format.
-
- Returns:
- dict: a format that builtin models in detectron2 accept
- """
- dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
- # USER: Write your own image loading if it's not from a file
- image = utils.read_image(dataset_dict["file_name"], format=self.image_format)
- utils.check_image_size(dataset_dict, image)
-
- task = f"The task is {self.task}"
- dataset_dict["task"] = task
-
- # USER: Remove if you don't do semantic/panoptic segmentation.
- if "sem_seg_file_name" in dataset_dict:
- sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2)
- else:
- sem_seg_gt = None
-
- aug_input = T.AugInput(image, sem_seg=sem_seg_gt)
- transforms = self.augmentations(aug_input)
- image, sem_seg_gt = aug_input.image, aug_input.sem_seg
-
- image_shape = image.shape[:2] # h, w
- # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory,
- # but not efficient on large generic data structures due to the use of pickle & mp.Queue.
- # Therefore it's important to use torch.Tensor.
- dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))
- if sem_seg_gt is not None:
- dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long"))
-
- # USER: Remove if you don't use pre-computed proposals.
- # Most users would not need this feature.
- if self.proposal_topk is not None:
- utils.transform_proposals(
- dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk
- )
-
- if not self.is_train:
- # USER: Modify this if you want to keep them for some reason.
- dataset_dict.pop("annotations", None)
- dataset_dict.pop("sem_seg_file_name", None)
- return dataset_dict
-
- if "annotations" in dataset_dict:
- self._transform_annotations(dataset_dict, transforms, image_shape)
-
- return dataset_dict
\ No newline at end of file
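The comment in `_transform_annotations` about `recompute_boxes` is easiest to see with numbers. The snippet below is a standalone numpy illustration (not part of the mapper, and it assumes nothing beyond the triangle example in that comment): the tight box of the cropped mask is smaller than the intersection of the original box with the crop window, which is why the boxes are recomputed from the masks.

```python
import numpy as np

# Rasterize the triangle [(0,0), (2,0), (0,2)] on a 200x200 grid over [0,2]x[0,2].
n = 200
xs, ys = np.meshgrid(np.linspace(0, 2, n), np.linspace(0, 2, n))
tri = (xs + ys) <= 2.0                      # points inside the triangle

# Crop window [(1,0), (2,2)] in XYXY format (only the x-range actually cuts anything).
cropped = tri & (xs >= 1.0) & (xs <= 2.0)

rows, cols = np.nonzero(cropped)
tight = (xs[0, cols.min()], ys[rows.min(), 0], xs[0, cols.max()], ys[rows.max(), 0])
print("tight box of cropped mask:", tight)                    # approx (1, 0, 2, 1)
print("original box clipped to crop window:", (1.0, 0.0, 2.0, 2.0))  # naive clipping
```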
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
deleted file mode 100644
index 98392ac04c4c44a7f4e7b1c0808266875877dd1f..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/encoder_decoder.py
+++ /dev/null
@@ -1,298 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from annotator.uniformer.mmseg.core import add_prefix
-from annotator.uniformer.mmseg.ops import resize
-from .. import builder
-from ..builder import SEGMENTORS
-from .base import BaseSegmentor
-
-
-@SEGMENTORS.register_module()
-class EncoderDecoder(BaseSegmentor):
- """Encoder Decoder segmentors.
-
- EncoderDecoder typically consists of backbone, decode_head, auxiliary_head.
- Note that auxiliary_head is only used for deep supervision during training,
- which could be dumped during inference.
- """
-
- def __init__(self,
- backbone,
- decode_head,
- neck=None,
- auxiliary_head=None,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(EncoderDecoder, self).__init__()
- self.backbone = builder.build_backbone(backbone)
- if neck is not None:
- self.neck = builder.build_neck(neck)
- self._init_decode_head(decode_head)
- self._init_auxiliary_head(auxiliary_head)
-
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
-
- self.init_weights(pretrained=pretrained)
-
- assert self.with_decode_head
-
- def _init_decode_head(self, decode_head):
- """Initialize ``decode_head``"""
- self.decode_head = builder.build_head(decode_head)
- self.align_corners = self.decode_head.align_corners
- self.num_classes = self.decode_head.num_classes
-
- def _init_auxiliary_head(self, auxiliary_head):
- """Initialize ``auxiliary_head``"""
- if auxiliary_head is not None:
- if isinstance(auxiliary_head, list):
- self.auxiliary_head = nn.ModuleList()
- for head_cfg in auxiliary_head:
- self.auxiliary_head.append(builder.build_head(head_cfg))
- else:
- self.auxiliary_head = builder.build_head(auxiliary_head)
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone and heads.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- super(EncoderDecoder, self).init_weights(pretrained)
- self.backbone.init_weights(pretrained=pretrained)
- self.decode_head.init_weights()
- if self.with_auxiliary_head:
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for aux_head in self.auxiliary_head:
- aux_head.init_weights()
- else:
- self.auxiliary_head.init_weights()
-
- def extract_feat(self, img):
- """Extract features from images."""
- x = self.backbone(img)
- if self.with_neck:
- x = self.neck(x)
- return x
-
- def encode_decode(self, img, img_metas):
- """Encode images with backbone and decode into a semantic segmentation
- map of the same size as input."""
- x = self.extract_feat(img)
- out = self._decode_head_forward_test(x, img_metas)
- out = resize(
- input=out,
- size=img.shape[2:],
- mode='bilinear',
- align_corners=self.align_corners)
- return out
-
- def _decode_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for decode head in
- training."""
- losses = dict()
- loss_decode = self.decode_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
-
- losses.update(add_prefix(loss_decode, 'decode'))
- return losses
-
- def _decode_head_forward_test(self, x, img_metas):
- """Run forward function and calculate loss for decode head in
- inference."""
- seg_logits = self.decode_head.forward_test(x, img_metas, self.test_cfg)
- return seg_logits
-
- def _auxiliary_head_forward_train(self, x, img_metas, gt_semantic_seg):
- """Run forward function and calculate loss for auxiliary head in
- training."""
- losses = dict()
- if isinstance(self.auxiliary_head, nn.ModuleList):
- for idx, aux_head in enumerate(self.auxiliary_head):
- loss_aux = aux_head.forward_train(x, img_metas,
- gt_semantic_seg,
- self.train_cfg)
- losses.update(add_prefix(loss_aux, f'aux_{idx}'))
- else:
- loss_aux = self.auxiliary_head.forward_train(
- x, img_metas, gt_semantic_seg, self.train_cfg)
- losses.update(add_prefix(loss_aux, 'aux'))
-
- return losses
-
- def forward_dummy(self, img):
- """Dummy forward function."""
- seg_logit = self.encode_decode(img, None)
-
- return seg_logit
-
- def forward_train(self, img, img_metas, gt_semantic_seg):
- """Forward function for training.
-
- Args:
- img (Tensor): Input images.
- img_metas (list[dict]): List of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- gt_semantic_seg (Tensor): Semantic segmentation masks
- used if the architecture supports semantic segmentation task.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
-
- x = self.extract_feat(img)
-
- losses = dict()
-
- loss_decode = self._decode_head_forward_train(x, img_metas,
- gt_semantic_seg)
- losses.update(loss_decode)
-
- if self.with_auxiliary_head:
- loss_aux = self._auxiliary_head_forward_train(
- x, img_metas, gt_semantic_seg)
- losses.update(loss_aux)
-
- return losses
-
- # TODO refactor
- def slide_inference(self, img, img_meta, rescale):
- """Inference by sliding-window with overlap.
-
- If h_crop > h_img or w_crop > w_img, the small patch will be used to
- decode without padding.
- """
-
- h_stride, w_stride = self.test_cfg.stride
- h_crop, w_crop = self.test_cfg.crop_size
- batch_size, _, h_img, w_img = img.size()
- num_classes = self.num_classes
- h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1
- w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1
- preds = img.new_zeros((batch_size, num_classes, h_img, w_img))
- count_mat = img.new_zeros((batch_size, 1, h_img, w_img))
- for h_idx in range(h_grids):
- for w_idx in range(w_grids):
- y1 = h_idx * h_stride
- x1 = w_idx * w_stride
- y2 = min(y1 + h_crop, h_img)
- x2 = min(x1 + w_crop, w_img)
- y1 = max(y2 - h_crop, 0)
- x1 = max(x2 - w_crop, 0)
- crop_img = img[:, :, y1:y2, x1:x2]
- crop_seg_logit = self.encode_decode(crop_img, img_meta)
- preds += F.pad(crop_seg_logit,
- (int(x1), int(preds.shape[3] - x2), int(y1),
- int(preds.shape[2] - y2)))
-
- count_mat[:, :, y1:y2, x1:x2] += 1
- assert (count_mat == 0).sum() == 0
- if torch.onnx.is_in_onnx_export():
- # cast count_mat to constant while exporting to ONNX
- count_mat = torch.from_numpy(
- count_mat.cpu().detach().numpy()).to(device=img.device)
- preds = preds / count_mat
- if rescale:
- preds = resize(
- preds,
- size=img_meta[0]['ori_shape'][:2],
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
- return preds
-
- def whole_inference(self, img, img_meta, rescale):
- """Inference with full image."""
-
- seg_logit = self.encode_decode(img, img_meta)
- if rescale:
- # support dynamic shape for onnx
- if torch.onnx.is_in_onnx_export():
- size = img.shape[2:]
- else:
- size = img_meta[0]['ori_shape'][:2]
- seg_logit = resize(
- seg_logit,
- size=size,
- mode='bilinear',
- align_corners=self.align_corners,
- warning=False)
-
- return seg_logit
-
- def inference(self, img, img_meta, rescale):
- """Inference with slide/whole style.
-
- Args:
- img (Tensor): The input image of shape (N, 3, H, W).
- img_meta (dict): Image info dict where each dict has: 'img_shape',
- 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmseg/datasets/pipelines/formatting.py:Collect`.
- rescale (bool): Whether rescale back to original shape.
-
- Returns:
- Tensor: The output segmentation map.
- """
-
- assert self.test_cfg.mode in ['slide', 'whole']
- ori_shape = img_meta[0]['ori_shape']
- assert all(_['ori_shape'] == ori_shape for _ in img_meta)
- if self.test_cfg.mode == 'slide':
- seg_logit = self.slide_inference(img, img_meta, rescale)
- else:
- seg_logit = self.whole_inference(img, img_meta, rescale)
- output = F.softmax(seg_logit, dim=1)
- flip = img_meta[0]['flip']
- if flip:
- flip_direction = img_meta[0]['flip_direction']
- assert flip_direction in ['horizontal', 'vertical']
- if flip_direction == 'horizontal':
- output = output.flip(dims=(3, ))
- elif flip_direction == 'vertical':
- output = output.flip(dims=(2, ))
-
- return output
-
- def simple_test(self, img, img_meta, rescale=True):
- """Simple test with single image."""
- seg_logit = self.inference(img, img_meta, rescale)
- seg_pred = seg_logit.argmax(dim=1)
- if torch.onnx.is_in_onnx_export():
- # our inference backend only support 4D output
- seg_pred = seg_pred.unsqueeze(0)
- return seg_pred
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
-
- def aug_test(self, imgs, img_metas, rescale=True):
- """Test with augmentations.
-
- Only rescale=True is supported.
- """
- # aug_test rescale all imgs back to ori_shape for now
- assert rescale
- # to save memory, we get augmented seg logit inplace
- seg_logit = self.inference(imgs[0], img_metas[0], rescale)
- for i in range(1, len(imgs)):
- cur_seg_logit = self.inference(imgs[i], img_metas[i], rescale)
- seg_logit += cur_seg_logit
- seg_logit /= len(imgs)
- seg_pred = seg_logit.argmax(dim=1)
- seg_pred = seg_pred.cpu().numpy()
- # unravel batch dim
- seg_pred = list(seg_pred)
- return seg_pred
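The docstring of `slide_inference` describes overlapped sliding-window inference; the grid arithmetic is the part that is easy to get wrong. The sketch below reuses exactly the formulas from that method (nothing else is assumed) to print the crop windows for one concrete image size; in the real method, `count_mat` then averages the logits where the windows overlap.

```python
# Standalone sketch of the tiling arithmetic used by `slide_inference`.
def slide_windows(h_img, w_img, h_crop, w_crop, h_stride, w_stride):
    h_grids = max(h_img - h_crop + h_stride - 1, 0) // h_stride + 1
    w_grids = max(w_img - w_crop + w_stride - 1, 0) // w_stride + 1
    windows = []
    for h_idx in range(h_grids):
        for w_idx in range(w_grids):
            y1, x1 = h_idx * h_stride, w_idx * w_stride
            y2, x2 = min(y1 + h_crop, h_img), min(x1 + w_crop, w_img)
            # Shift the last window back so every crop keeps the full crop size.
            y1, x1 = max(y2 - h_crop, 0), max(x2 - w_crop, 0)
            windows.append((y1, y2, x1, x2))
    return windows

# A 512x768 image with 512x512 crops and a 341-pixel stride gives two
# overlapping windows along the width.
print(slide_windows(512, 768, 512, 512, 341, 341))
# [(0, 512, 0, 512), (0, 512, 256, 768)]
```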
diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/masking.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/masking.py
deleted file mode 100644
index 59e23daadce93c2b54cc8533bb78dbf6da5bcc3b..0000000000000000000000000000000000000000
--- a/spaces/cymic/Waifu_Diffusion_Webui/modules/masking.py
+++ /dev/null
@@ -1,99 +0,0 @@
-from PIL import Image, ImageFilter, ImageOps
-
-
-def get_crop_region(mask, pad=0):
- """finds a rectangular region that contains all masked ares in an image. Returns (x1, y1, x2, y2) coordinates of the rectangle.
- For example, if a user has painted the top-right part of a 512x512 image", the result may be (256, 0, 512, 256)"""
-
- h, w = mask.shape
-
- crop_left = 0
- for i in range(w):
- if not (mask[:, i] == 0).all():
- break
- crop_left += 1
-
- crop_right = 0
- for i in reversed(range(w)):
- if not (mask[:, i] == 0).all():
- break
- crop_right += 1
-
- crop_top = 0
- for i in range(h):
- if not (mask[i] == 0).all():
- break
- crop_top += 1
-
- crop_bottom = 0
- for i in reversed(range(h)):
- if not (mask[i] == 0).all():
- break
- crop_bottom += 1
-
- return (
- int(max(crop_left-pad, 0)),
- int(max(crop_top-pad, 0)),
- int(min(w - crop_right + pad, w)),
- int(min(h - crop_bottom + pad, h))
- )
-
-
-def expand_crop_region(crop_region, processing_width, processing_height, image_width, image_height):
- """expands crop region get_crop_region() to match the ratio of the image the region will processed in; returns expanded region
- for example, if user drew mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128."""
-
- x1, y1, x2, y2 = crop_region
-
- ratio_crop_region = (x2 - x1) / (y2 - y1)
- ratio_processing = processing_width / processing_height
-
- if ratio_crop_region > ratio_processing:
- desired_height = (x2 - x1) * ratio_processing
- desired_height_diff = int(desired_height - (y2-y1))
- y1 -= desired_height_diff//2
- y2 += desired_height_diff - desired_height_diff//2
- if y2 >= image_height:
- diff = y2 - image_height
- y2 -= diff
- y1 -= diff
- if y1 < 0:
- y2 -= y1
- y1 -= y1
- if y2 >= image_height:
- y2 = image_height
- else:
- desired_width = (y2 - y1) * ratio_processing
- desired_width_diff = int(desired_width - (x2-x1))
- x1 -= desired_width_diff//2
- x2 += desired_width_diff - desired_width_diff//2
- if x2 >= image_width:
- diff = x2 - image_width
- x2 -= diff
- x1 -= diff
- if x1 < 0:
- x2 -= x1
- x1 -= x1
- if x2 >= image_width:
- x2 = image_width
-
- return x1, y1, x2, y2
-
-
-def fill(image, mask):
- """fills masked regions with colors from image using blur. Not extremely effective."""
-
- image_mod = Image.new('RGBA', (image.width, image.height))
-
- image_masked = Image.new('RGBa', (image.width, image.height))
- image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(mask.convert('L')))
-
- image_masked = image_masked.convert('RGBa')
-
- for radius, repeats in [(256, 1), (64, 1), (16, 2), (4, 4), (2, 2), (0, 1)]:
- blurred = image_masked.filter(ImageFilter.GaussianBlur(radius)).convert('RGBA')
- for _ in range(repeats):
- image_mod.alpha_composite(blurred)
-
- return image_mod.convert("RGB")
-
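For reference, a small usage sketch of the two helpers above, mirroring the docstring example of a mask painted in the top-right quadrant of a 512x512 image. The import path `modules.masking` is an assumption; adjust it to wherever this file lives in your checkout.

```python
import numpy as np
from modules import masking  # assumed import path for the file above

# Paint the top-right quadrant of a 512x512 mask, as in the docstring example.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[0:256, 256:512] = 255

region = masking.get_crop_region(mask, pad=0)
print(region)                                   # (256, 0, 512, 256)

# Grow the 256x256 region to match a 512x512 processing aspect ratio
# (a no-op here because the region is already square).
print(masking.expand_crop_region(region, 512, 512, 512, 512))
```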
diff --git a/spaces/danielpedriniportfolio/AutoDA/pages/01-Exploratory_Data_Analysis.py b/spaces/danielpedriniportfolio/AutoDA/pages/01-Exploratory_Data_Analysis.py
deleted file mode 100644
index 2cf2fadb4dcae4d8f336637d8c293bb1d4c1f454..0000000000000000000000000000000000000000
--- a/spaces/danielpedriniportfolio/AutoDA/pages/01-Exploratory_Data_Analysis.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import pandas as pd
-import streamlit as st
-from pandas_profiling import ProfileReport
-from streamlit_pandas_profiling import st_profile_report
-
-st.set_page_config(layout='wide')
-col1, col2, col3 = st.columns([15, 70, 15])
-
-with col1:
- st.write('')
-with col2:
- if 'df' not in st.session_state:
- st.warning('Please upload a CSV file')
-
- else:
- st.header('Exploratory Data Analysis')
- df = st.session_state['df']
- profile = ProfileReport(df, title='Pandas Profiling Report', explorative=True,dark_mode=True)
- st_profile_report(profile)
-with col3:
- st.write('')
\ No newline at end of file
diff --git a/spaces/danterivers/music-generation-samples/tests/modules/test_lstm.py b/spaces/danterivers/music-generation-samples/tests/modules/test_lstm.py
deleted file mode 100644
index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/tests/modules/test_lstm.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import random
-import torch
-
-from audiocraft.modules.lstm import StreamableLSTM
-
-
-class TestStreamableLSTM:
-
- def test_lstm(self):
- B, C, T = 4, 2, random.randint(1, 100)
-
- lstm = StreamableLSTM(C, 3, skip=False)
- x = torch.randn(B, C, T)
- y = lstm(x)
-
- print(y.shape)
- assert y.shape == torch.Size([B, C, T])
-
- def test_lstm_skip(self):
- B, C, T = 4, 2, random.randint(1, 100)
-
- lstm = StreamableLSTM(C, 3, skip=True)
- x = torch.randn(B, C, T)
- y = lstm(x)
-
- assert y.shape == torch.Size([B, C, T])
diff --git a/spaces/davila7/try-gorilla/app.py b/spaces/davila7/try-gorilla/app.py
deleted file mode 100644
index 7b3ca2a1fbb4992e360154fe7f288b9165b18cde..0000000000000000000000000000000000000000
--- a/spaces/davila7/try-gorilla/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import openai
-import urllib.parse
-import streamlit as st
-
-openai.api_key = "EMPTY" # Key is ignored and does not matter
-openai.api_base = "http://34.132.127.197:8000/v1"
-
-# Report issues
-def raise_issue(e, model, prompt):
- issue_title = urllib.parse.quote("[bug] Hosted Gorilla: ")
- issue_body = urllib.parse.quote(f"Exception: {e}\nFailed model: {model}, for prompt: {prompt}")
- issue_url = f"https://github.com/ShishirPatil/gorilla/issues/new?assignees=&labels=hosted-gorilla&projects=&template=hosted-gorilla-.md&title={issue_title}&body={issue_body}"
- print(f"An exception has occurred: {e} \nPlease raise an issue here: {issue_url}")
-
-# Query Gorilla server
-def get_gorilla_response(prompt="I would like to translate from English to French.", api_provider="Huggingface"):
- try:
- model = "gorilla-7b-hf-v0"
- if api_provider == "Huggingface":
- model = "gorilla-7b-hf-v0"
- if api_provider == "Torch Hub":
- model = "gorilla-7b-th-v0"
- if api_provider == "TensorFlow Hub":
- model = "gorilla-7b-tf-v0"
-
- completion = openai.ChatCompletion.create(
- model=model,
- messages=[{"role": "user", "content": prompt}]
- )
- return completion.choices[0].message.content
- except Exception as e:
- raise_issue(e, model, prompt)
-
-st.title("Try Gorilla 🦍")
-st.write("Large Language Model Connected with Massive APIs")
-st.markdown('* Read about this demo here: [Medium](https://medium.com/@dan.avila7/try-gorilla-a-large-language-model-connected-with-massive-apis-442f3b554ffb)')
-st.markdown('* All code was written with the help of CodeGPT (https://codegpt.co)')
-
-st.write('---')
-col1, col2 = st.columns(2)
-with col1:
- api_provider = st.radio("Select an API Provider:", ("Huggingface", "Torch Hub", "TensorFlow Hub"))
-with col2:
- input = st.text_input("Ask here:")
- st.write("Example: I would like to translate from English to French.")
-
-if api_provider and input:
- if st.button("Run Gorilla"):
- with st.spinner('Loading...'):
- st.success(get_gorilla_response(input, api_provider))
\ No newline at end of file
diff --git a/spaces/dawdqd/ChuanhuChatGPT/modules/models/StableLM.py b/spaces/dawdqd/ChuanhuChatGPT/modules/models/StableLM.py
deleted file mode 100644
index f4affc3699e335f1e42bf5fc8c93e92a41d027fe..0000000000000000000000000000000000000000
--- a/spaces/dawdqd/ChuanhuChatGPT/modules/models/StableLM.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
-import time
-import numpy as np
-from torch.nn import functional as F
-import os
-from .base_model import BaseLLMModel
-from threading import Thread
-
-STABLELM_MODEL = None
-STABLELM_TOKENIZER = None
-
-
-class StopOnTokens(StoppingCriteria):
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
- stop_ids = [50278, 50279, 50277, 1, 0]
- for stop_id in stop_ids:
- if input_ids[0][-1] == stop_id:
- return True
- return False
-
-
-class StableLM_Client(BaseLLMModel):
- def __init__(self, model_name, user_name="") -> None:
- super().__init__(model_name=model_name, user=user_name)
- global STABLELM_MODEL, STABLELM_TOKENIZER
- print(f"Starting to load StableLM to memory")
- if model_name == "StableLM":
- model_name = "stabilityai/stablelm-tuned-alpha-7b"
- else:
- model_name = f"models/{model_name}"
- if STABLELM_MODEL is None:
- STABLELM_MODEL = AutoModelForCausalLM.from_pretrained(
- model_name, torch_dtype=torch.float16).cuda()
- if STABLELM_TOKENIZER is None:
- STABLELM_TOKENIZER = AutoTokenizer.from_pretrained(model_name)
- self.generator = pipeline(
- 'text-generation', model=STABLELM_MODEL, tokenizer=STABLELM_TOKENIZER, device=0)
- print(f"Sucessfully loaded StableLM to the memory")
- self.system_prompt = """StableAssistant
-- StableAssistant is A helpful and harmless Open Source AI Language Model developed by Stability and CarperAI.
-- StableAssistant is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
-- StableAssistant is more than just an information source, StableAssistant is also able to write poetry, short stories, and make jokes.
-- StableAssistant will refuse to participate in anything that could harm a human."""
- self.max_generation_token = 1024
- self.top_p = 0.95
- self.temperature = 1.0
-
- def _get_stablelm_style_input(self):
- history = self.history + [{"role": "assistant", "content": ""}]
- print(history)
- messages = self.system_prompt + \
- "".join(["".join(["<|USER|>"+history[i]["content"], "<|ASSISTANT|>"+history[i + 1]["content"]])
- for i in range(0, len(history), 2)])
- return messages
-
- def _generate(self, text, bad_text=None):
- stop = StopOnTokens()
- result = self.generator(text, max_new_tokens=self.max_generation_token, num_return_sequences=1, num_beams=1, do_sample=True,
- temperature=self.temperature, top_p=self.top_p, top_k=1000, stopping_criteria=StoppingCriteriaList([stop]))
- return result[0]["generated_text"].replace(text, "")
-
- def get_answer_at_once(self):
- messages = self._get_stablelm_style_input()
- return self._generate(messages), len(messages)
-
- def get_answer_stream_iter(self):
- stop = StopOnTokens()
- messages = self._get_stablelm_style_input()
-
- # model_inputs = tok([messages], return_tensors="pt")['input_ids'].cuda()[:, :4096-1024]
- model_inputs = STABLELM_TOKENIZER(
- [messages], return_tensors="pt").to("cuda")
- streamer = TextIteratorStreamer(
- STABLELM_TOKENIZER, timeout=10., skip_prompt=True, skip_special_tokens=True)
- generate_kwargs = dict(
- model_inputs,
- streamer=streamer,
- max_new_tokens=self.max_generation_token,
- do_sample=True,
- top_p=self.top_p,
- top_k=1000,
- temperature=self.temperature,
- num_beams=1,
- stopping_criteria=StoppingCriteriaList([stop])
- )
- t = Thread(target=STABLELM_MODEL.generate, kwargs=generate_kwargs)
- t.start()
-
- partial_text = ""
- for new_text in streamer:
- partial_text += new_text
- yield partial_text
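The prompt that `_get_stablelm_style_input` assembles is easier to read spelled out. The sketch below reproduces the same string-building for a single user turn; the system prompt is abbreviated here, and the trailing `<|ASSISTANT|>` marker is what the model is asked to complete.

```python
# What `_get_stablelm_style_input` produces for one user turn (system prompt abbreviated).
system_prompt = "StableAssistant ..."           # the real prompt above is longer
history = [{"role": "user", "content": "Hi!"}]

history = history + [{"role": "assistant", "content": ""}]
prompt = system_prompt + "".join(
    "<|USER|>" + history[i]["content"] + "<|ASSISTANT|>" + history[i + 1]["content"]
    for i in range(0, len(history), 2)
)
print(prompt)   # StableAssistant ...<|USER|>Hi!<|ASSISTANT|>
```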
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/security/open_id_connect_url.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/security/open_id_connect_url.py
deleted file mode 100644
index 4e65f1f6c486fa579554c61b9d137c7fda1f1b17..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/security/open_id_connect_url.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from typing import Optional
-
-from fastapi.openapi.models import OpenIdConnect as OpenIdConnectModel
-from fastapi.security.base import SecurityBase
-from starlette.exceptions import HTTPException
-from starlette.requests import Request
-from starlette.status import HTTP_403_FORBIDDEN
-
-
-class OpenIdConnect(SecurityBase):
- def __init__(
- self,
- *,
- openIdConnectUrl: str,
- scheme_name: Optional[str] = None,
- description: Optional[str] = None,
- auto_error: bool = True,
- ):
- self.model = OpenIdConnectModel(
- openIdConnectUrl=openIdConnectUrl, description=description
- )
- self.scheme_name = scheme_name or self.__class__.__name__
- self.auto_error = auto_error
-
- async def __call__(self, request: Request) -> Optional[str]:
- authorization = request.headers.get("Authorization")
- if not authorization:
- if self.auto_error:
- raise HTTPException(
- status_code=HTTP_403_FORBIDDEN, detail="Not authenticated"
- )
- else:
- return None
- return authorization
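A minimal sketch of how this class is typically wired into a route with `Depends` (the discovery URL below is a placeholder): the dependency only returns the raw `Authorization` header, or raises 403 / returns `None` depending on `auto_error`; validating the token against the provider is left to the application.

```python
from fastapi import Depends, FastAPI
from fastapi.security import OpenIdConnect

app = FastAPI()
oidc = OpenIdConnect(
    openIdConnectUrl="https://example.com/.well-known/openid-configuration",  # placeholder
    auto_error=True,
)

@app.get("/me")
async def read_me(authorization: str = Depends(oidc)):
    # Token validation is the application's responsibility.
    return {"authorization": authorization}
```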
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_hf_folder.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_hf_folder.py
deleted file mode 100644
index 5c9f07c9ba3a3d860e197312023857cb97230361..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_hf_folder.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# coding=utf-8
-# Copyright 2022-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contain helper class to retrieve/store token from/to local cache."""
-import os
-import warnings
-from pathlib import Path
-from typing import Optional
-
-from .. import constants
-
-
-class HfFolder:
- path_token = Path(constants.HF_TOKEN_PATH)
- # Private attribute. Will be removed in v0.15
- _old_path_token = Path(constants._OLD_HF_TOKEN_PATH)
-
- @classmethod
- def save_token(cls, token: str) -> None:
- """
- Save token, creating folder as needed.
-
- Token is saved in the huggingface home folder. You can configure it by setting
- the `HF_HOME` environment variable.
-
- Args:
- token (`str`):
- The token to save to the [`HfFolder`]
- """
- cls.path_token.parent.mkdir(parents=True, exist_ok=True)
- cls.path_token.write_text(token)
-
- @classmethod
- def get_token(cls) -> Optional[str]:
- """
- Get token or None if not existent.
-
- Note that a token can be also provided using the `HUGGING_FACE_HUB_TOKEN` environment variable.
-
- Token is saved in the huggingface home folder. You can configure it by setting
- the `HF_HOME` environment variable. Previous location was `~/.huggingface/token`.
- If token is found in old location but not in new location, it is copied there first.
- For more details, see https://github.com/huggingface/huggingface_hub/issues/1232.
-
- Returns:
- `str` or `None`: The token, `None` if it doesn't exist.
- """
- # 0. Check if token exist in old path but not new location
- try:
- cls._copy_to_new_path_and_warn()
- except Exception: # if not possible (e.g. PermissionError), do not raise
- pass
-
- # 1. Is it set by environment variable ?
- token: Optional[str] = os.environ.get("HUGGING_FACE_HUB_TOKEN")
- if token is not None:
- return token
-
- # 2. Is it set in token path ?
- try:
- return cls.path_token.read_text()
- except FileNotFoundError:
- return None
-
- @classmethod
- def delete_token(cls) -> None:
- """
- Deletes the token from storage. Does not fail if token does not exist.
- """
- try:
- cls.path_token.unlink()
- except FileNotFoundError:
- pass
-
- try:
- cls._old_path_token.unlink()
- except FileNotFoundError:
- pass
-
- @classmethod
- def _copy_to_new_path_and_warn(cls):
- if cls._old_path_token.exists() and not cls.path_token.exists():
- cls.save_token(cls._old_path_token.read_text())
- warnings.warn(
- f"A token has been found in `{cls._old_path_token}`. This is the old"
- " path where tokens were stored. The new location is"
- f" `{cls.path_token}` which is configurable using `HF_HOME` environment"
- " variable. Your token has been copied to this new location. You can"
- " now safely delete the old token file manually or use"
- " `huggingface-cli logout`."
- )
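A short usage sketch of the lookup order described in `get_token` (token values are dummies): the `HUGGING_FACE_HUB_TOKEN` environment variable takes precedence over the cached token file.

```python
import os
from huggingface_hub import HfFolder

HfFolder.save_token("hf_token_from_file")       # note: writes the real HF token file on disk
print(HfFolder.get_token())                     # -> "hf_token_from_file"

os.environ["HUGGING_FACE_HUB_TOKEN"] = "hf_token_from_env"
print(HfFolder.get_token())                     # -> "hf_token_from_env" (env var wins)

del os.environ["HUGGING_FACE_HUB_TOKEN"]
HfFolder.delete_token()                         # no error if the file is already gone
```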
diff --git a/spaces/deepaksarika01/youtube-video-qa-lamini/README.md b/spaces/deepaksarika01/youtube-video-qa-lamini/README.md
deleted file mode 100644
index 88f9fd7fff587b0c4d69a6619465a05df42afce2..0000000000000000000000000000000000000000
--- a/spaces/deepaksarika01/youtube-video-qa-lamini/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Youtube Video Qa Lamini
-emoji: 🚀
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.43.2
-app_file: app.py
-pinned: false
-license: creativeml-openrail-m
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/deepdml/whisper-demo-mix-es/app.py b/spaces/deepdml/whisper-demo-mix-es/app.py
deleted file mode 100644
index d6162e149224cf7038c5a33808f968524effe21e..0000000000000000000000000000000000000000
--- a/spaces/deepdml/whisper-demo-mix-es/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import torch
-
-import gradio as gr
-import pytube as pt
-from transformers import pipeline
-from huggingface_hub import model_info
-
-MODEL_NAME = "deepdml/whisper-medium-mix-es" #this always needs to stay in line 8 :D sorry for the hackiness
-lang = "es"
-
-device = 0 if torch.cuda.is_available() else "cpu"
-pipe = pipeline(
- task="automatic-speech-recognition",
- model=MODEL_NAME,
- chunk_length_s=30,
- device=device,
-)
-
-pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe")
-
-def transcribe(microphone, file_upload):
- warn_output = ""
- if (microphone is not None) and (file_upload is not None):
- warn_output = (
- "WARNING: You've uploaded an audio file and used the microphone. "
- "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n"
- )
-
- elif (microphone is None) and (file_upload is None):
- return "ERROR: You have to either use the microphone or upload an audio file"
-
- file = microphone if microphone is not None else file_upload
-
- text = pipe(file)["text"]
-
- return warn_output + text
-
-
-def _return_yt_html_embed(yt_url):
- video_id = yt_url.split("?v=")[-1]
- HTML_str = (
-        f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>'
-        " </center>"
- )
- return HTML_str
-
-
-def yt_transcribe(yt_url):
- yt = pt.YouTube(yt_url)
- html_embed_str = _return_yt_html_embed(yt_url)
- stream = yt.streams.filter(only_audio=True)[0]
- stream.download(filename="audio.mp3")
-
- text = pipe("audio.mp3")["text"]
-
- return html_embed_str, text
-
-
-demo = gr.Blocks()
-
-mf_transcribe = gr.Interface(
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath", optional=True),
- gr.inputs.Audio(source="upload", type="filepath", optional=True),
- ],
- outputs="text",
- layout="horizontal",
- theme="huggingface",
- title="Whisper Demo: Transcribe Audio",
- description=(
- "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the the fine-tuned"
- f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files"
- " of arbitrary length."
- ),
- allow_flagging="never",
-)
-
-yt_transcribe = gr.Interface(
- fn=yt_transcribe,
- inputs=[gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL")],
- outputs=["html", "text"],
- layout="horizontal",
- theme="huggingface",
- title="Whisper Demo: Transcribe YouTube",
- description=(
- "Transcribe long-form YouTube videos with the click of a button! Demo uses the the fine-tuned checkpoint:"
- f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files of"
- " arbitrary length."
- ),
- allow_flagging="never",
-)
-
-with demo:
- gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"])
-
-demo.launch(enable_queue=True)
diff --git a/spaces/deepwisdom/MetaGPT/metagpt/roles/product_manager.py b/spaces/deepwisdom/MetaGPT/metagpt/roles/product_manager.py
deleted file mode 100644
index b42e9bb294484d57aa38a01e23ef98104483a5c6..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/metagpt/roles/product_manager.py
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/11 14:43
-@Author : alexanderwu
-@File : product_manager.py
-"""
-from metagpt.actions import BossRequirement, WritePRD
-from metagpt.roles import Role
-
-
-class ProductManager(Role):
- def __init__(self, name="Alice", profile="Product Manager", goal="Efficiently create a successful product",
- constraints=""):
- super().__init__(name, profile, goal, constraints)
- self._init_actions([WritePRD])
- self._watch([BossRequirement])
diff --git a/spaces/derek-thomas/disc-golf-simulator/utilities/get_disc.py b/spaces/derek-thomas/disc-golf-simulator/utilities/get_disc.py
deleted file mode 100644
index 4eea761a6a90e4da8f8dfed2ae1e621e5cec5b1d..0000000000000000000000000000000000000000
--- a/spaces/derek-thomas/disc-golf-simulator/utilities/get_disc.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import requests
-
-headers = {
- 'authority': 'alldiscs.com',
- 'accept': 'application/json, text/javascript, */*; q=0.01',
- 'accept-language': 'en-US,en;q=0.6',
- 'content-type': 'application/x-www-form-urlencoded; charset=UTF-8',
- 'origin': 'https://alldiscs.com',
- 'referer': 'https://alldiscs.com/',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'sec-gpc': '1',
- 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36',
- 'x-requested-with': 'XMLHttpRequest',
-}
-
-params = {
- 'action': 'get_wdtable',
- 'table_id': '5',
-}
-
-data = {
- 'draw': '4',
- 'columns[0][data]': '0',
- 'columns[0][name]': 'wdt_ID',
- 'columns[0][searchable]': 'true',
- 'columns[0][orderable]': 'true',
- 'columns[0][search][value]': '',
- 'columns[0][search][regex]': 'false',
- 'columns[1][data]': '1',
- 'columns[1][name]': 'brand',
- 'columns[1][searchable]': 'true',
- 'columns[1][orderable]': 'true',
- 'columns[1][search][value]': '',
- 'columns[1][search][regex]': 'false',
- 'columns[2][data]': '2',
- 'columns[2][name]': 'mold',
- 'columns[2][searchable]': 'true',
- 'columns[2][orderable]': 'true',
- 'columns[2][search][value]': '',
- 'columns[2][search][regex]': 'false',
- 'columns[3][data]': '3',
- 'columns[3][name]': 'type',
- 'columns[3][searchable]': 'true',
- 'columns[3][orderable]': 'true',
- 'columns[3][search][value]': 'Distance|Fairway|Midrange|Putter',
- 'columns[3][search][regex]': 'false',
- 'columns[4][data]': '4',
- 'columns[4][name]': 'speed',
- 'columns[4][searchable]': 'true',
- 'columns[4][orderable]': 'true',
- 'columns[4][search][value]': '1|15',
- 'columns[4][search][regex]': 'false',
- 'columns[5][data]': '5',
- 'columns[5][name]': 'glide',
- 'columns[5][searchable]': 'true',
- 'columns[5][orderable]': 'true',
- 'columns[5][search][value]': '1|7',
- 'columns[5][search][regex]': 'false',
- 'columns[6][data]': '6',
- 'columns[6][name]': 'turn',
- 'columns[6][searchable]': 'true',
- 'columns[6][orderable]': 'true',
- 'columns[6][search][value]': '-5|1',
- 'columns[6][search][regex]': 'false',
- 'columns[7][data]': '7',
- 'columns[7][name]': 'fade',
- 'columns[7][searchable]': 'true',
- 'columns[7][orderable]': 'true',
- 'columns[7][search][value]': '0|5',
- 'columns[7][search][regex]': 'false',
- 'columns[8][data]': '8',
- 'columns[8][name]': 'inproduction',
- 'columns[8][searchable]': 'true',
- 'columns[8][orderable]': 'true',
- 'columns[8][search][value]': 'Coming Soon|Yes',
- 'columns[8][search][regex]': 'false',
- 'columns[9][data]': '9',
- 'columns[9][name]': 'dateapproved',
- 'columns[9][searchable]': 'true',
- 'columns[9][orderable]': 'true',
- 'columns[9][search][value]': '|',
- 'columns[9][search][regex]': 'false',
- 'columns[10][data]': '10',
- 'columns[10][name]': 'link',
- 'columns[10][searchable]': 'true',
- 'columns[10][orderable]': 'true',
- 'columns[10][search][value]': '',
- 'columns[10][search][regex]': 'false',
- 'order[0][column]': '0',
- 'order[0][dir]': 'asc',
- 'start': '0',
- 'length': '10',
- 'search[value]': 'wraith',
- 'search[regex]': 'false',
- 'wdtNonce': '511bd3400c',
- 'sRangeSeparator': '|',
-}
-
-response = requests.post('https://alldiscs.com/wp-admin/admin-ajax.php', params=params, headers=headers, data=data)
\ No newline at end of file
diff --git a/spaces/deydebasmita91/Twitter_Live/app.py b/spaces/deydebasmita91/Twitter_Live/app.py
deleted file mode 100644
index fbcb5d29edbecc18d33210b63095d33d1d60fa32..0000000000000000000000000000000000000000
--- a/spaces/deydebasmita91/Twitter_Live/app.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import tweepy as tw
-import streamlit as st
-import pandas as pd
-from transformers import pipeline
-consumer_key = '9zDPUQtTVTI6ZkVfgBfQbfEg1'
-consumer_secret = 'pM9gNhj8lL6tfo3UdXBSQfS9dVT1mGQxqMSaqpPd3TmwSDXc0C'
-access_token = '2152566757-N0PSK7s7yruqL80HTDDq9FUESZVOI6qtLD4DekD'
-access_token_secret = 'DLrlDY5W9i7Hgksx41eaXV9A4gS3eUf0VoBu0VMBFJUnm'
-auth = tw.OAuthHandler(consumer_key, consumer_secret)
-auth.set_access_token(access_token, access_token_secret)
-api = tw.API(auth, wait_on_rate_limit=True)
-classifier = pipeline('sentiment-analysis')
-st.title('Live Twitter Sentiment Analysis with Tweepy and HuggingFace Transformers')
-st.markdown('This app uses tweepy to get tweets from twitter based on the input name/phrase. It then processes the tweets through HuggingFace transformers pipeline function for sentiment analysis. The resulting sentiments and corresponding tweets are then put in a dataframe for display which is what you see as result.')
-def run():
- with st.form(key ='Enter name'):
- search_words = st.text_input('Enter the name for which you want to know the sentiment')
- number_of_tweets = st.number_input('Enter the number of latest tweets for which you want to know the sentiment(Maximum 50 tweets)',
- 0,50,10)
- submit_button = st.form_submit_button(label='Submit')
- if submit_button:
- tweets =tw.Cursor(api.search_tweets,q=search_words,lang="en").items(number_of_tweets)
- tweet_list = [i.text for i in tweets]
- p = [i for i in classifier(tweet_list)]
- q=[p[i]['label'] for i in range(len(p))]
- df = pd.DataFrame(list(zip(tweet_list, q)),columns =['Latest '+str(number_of_tweets)+' Tweets'+' on '+search_words, 'sentiment'])
- st.write(df)
-
-
-if __name__=='__main__':
- run()
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mortal Kombat 3 Game Free HOT! Download For Pc Full Version.md b/spaces/diacanFperku/AutoGPT/Mortal Kombat 3 Game Free HOT! Download For Pc Full Version.md
deleted file mode 100644
index c1ee8ce30a1059234ecabcac25123061c1f1dbc1..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mortal Kombat 3 Game Free HOT! Download For Pc Full Version.md
+++ /dev/null
@@ -1,149 +0,0 @@
-
-
Mortal Kombat 3 Game Free Download for PC Full Version: A Review
-
Mortal Kombat 3 is one of the most legendary fighting games of all time. Released in 1995 by Midway Games, it is the third installment in the Mortal Kombat series, which is known for its brutal and gory gameplay, its iconic characters and fatalities, and its rich and complex lore. Mortal Kombat 3 introduced new features and improvements that made it a classic among fans and critics alike. In this article, we will show you how to download and play Mortal Kombat 3 game free for PC full version, as well as review its features and benefits.
-
What is Mortal Kombat 3 Game Free for PC Full Version?
-
Mortal Kombat 3 game free for PC full version is a modified version of the original Mortal Kombat 3 game that was released for arcades and various home consoles in 1995. It is based on the Ultimate Mortal Kombat 3 version, which was an enhanced update of the original game that added new characters, stages, modes, and gameplay tweaks. The PC version of Mortal Kombat 3 game free for PC full version is an accurate and optimized emulation of the arcade version, which means it has the same graphics, sound, and gameplay as the original. However, it also has some advantages over the arcade version, such as being able to play it on any modern PC or laptop, having no need for coins or tokens, and being able to customize your controls and settings.
-
mortal kombat 3 game free download for pc full version
What are the features and benefits of Mortal Kombat 3 Game Free for PC Full Version?
-
Mortal Kombat 3 game free for PC full version has many features and benefits that make it a great choice for fighting game enthusiasts. Some of them are:
-
-
It has a large and diverse roster of playable characters, including all the fighters from the original Mortal Kombat 3 game, plus four additional fighters from previous games (Jade, Kitana, Reptile, and Scorpion), and three new fighters that were added later (Mileena, Ermac, and Classic Sub-Zero). You can also unlock a hidden fighter (Smoke) by entering a secret code before a match.
-
It has a variety of game modes to choose from, such as Arcade mode, where you fight against a series of opponents until you face the final boss (Shao Kahn); Versus mode, where you can challenge another player or the computer in a one-on-one match; Tournament mode, where you can compete with up to eight players in a single-elimination bracket; Practice mode, where you can train your skills and learn new moves; and Shao Kahn's Lost Treasures mode, where you can unlock various rewards by completing certain tasks.
-
It has a deep and complex combat system that allows you to perform various attacks, combos, special moves, and finishing moves. You can also use a Run button to dash towards your opponent, a Block button to defend yourself from attacks, and a High Punch button to uppercut your opponent into the air. You can also perform different types of fatalities depending on your distance from your opponent: close-range fatalities (such as ripping out their heart or spine), mid-range fatalities (such as slicing them in half or burning them alive), long-range fatalities (such as shooting them with a laser or freezing them), stage fatalities (such as throwing them into spikes or acid), or animalities (where you transform into an animal and maul them).
-
It has stunning graphics and sound that capture the atmosphere and intensity of the Mortal Kombat universe. The characters are detailed and animated with realistic movements and expressions. The stages are varied and colorful, with different backgrounds and interactive elements. The sound effects are crisp and clear, with punches, kicks, screams, and explosions. The music is catchy and energetic, with different themes for each stage.
-
It is easy to download and install on your PC. You just need to find a reliable source that offers it for free or for a reasonable price. You should also make sure that the source is safe and secure, and that it does not contain any viruses or malware. You should also check the feedback and ratings of the source before you download anything from it.
-
-
-
How to download and install Mortal Kombat 3 Game Free for PC Full Version?
-
-
If you want to download and install Mortal Kombat 3 game free for PC full version on your PC, you need to follow these steps:
-
Find a reliable source that offers Mortal Kombat 3 game free for PC full version. You can use this link as an example: https://www.filehorse.com/download-ultimate-mortal-kombat-3/
-
Download the file from the source. It should be an ISO file named Ultimate_Mortal_Kombat_3.iso
-
Burn the ISO file onto a CD or DVD using any burning software. Alternatively, you can create a bootable USB drive using tools like Rufus or Universal USB Installer.
-
Insert the CD or USB drive into your PC and restart it. Boot from the CD or USB drive by pressing F12 or any other key depending on your BIOS settings.
-
Select the Custom (advanced) installation option and choose a clean partition where you want to install Mortal Kombat 3 game free for PC full version.
-
Wait for the installation process to complete. It may take some time depending on your hardware specifications.
-
After installation is done, remove the CD or USB drive and restart your PC.
-
You have successfully installed Mortal Kombat 3 game free for PC full version on your PC. You can now play it normally.
-
Note: If you have any problems or errors during the installation process, you can try to troubleshoot them using tools like System Restore, System Repair, Safe Mode, Event Viewer, or Task Manager. You can access these tools from the Start menu or the F8 key during booting.
-
Conclusion
-
Mortal Kombat 3 game free for PC full version is an excellent fighting game that offers hours of fun and entertainment. It has a large roster of characters, a variety of game modes, a deep combat system, and stunning graphics and sound. It is also easy to download and install on your PC using an ISO file. If you are looking for a classic fighting game that will challenge your skills and satisfy your bloodlust, you should definitely try Mortal Kombat 3 game free for PC full version. You will not regret it!
-
What are the pros and cons of Mortal Kombat 3 Game Free for PC Full Version?
-
Mortal Kombat 3 game free for PC full version is not an official version of Mortal Kombat 3 from Midway Games. It is a fan-made modification that may not be legal or safe in your country or region. Therefore, you should weigh the pros and cons of Mortal Kombat 3 game free for PC full version before you decide to download and install it on your PC. Some of the pros and cons are:
-
-
Pros:
-
-
It is free to download and play, which means you can enjoy a classic fighting game without spending any money.
-
It is compatible with any modern PC or laptop, which means you can play it on any device that meets the minimum system requirements.
-
It has all the features and benefits of the original Mortal Kombat 3 game, plus some additional ones that make it more fun and challenging.
-
It has a loyal and active fan community that supports and updates the game regularly.
-
-
-
Cons:
-
-
It may not be legal or safe in your country or region, which means you may face legal or security issues if you download and play it.
-
It may not be compatible with some games and applications that require genuine Windows validation, which means you may encounter some errors or limitations if you use them.
-
It may have some bugs or glitches that are not present in the original Mortal Kombat 3 game, which means you may experience some problems or crashes while playing it.
-
It may not have some features or functions that are available in the original Mortal Kombat 3 game, which means you may miss out on some aspects of the game.
-
-
-
-
-
How to play Mortal Kombat 3 Game Free for PC Full Version?
-
-
If you have downloaded and installed Mortal Kombat 3 game free for PC full version on your PC, you can play it by following these steps:
-
-
-
-
-
Launch the game from your desktop shortcut or Start menu.
-
-
Select your preferred game mode from the main menu. You can choose from Arcade mode, Versus mode, Tournament mode, Practice mode, or Shao Kahn's Lost Treasures mode.
-
-
Select your preferred character from the character selection screen. You can choose from 23 fighters, each with their own special moves, combos, and fatalities. You can also unlock a hidden fighter (Smoke) by entering a secret code before a match.
-
-
Select your preferred stage from the stage selection screen. You can choose from 15 stages, each with their own background and interactive elements.
-
-
Fight against your opponent using your keyboard or controller. You can use various buttons to perform attacks, combos, special moves, and finishing moves. You can also use a Run button to dash towards your opponent, a Block button to defend yourself from attacks, and a High Punch button to uppercut your opponent into the air.
-
-
Win the match by depleting your opponent's health bar or by performing a fatality when they are stunned. You can perform different types of fatalities depending on your distance from your opponent: close-range fatalities (such as ripping out their heart or spine), mid-range fatalities (such as slicing them in half or burning them alive), long-range fatalities (such as shooting them with a laser or freezing them), stage fatalities (such as throwing them into spikes or acid), or animalities (where you transform into an animal and maul them).
-
-
Continue playing until you complete your chosen game mode or until you lose a match. You can also quit the game at any time by pressing Esc or Pause.
-
-
-
-
Note: If you want to customize your controls and settings, you can access the options menu from the main menu or during a match. You can change various options such as sound volume, difficulty level, blood level, timer speed, control layout, etc.
-
Conclusion
-
Mortal Kombat 3 game free download for PC full version is a modified version of the original Mortal Kombat 3 game that is specially designed for PC gamers who want to enjoy a classic fighting game. It has many features and benefits that make it more stable, reliable, and fun for running games. It also has a sleek and stylish interface that suits the gaming theme. It supports all the latest games and DirectX 11 features. It also has low memory consumption and fast booting time. It can receive all the future updates and packages from Midway Games without any problems. It also has some useful options and tools that make it easier to customize and manage your system settings.
-
-
However, Mortal Kombat 3 game free download for PC full version is not an official version of Mortal Kombat 3 from Midway Games. It may not be compatible with some games and applications that require genuine Windows validation. It may also have some bugs or errors that are not present in the original Mortal Kombat 3 game. It may also not be secure or safe as the original Mortal Kombat 3 game, which means it may be vulnerable to viruses, malware, or hackers. It may also not have some features or functions that are available in the original Mortal Kombat 3 game. It may also not have some drivers or software that are needed for some hardware devices or peripherals.
-
-
Therefore, you should weigh the pros and cons of Mortal Kombat 3 game free download for PC full version before you decide to download and install it on your PC. You should also backup your important data and files before you proceed with the installation process. You should also use a VPN service or a proxy server to protect your identity and privacy online. You should also scan your downloaded files with a reliable antivirus program to protect your system from viruses, malware, or hackers.
-
-
If you are interested in downloading and installing Mortal Kombat 3 game free download for PC full version on your PC, you can find it on various sources online such as FileHorse.com or Malavida.com. However, you should be careful when downloading anything from these sources, as they may not be legal or safe in your country or region. You should also check the feedback and ratings of these sources before you download anything from them.
-
-
We hope this article has helped you to learn more about Mortal Kombat 3 game free download for PC full version and how to download and install it on your PC. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/japanese.py b/spaces/digitalxingtong/Eileen-Bert-Vits2/text/japanese.py
deleted file mode 100644
index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Eileen-Bert-Vits2/text/japanese.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py
-import re
-import sys
-
-import pyopenjtalk
-
-from text import symbols
-
-# Regular expression matching Japanese without punctuation marks:
-_japanese_characters = re.compile(
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# Regular expression matching non-Japanese characters or punctuation marks:
-_japanese_marks = re.compile(
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
-
-# List of (symbol, Japanese) pairs for marks:
-_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('%', 'パーセント')
-]]
-
-
-# List of (consonant, sokuon) pairs:
-_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'Q([↑↓]*[kg])', r'k#\1'),
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
- (r'Q([↑↓]*[sʃ])', r's\1'),
- (r'Q([↑↓]*[pb])', r'p#\1')
-]]
-
-# List of (consonant, hatsuon) pairs:
-_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
- (r'N([↑↓]*[pbm])', r'm\1'),
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
- (r'N([↑↓]*[tdn])', r'n\1'),
- (r'N([↑↓]*[kg])', r'ŋ\1')
-]]
-
-
-
-def post_replace_ph(ph):
- rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- 'v': "V"
- }
- if ph in rep_map.keys():
- ph = rep_map[ph]
- if ph in symbols:
- return ph
- if ph not in symbols:
- ph = 'UNK'
- return ph
-
-def symbols_to_japanese(text):
- for regex, replacement in _symbols_to_japanese:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def preprocess_jap(text):
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
- text = symbols_to_japanese(text)
- sentences = re.split(_japanese_marks, text)
- marks = re.findall(_japanese_marks, text)
- text = []
- for i, sentence in enumerate(sentences):
- if re.match(_japanese_characters, sentence):
- p = pyopenjtalk.g2p(sentence)
- text += p.split(" ")
-
- if i < len(marks):
- text += [marks[i].replace(' ', '')]
- return text
-
-def text_normalize(text):
- # todo: jap text normalize
- return text
-
-def g2p(norm_text):
- phones = preprocess_jap(norm_text)
- phones = [post_replace_ph(i) for i in phones]
- # todo: implement tones and word2ph
- tones = [0 for i in phones]
- word2ph = [1 for i in phones]
- return phones, tones, word2ph
-
-
-if __name__ == '__main__':
- for line in open("../../../Downloads/transcript_utf8.txt").readlines():
- text = line.split(":")[1]
- phones, tones, word2ph = g2p(text)
- for p in phones:
- if p == "z":
- print(text, phones)
- sys.exit(0)
diff --git a/spaces/digitalxingtong/Luzao-Bert-Vits2/bert_gen.py b/spaces/digitalxingtong/Luzao-Bert-Vits2/bert_gen.py
deleted file mode 100644
index 467655b2c4171608ad690fe7dec350db85f84f1b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Luzao-Bert-Vits2/bert_gen.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import torch
-from torch.utils.data import DataLoader
-from multiprocessing import Pool
-import commons
-import utils
-from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate
-from tqdm import tqdm
-import warnings
-
-from text import cleaned_text_to_sequence, get_bert
-
-config_path = 'configs/config.json'
-hps = utils.get_hparams_from_file(config_path)
-
-def process_line(line):
- _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|")
- phone = phones.split(" ")
- tone = [int(i) for i in tone.split(" ")]
- word2ph = [int(i) for i in word2ph.split(" ")]
- w2pho = [i for i in word2ph]
- word2ph = [i for i in word2ph]
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- wav_path = f'{_id}'
-
- bert_path = wav_path.replace(".wav", ".bert.pt")
- try:
- bert = torch.load(bert_path)
- assert bert.shape[-1] == len(phone)
- except:
- bert = get_bert(text, word2ph, language_str)
- assert bert.shape[-1] == len(phone)
- torch.save(bert, bert_path)
-
-
-if __name__ == '__main__':
- lines = []
- with open(hps.data.training_files, encoding='utf-8' ) as f:
- lines.extend(f.readlines())
-
- # with open(hps.data.validation_files, encoding='utf-8' ) as f:
- # lines.extend(f.readlines())
-
-    with Pool(processes=2) as pool:  # suitable for a 40GB A100; if you run out of memory (OOM), decrease the number of processes.
- for _ in tqdm(pool.imap_unordered(process_line, lines)):
- pass
diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py
deleted file mode 100644
index da317184a6eb6f87b0b658e9ff8be289794a0cb2..0000000000000000000000000000000000000000
--- a/spaces/dineshreddy/WALT/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py
+++ /dev/null
@@ -1,237 +0,0 @@
-import mmcv
-import numpy as np
-import torch
-
-from ..builder import BBOX_CODERS
-from .base_bbox_coder import BaseBBoxCoder
-
-
-@BBOX_CODERS.register_module()
-class DeltaXYWHBBoxCoder(BaseBBoxCoder):
- """Delta XYWH BBox coder.
-
- Following the practice in `R-CNN `_,
- this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and
- decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
-
- Args:
- target_means (Sequence[float]): Denormalizing means of target for
- delta coordinates
- target_stds (Sequence[float]): Denormalizing standard deviation of
- target for delta coordinates
- clip_border (bool, optional): Whether clip the objects outside the
- border of the image. Defaults to True.
- """
-
- def __init__(self,
- target_means=(0., 0., 0., 0.),
- target_stds=(1., 1., 1., 1.),
- clip_border=True):
- super(BaseBBoxCoder, self).__init__()
- self.means = target_means
- self.stds = target_stds
- self.clip_border = clip_border
-
- def encode(self, bboxes, gt_bboxes):
- """Get box regression transformation deltas that can be used to
- transform the ``bboxes`` into the ``gt_bboxes``.
-
- Args:
- bboxes (torch.Tensor): Source boxes, e.g., object proposals.
- gt_bboxes (torch.Tensor): Target of the transformation, e.g.,
- ground-truth boxes.
-
- Returns:
- torch.Tensor: Box transformation deltas
- """
-
- assert bboxes.size(0) == gt_bboxes.size(0)
- assert bboxes.size(-1) == gt_bboxes.size(-1) == 4
- encoded_bboxes = bbox2delta(bboxes, gt_bboxes, self.means, self.stds)
- return encoded_bboxes
-
- def decode(self,
- bboxes,
- pred_bboxes,
- max_shape=None,
- wh_ratio_clip=16 / 1000):
- """Apply transformation `pred_bboxes` to `boxes`.
-
- Args:
- bboxes (torch.Tensor): Basic boxes. Shape (B, N, 4) or (N, 4)
- pred_bboxes (Tensor): Encoded offsets with respect to each roi.
- Has shape (B, N, num_classes * 4) or (B, N, 4) or
- (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H
-                when rois is a grid of anchors. Offset encoding follows [1]_.
- max_shape (Sequence[int] or torch.Tensor or Sequence[
-                Sequence[int]], optional): Maximum bounds for boxes, specifies
- (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then
- the max_shape should be a Sequence[Sequence[int]]
- and the length of max_shape should also be B.
- wh_ratio_clip (float, optional): The allowed ratio between
- width and height.
-
- Returns:
- torch.Tensor: Decoded boxes.
- """
-
- assert pred_bboxes.size(0) == bboxes.size(0)
- if pred_bboxes.ndim == 3:
- assert pred_bboxes.size(1) == bboxes.size(1)
- decoded_bboxes = delta2bbox(bboxes, pred_bboxes, self.means, self.stds,
- max_shape, wh_ratio_clip, self.clip_border)
-
- return decoded_bboxes
-
-
-@mmcv.jit(coderize=True)
-def bbox2delta(proposals, gt, means=(0., 0., 0., 0.), stds=(1., 1., 1., 1.)):
- """Compute deltas of proposals w.r.t. gt.
-
- We usually compute the deltas of x, y, w, h of proposals w.r.t ground
- truth bboxes to get regression target.
- This is the inverse function of :func:`delta2bbox`.
-
- Args:
- proposals (Tensor): Boxes to be transformed, shape (N, ..., 4)
- gt (Tensor): Gt bboxes to be used as base, shape (N, ..., 4)
- means (Sequence[float]): Denormalizing means for delta coordinates
- stds (Sequence[float]): Denormalizing standard deviation for delta
- coordinates
-
- Returns:
- Tensor: deltas with shape (N, 4), where columns represent dx, dy,
- dw, dh.
- """
- assert proposals.size() == gt.size()
-
- proposals = proposals.float()
- gt = gt.float()
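-    # convert both boxes from corner format (x1, y1, x2, y2) to center/size (cx, cy, w, h)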
- px = (proposals[..., 0] + proposals[..., 2]) * 0.5
- py = (proposals[..., 1] + proposals[..., 3]) * 0.5
- pw = proposals[..., 2] - proposals[..., 0]
- ph = proposals[..., 3] - proposals[..., 1]
-
- gx = (gt[..., 0] + gt[..., 2]) * 0.5
- gy = (gt[..., 1] + gt[..., 3]) * 0.5
- gw = gt[..., 2] - gt[..., 0]
- gh = gt[..., 3] - gt[..., 1]
-
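-    # deltas are the center offsets normalized by the proposal size, plus log width/height ratios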
- dx = (gx - px) / pw
- dy = (gy - py) / ph
- dw = torch.log(gw / pw)
- dh = torch.log(gh / ph)
- deltas = torch.stack([dx, dy, dw, dh], dim=-1)
-
- means = deltas.new_tensor(means).unsqueeze(0)
- stds = deltas.new_tensor(stds).unsqueeze(0)
- deltas = deltas.sub_(means).div_(stds)
-
- return deltas
-
-
-@mmcv.jit(coderize=True)
-def delta2bbox(rois,
- deltas,
- means=(0., 0., 0., 0.),
- stds=(1., 1., 1., 1.),
- max_shape=None,
- wh_ratio_clip=16 / 1000,
- clip_border=True):
- """Apply deltas to shift/scale base boxes.
-
- Typically the rois are anchor or proposed bounding boxes and the deltas are
- network outputs used to shift/scale those boxes.
- This is the inverse function of :func:`bbox2delta`.
-
- Args:
- rois (Tensor): Boxes to be transformed. Has shape (N, 4) or (B, N, 4)
- deltas (Tensor): Encoded offsets with respect to each roi.
- Has shape (B, N, num_classes * 4) or (B, N, 4) or
- (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H
-            when rois is a grid of anchors. Offset encoding follows [1]_.
- means (Sequence[float]): Denormalizing means for delta coordinates
- stds (Sequence[float]): Denormalizing standard deviation for delta
- coordinates
- max_shape (Sequence[int] or torch.Tensor or Sequence[
-            Sequence[int]], optional): Maximum bounds for boxes, specifies
- (H, W, C) or (H, W). If rois shape is (B, N, 4), then
- the max_shape should be a Sequence[Sequence[int]]
- and the length of max_shape should also be B.
- wh_ratio_clip (float): Maximum aspect ratio for boxes.
- clip_border (bool, optional): Whether clip the objects outside the
- border of the image. Defaults to True.
-
- Returns:
- Tensor: Boxes with shape (B, N, num_classes * 4) or (B, N, 4) or
- (N, num_classes * 4) or (N, 4), where 4 represent
- tl_x, tl_y, br_x, br_y.
-
- References:
- .. [1] https://arxiv.org/abs/1311.2524
-
- Example:
- >>> rois = torch.Tensor([[ 0., 0., 1., 1.],
- >>> [ 0., 0., 1., 1.],
- >>> [ 0., 0., 1., 1.],
- >>> [ 5., 5., 5., 5.]])
- >>> deltas = torch.Tensor([[ 0., 0., 0., 0.],
- >>> [ 1., 1., 1., 1.],
- >>> [ 0., 0., 2., -1.],
- >>> [ 0.7, -1.9, -0.5, 0.3]])
- >>> delta2bbox(rois, deltas, max_shape=(32, 32, 3))
- tensor([[0.0000, 0.0000, 1.0000, 1.0000],
- [0.1409, 0.1409, 2.8591, 2.8591],
- [0.0000, 0.3161, 4.1945, 0.6839],
- [5.0000, 5.0000, 5.0000, 5.0000]])
- """
- means = deltas.new_tensor(means).view(1,
- -1).repeat(1,
- deltas.size(-1) // 4)
- stds = deltas.new_tensor(stds).view(1, -1).repeat(1, deltas.size(-1) // 4)
- denorm_deltas = deltas * stds + means
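-    # strided slicing below picks the per-class dx/dy/dw/dh columns;
-    # each has shape (..., deltas.size(-1) // 4)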
- dx = denorm_deltas[..., 0::4]
- dy = denorm_deltas[..., 1::4]
- dw = denorm_deltas[..., 2::4]
- dh = denorm_deltas[..., 3::4]
- max_ratio = np.abs(np.log(wh_ratio_clip))
- dw = dw.clamp(min=-max_ratio, max=max_ratio)
- dh = dh.clamp(min=-max_ratio, max=max_ratio)
- x1, y1 = rois[..., 0], rois[..., 1]
- x2, y2 = rois[..., 2], rois[..., 3]
- # Compute center of each roi
- px = ((x1 + x2) * 0.5).unsqueeze(-1).expand_as(dx)
- py = ((y1 + y2) * 0.5).unsqueeze(-1).expand_as(dy)
- # Compute width/height of each roi
- pw = (x2 - x1).unsqueeze(-1).expand_as(dw)
- ph = (y2 - y1).unsqueeze(-1).expand_as(dh)
- # Use exp(network energy) to enlarge/shrink each roi
- gw = pw * dw.exp()
- gh = ph * dh.exp()
- # Use network energy to shift the center of each roi
- gx = px + pw * dx
- gy = py + ph * dy
- # Convert center-xy/width/height to top-left, bottom-right
- x1 = gx - gw * 0.5
- y1 = gy - gh * 0.5
- x2 = gx + gw * 0.5
- y2 = gy + gh * 0.5
-
- bboxes = torch.stack([x1, y1, x2, y2], dim=-1).view(deltas.size())
-
- if clip_border and max_shape is not None:
- if not isinstance(max_shape, torch.Tensor):
- max_shape = x1.new_tensor(max_shape)
- max_shape = max_shape[..., :2].type_as(x1)
- if max_shape.ndim == 2:
- assert bboxes.ndim == 3
- assert max_shape.size(0) == bboxes.size(0)
-
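-        # clamp to the image bounds: build a (..., W, H, W, H, ...) tensor aligned with (x1, y1, x2, y2)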
- min_xy = x1.new_tensor(0)
- max_xy = torch.cat(
- [max_shape] * (deltas.size(-1) // 2),
- dim=-1).flip(-1).unsqueeze(-2)
- bboxes = torch.where(bboxes < min_xy, min_xy, bboxes)
- bboxes = torch.where(bboxes > max_xy, max_xy, bboxes)
-
- return bboxes
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py
deleted file mode 100644
index 0e9768d4742e845a45bd343d70bd06f3cb0e4fcb..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/panet/panet_r50_fpem_ffm_600e_icdar2017.py
+++ /dev/null
@@ -1,33 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_600e.py',
- '../../_base_/det_models/panet_r50_fpem_ffm.py',
- '../../_base_/det_datasets/icdar2017.py',
- '../../_base_/det_pipelines/panet_pipeline.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline_icdar2017 = {{_base_.train_pipeline_icdar2017}}
-test_pipeline_icdar2017 = {{_base_.test_pipeline_icdar2017}}
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline_icdar2017),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2017),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_icdar2017))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/dirge/voicevox/test/test_acoustic_feature_extractor.py b/spaces/dirge/voicevox/test/test_acoustic_feature_extractor.py
deleted file mode 100644
index a82e7afe62eed4f1be1506d7cd34335c769d17d0..0000000000000000000000000000000000000000
--- a/spaces/dirge/voicevox/test/test_acoustic_feature_extractor.py
+++ /dev/null
@@ -1,266 +0,0 @@
-import os
-from pathlib import Path
-from typing import List, Type
-from unittest import TestCase
-
-from voicevox_engine.acoustic_feature_extractor import (
- BasePhoneme,
- JvsPhoneme,
- OjtPhoneme,
-)
-
-
-class TestBasePhoneme(TestCase):
- def setUp(self):
- super().setUp()
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
- self.base_hello_hiho = [
- BasePhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.lab_str = """
- 0.00 1.00 pau
- 1.00 2.00 k
- 2.00 3.00 o
- 3.00 4.00 N
- 4.00 5.00 n
- 5.00 6.00 i
- 6.00 7.00 ch
- 7.00 8.00 i
- 8.00 9.00 w
- 9.00 10.00 a
- 10.00 11.00 pau
- 11.00 12.00 h
- 12.00 13.00 i
- 13.00 14.00 h
- 14.00 15.00 o
- 15.00 16.00 d
- 16.00 17.00 e
- 17.00 18.00 s
- 18.00 19.00 U
- 19.00 20.00 pau
- """.replace(
- " ", ""
- )[
- 1:-1
-        ] # in the triple-quoted string above, remove all spaces and drop the leading and trailing "\n"
-
- def test_repr_(self):
- self.assertEqual(
- self.base_hello_hiho[1].__repr__(), "Phoneme(phoneme='k', start=1, end=2)"
- )
- self.assertEqual(
- self.base_hello_hiho[10].__repr__(),
- "Phoneme(phoneme='pau', start=10, end=11)",
- )
-
- def test_convert(self):
- with self.assertRaises(NotImplementedError):
- BasePhoneme.convert(self.base_hello_hiho)
-
- def test_duration(self):
- self.assertEqual(self.base_hello_hiho[1].duration, 1)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "32.67543 33.48933 e"
- parsed_base_1 = BasePhoneme.parse(parse_str_1)
- parsed_base_2 = BasePhoneme.parse(parse_str_2)
- self.assertEqual(parsed_base_1.phoneme, "pau")
- self.assertEqual(parsed_base_1.start, 0.0)
- self.assertEqual(parsed_base_1.end, 1.0)
- self.assertEqual(parsed_base_2.phoneme, "e")
- self.assertEqual(parsed_base_2.start, 32.68)
- self.assertEqual(parsed_base_2.end, 33.49)
-
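-    # helper shared by the subclasses: writes the phoneme list to a .lab file
-    # ("start end phoneme" per line), checks the text equals lab_str, then reloads and compares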
- def lab_test_base(
- self,
- file_path: str,
- phonemes: List["BasePhoneme"],
- phoneme_class: Type["BasePhoneme"],
- ):
- phoneme_class.save_lab_list(phonemes, Path(file_path))
- with open(file_path, mode="r") as f:
- self.assertEqual(f.read(), self.lab_str)
- result_phoneme = phoneme_class.load_lab_list(Path(file_path))
- self.assertEqual(result_phoneme, phonemes)
- os.remove(file_path)
-
-
-class TestJvsPhoneme(TestBasePhoneme):
- def setUp(self):
- super().setUp()
- base_hello_hiho = [
- JvsPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.jvs_hello_hiho = JvsPhoneme.convert(base_hello_hiho)
-
- def test_phoneme_list(self):
- self.assertEqual(JvsPhoneme.phoneme_list[1], "I")
- self.assertEqual(JvsPhoneme.phoneme_list[14], "gy")
- self.assertEqual(JvsPhoneme.phoneme_list[26], "p")
- self.assertEqual(JvsPhoneme.phoneme_list[38], "z")
-
- def test_const(self):
- self.assertEqual(JvsPhoneme.num_phoneme, 39)
- self.assertEqual(JvsPhoneme.space_phoneme, "pau")
-
- def test_convert(self):
- converted_str_hello_hiho = " ".join([p.phoneme for p in self.jvs_hello_hiho])
- self.assertEqual(
- converted_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
- )
-
- def test_equal(self):
-        # compare with the second element ("k") of jvs_hello_hiho
- true_jvs_phoneme = JvsPhoneme("k", 1, 2)
-        # comparing with an OjtPhoneme also evaluates to True, since __eq__ is implemented in BasePhoneme
- true_ojt_phoneme = OjtPhoneme("k", 1, 2)
-
- false_jvs_phoneme_1 = JvsPhoneme("a", 1, 2)
- false_jvs_phoneme_2 = JvsPhoneme("k", 2, 3)
- self.assertTrue(self.jvs_hello_hiho[1] == true_jvs_phoneme)
- self.assertTrue(self.jvs_hello_hiho[1] == true_ojt_phoneme)
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_1)
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_2)
-
- def test_verify(self):
- for phoneme in self.jvs_hello_hiho:
- phoneme.verify()
-
- def test_phoneme_id(self):
- jvs_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.jvs_hello_hiho])
- self.assertEqual(
- jvs_str_hello_hiho, "0 19 25 2 23 17 7 17 36 4 0 15 17 15 25 9 11 30 3 0"
- )
-
- def test_onehot(self):
- phoneme_id_list = [
- 0,
- 19,
- 25,
- 2,
- 23,
- 17,
- 7,
- 17,
- 36,
- 4,
- 0,
- 15,
- 17,
- 15,
- 25,
- 9,
- 11,
- 30,
- 3,
- 0,
- ]
- for i, phoneme in enumerate(self.jvs_hello_hiho):
- for j in range(JvsPhoneme.num_phoneme):
- if phoneme_id_list[i] == j:
- self.assertEqual(phoneme.onehot[j], True)
- else:
- self.assertEqual(phoneme.onehot[j], False)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "15.32654 16.39454 a"
- parsed_jvs_1 = JvsPhoneme.parse(parse_str_1)
- parsed_jvs_2 = JvsPhoneme.parse(parse_str_2)
- self.assertEqual(parsed_jvs_1.phoneme_id, 0)
- self.assertEqual(parsed_jvs_2.phoneme_id, 4)
-
- def test_lab_list(self):
- self.lab_test_base("./jvs_lab_test", self.jvs_hello_hiho, JvsPhoneme)
-
-
-class TestOjtPhoneme(TestBasePhoneme):
- def setUp(self):
- super().setUp()
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
- base_hello_hiho = [
- OjtPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
- ]
- self.ojt_hello_hiho = OjtPhoneme.convert(base_hello_hiho)
-
- def test_phoneme_list(self):
- self.assertEqual(OjtPhoneme.phoneme_list[1], "A")
- self.assertEqual(OjtPhoneme.phoneme_list[14], "e")
- self.assertEqual(OjtPhoneme.phoneme_list[26], "m")
- self.assertEqual(OjtPhoneme.phoneme_list[38], "ts")
- self.assertEqual(OjtPhoneme.phoneme_list[41], "v")
-
- def test_const(self):
- self.assertEqual(OjtPhoneme.num_phoneme, 45)
- self.assertEqual(OjtPhoneme.space_phoneme, "pau")
-
- def test_convert(self):
- ojt_str_hello_hiho = " ".join([p.phoneme for p in self.ojt_hello_hiho])
- self.assertEqual(
- ojt_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
- )
-
- def test_equal(self):
-        # compare with the tenth element ("a") of ojt_hello_hiho
- true_ojt_phoneme = OjtPhoneme("a", 9, 10)
-        # comparing with a JvsPhoneme also evaluates to True, since __eq__ is implemented in BasePhoneme
- true_jvs_phoneme = JvsPhoneme("a", 9, 10)
-
- false_ojt_phoneme_1 = OjtPhoneme("k", 9, 10)
- false_ojt_phoneme_2 = OjtPhoneme("a", 10, 11)
- self.assertTrue(self.ojt_hello_hiho[9] == true_ojt_phoneme)
- self.assertTrue(self.ojt_hello_hiho[9] == true_jvs_phoneme)
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_1)
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_2)
-
- def test_verify(self):
- for phoneme in self.ojt_hello_hiho:
- phoneme.verify()
-
- def test_phoneme_id(self):
- ojt_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.ojt_hello_hiho])
- self.assertEqual(
- ojt_str_hello_hiho, "0 23 30 4 28 21 10 21 42 7 0 19 21 19 30 12 14 35 6 0"
- )
-
- def test_onehot(self):
- phoneme_id_list = [
- 0,
- 23,
- 30,
- 4,
- 28,
- 21,
- 10,
- 21,
- 42,
- 7,
- 0,
- 19,
- 21,
- 19,
- 30,
- 12,
- 14,
- 35,
- 6,
- 0,
- ]
- for i, phoneme in enumerate(self.ojt_hello_hiho):
- for j in range(OjtPhoneme.num_phoneme):
- if phoneme_id_list[i] == j:
- self.assertEqual(phoneme.onehot[j], True)
- else:
- self.assertEqual(phoneme.onehot[j], False)
-
- def test_parse(self):
- parse_str_1 = "0 1 pau"
- parse_str_2 = "32.67543 33.48933 e"
- parsed_ojt_1 = OjtPhoneme.parse(parse_str_1)
- parsed_ojt_2 = OjtPhoneme.parse(parse_str_2)
- self.assertEqual(parsed_ojt_1.phoneme_id, 0)
- self.assertEqual(parsed_ojt_2.phoneme_id, 14)
-
-    def test_lab_list(self):
- self.lab_test_base("./ojt_lab_test", self.ojt_hello_hiho, OjtPhoneme)
diff --git a/spaces/doevent/colorizator/utils/util.py b/spaces/doevent/colorizator/utils/util.py
deleted file mode 100644
index bc372b21316cb0bb351ba9cdbda3c950a83cc1e7..0000000000000000000000000000000000000000
--- a/spaces/doevent/colorizator/utils/util.py
+++ /dev/null
@@ -1,178 +0,0 @@
-from __future__ import division
-from __future__ import print_function
-import os, glob, shutil, math, json
-from queue import Queue
-from threading import Thread
-from skimage.segmentation import mark_boundaries
-import numpy as np
-from PIL import Image
-import cv2, torch
-import matplotlib.pyplot as plt  # needed by batchGray2Colormap
-
-def get_gauss_kernel(size, sigma):
- '''Function to mimic the 'fspecial' gaussian MATLAB function'''
- x, y = np.mgrid[-size//2 + 1:size//2 + 1, -size//2 + 1:size//2 + 1]
- g = np.exp(-((x**2 + y**2)/(2.0*sigma**2)))
- return g/g.sum()
-
-
-def batchGray2Colormap(gray_batch):
- colormap = plt.get_cmap('viridis')
- heatmap_batch = []
- for i in range(gray_batch.shape[0]):
-        # map the single-channel gray map through the colormap to an RGB heatmap
- gray_map = gray_batch[i, :, :, 0]
- heatmap = (colormap(gray_map) * 2**16).astype(np.uint16)[:,:,:3]
- heatmap_batch.append(heatmap/127.5-1.0)
- return np.array(heatmap_batch)
-
-
-class PlotterThread():
- '''log tensorboard data in a background thread to save time'''
- def __init__(self, writer):
- self.writer = writer
- self.task_queue = Queue(maxsize=0)
- worker = Thread(target=self.do_work, args=(self.task_queue,))
-        worker.daemon = True
- worker.start()
-
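-    # background worker thread: drains the queue and forwards each entry to the TensorBoard writer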
- def do_work(self, q):
- while True:
- content = q.get()
- if content[-1] == 'image':
- self.writer.add_image(*content[:-1])
- elif content[-1] == 'scalar':
- self.writer.add_scalar(*content[:-1])
- else:
- raise ValueError
- q.task_done()
-
- def add_data(self, name, value, step, data_type='scalar'):
- self.task_queue.put([name, value, step, data_type])
-
- def __len__(self):
- return self.task_queue.qsize()
-
-
-def save_images_from_batch(img_batch, save_dir, filename_list, batch_no=-1, suffix=None):
- N,H,W,C = img_batch.shape
- if C == 3:
- #! rgb color image
- for i in range(N):
- # [-1,1] >>> [0,255]
- image = Image.fromarray((127.5*(img_batch[i,:,:,:]+1.)).astype(np.uint8))
- save_name = filename_list[i] if batch_no==-1 else '%05d.png' % (batch_no*N+i)
- save_name = save_name.replace('.png', '-%s.png'%suffix) if suffix else save_name
- image.save(os.path.join(save_dir, save_name), 'PNG')
- elif C == 1:
- #! single-channel gray image
- for i in range(N):
- # [-1,1] >>> [0,255]
- image = Image.fromarray((127.5*(img_batch[i,:,:,0]+1.)).astype(np.uint8))
- save_name = filename_list[i] if batch_no==-1 else '%05d.png' % (batch_no*img_batch.shape[0]+i)
- save_name = save_name.replace('.png', '-%s.png'%suffix) if suffix else save_name
- image.save(os.path.join(save_dir, save_name), 'PNG')
- else:
- #! multi-channel: save each channel as a single image
- for i in range(N):
- # [-1,1] >>> [0,255]
- for j in range(C):
- image = Image.fromarray((127.5*(img_batch[i,:,:,j]+1.)).astype(np.uint8))
- if batch_no == -1:
- _, file_name = os.path.split(filename_list[i])
-                    name_only, _ = os.path.splitext(file_name)
-                    save_name = name_only + '_c%d.png' % j
- else:
- save_name = '%05d_c%d.png' % (batch_no*N+i, j)
- save_name = save_name.replace('.png', '-%s.png'%suffix) if suffix else save_name
- image.save(os.path.join(save_dir, save_name), 'PNG')
- return None
-
-
-def save_normLabs_from_batch(img_batch, save_dir, filename_list, batch_no=-1, suffix=None):
- N,H,W,C = img_batch.shape
- if C != 3:
- print('@Warning:the Lab images are NOT in 3 channels!')
- return None
- # denormalization: L: (L+1.0)*50.0 | a: a*110.0| b: b*110.0
- img_batch[:,:,:,0] = img_batch[:,:,:,0] * 50.0 + 50.0
- img_batch[:,:,:,1:3] = img_batch[:,:,:,1:3] * 110.0
- #! convert into RGB color image
- for i in range(N):
- rgb_img = cv2.cvtColor(img_batch[i,:,:,:], cv2.COLOR_LAB2RGB)
- image = Image.fromarray((rgb_img*255.0).astype(np.uint8))
- save_name = filename_list[i] if batch_no==-1 else '%05d.png' % (batch_no*N+i)
- save_name = save_name.replace('.png', '-%s.png'%suffix) if suffix else save_name
- image.save(os.path.join(save_dir, save_name), 'PNG')
- return None
-
-
-def save_markedSP_from_batch(img_batch, spix_batch, save_dir, filename_list, batch_no=-1, suffix=None):
- N,H,W,C = img_batch.shape
- #! img_batch: BGR nd-array (range:0~1)
- #! map_batch: single-channel spixel map
- #print('----------', img_batch.shape, spix_batch.shape)
- for i in range(N):
- norm_image = img_batch[i,:,:,:]*0.5+0.5
- spixel_bd_image = mark_boundaries(norm_image, spix_batch[i,:,:,0].astype(int), color=(1,1,1))
- #spixel_bd_image = cv2.cvtColor(spixel_bd_image, cv2.COLOR_BGR2RGB)
- image = Image.fromarray((spixel_bd_image*255.0).astype(np.uint8))
- save_name = filename_list[i] if batch_no==-1 else '%05d.png' % (batch_no*N+i)
- save_name = save_name.replace('.png', '-%s.png'%suffix) if suffix else save_name
- image.save(os.path.join(save_dir, save_name), 'PNG')
- return None
-
-
-def get_filelist(data_dir):
- file_list = glob.glob(os.path.join(data_dir, '*.*'))
- file_list.sort()
- return file_list
-
-
-def collect_filenames(data_dir):
- file_list = get_filelist(data_dir)
- name_list = []
- for file_path in file_list:
- _, file_name = os.path.split(file_path)
- name_list.append(file_name)
- name_list.sort()
- return name_list
-
-
-def exists_or_mkdir(path, need_remove=False):
- if not os.path.exists(path):
- os.makedirs(path)
- elif need_remove:
- shutil.rmtree(path)
- os.makedirs(path)
- return None
-
-
-def save_list(save_path, data_list, append_mode=False):
- n = len(data_list)
- if append_mode:
- with open(save_path, 'a') as f:
- f.writelines([str(data_list[i]) + '\n' for i in range(n-1,n)])
- else:
- with open(save_path, 'w') as f:
- f.writelines([str(data_list[i]) + '\n' for i in range(n)])
- return None
-
-
-def save_dict(save_path, data_dict):
-    # json.dumps() only returns a string; json.dump() actually writes to the file
-    with open(save_path, "w") as f:
-        json.dump(data_dict, f)
-    return None
-
-
-if __name__ == '__main__':
- data_dir = '../PolyNet/PolyNet/cache/'
- #visualizeLossCurves(data_dir)
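-    # NOTE: GamutIndex is not defined or imported in this file; this block assumes it
-    # is available from elsewhere in the project.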
- clbar = GamutIndex()
- ab, ab_gamut_mask = clbar._get_gamut_mask()
- ab2q = clbar._get_ab_to_q(ab_gamut_mask)
- q2ab = clbar._get_q_to_ab(ab, ab_gamut_mask)
- maps = ab_gamut_mask*255.0
- image = Image.fromarray(maps.astype(np.uint8))
- image.save('gamut.png', 'PNG')
- print(ab2q.shape)
- print(q2ab.shape)
- print('label range:', np.min(ab2q), np.max(ab2q))
\ No newline at end of file
diff --git a/spaces/dorkai/ChatUIPro/app/components/base/loading/index.tsx b/spaces/dorkai/ChatUIPro/app/components/base/loading/index.tsx
deleted file mode 100644
index c6c4800f307518159d51773c8656445e7d49455a..0000000000000000000000000000000000000000
--- a/spaces/dorkai/ChatUIPro/app/components/base/loading/index.tsx
+++ /dev/null
@@ -1,31 +0,0 @@
-import React from 'react'
-
-import './style.css'
-
-type ILoadingProps = {
- type?: 'area' | 'app'
-}
-const Loading = (
- { type = 'area' }: ILoadingProps = { type: 'area' },
-) => {
- return (
-
-
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Download 3d Album Cs 3.29 Full LINK Crack.md b/spaces/falterWliame/Face_Mask_Detection/Download 3d Album Cs 3.29 Full LINK Crack.md
deleted file mode 100644
index 68347c7b3dfc1a47943de80554649da6fea1b37f..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Download 3d Album Cs 3.29 Full LINK Crack.md
+++ /dev/null
@@ -1,115 +0,0 @@
-
-
Download 3D Album CS 3.29 Full Crack and Create Amazing 3D Animations
-
-
If you are looking for a powerful and easy-to-use software to create stunning 3D animations, presentations, exhibitions and photo quizzes, you should download 3D Album CS 3.29 full crack. This software is a multimedia suite that allows you to create your own multimedia production for Windows users. You can also use it to create CD/DVD productions with 3D album logos and links.
3D Album CS 3.29 is the latest version of the 3D Album Commercial Suite, which is a software that lets you create 3D animations from your photos and videos. You can choose from over 110 Hollywood styles that can be enhanced with special effects, such as reflections, shadows and lighting. You can also customize the themes, backgrounds, music, text and transitions of your animations.
-
-
What are the features of 3D Album CS 3.29?
-
-
Some of the features of 3D Album CS 3.29 are:
-
-
A commercial license that allows you to sell your work as a photographic image of the program such as DVD, CD or graphics.
-
A user-friendly interface that includes a step-by-step guide and basic tools for basic tasks and other high-end tools for improved work.
-
A graphic photo editor that includes a smart smooth brush, 90 special effects, 3 dimensional photocoposition and precision tools such as clones, patches, mirrors, stamps and smudge.
-
An advanced photo organizer that helps you manage your photos and videos in albums and folders.
-
A creative photo printing and page design tool that allows you to print your photos in various sizes and layouts.
-
A professional multimedia control tool that gives you full control over the playback of your animations, such as pause, resume, skip, repeat and volume.
-
-
-
How to download 3D Album CS 3.29 full crack?
-
-
To download 3D Album CS 3.29 full crack, you need to follow these steps:
-
-
-
Click on the link below to download the software file.
-
Extract the file using WinRAR or any other software that can unzip files.
-
Run the setup file and follow the instructions to install the software.
-
Copy the crack file from the crack folder and paste it into the installation directory of the software.
-
Run the software and enjoy creating amazing 3D animations.
-
-
-
The link to download 3D Album CS 3.29 full crack is:
3D Album CS 3.29 is a great software for anyone who wants to create impressive 3D animations from their photos and videos. It has many features that make it easy and fun to use. You can download 3D Album CS 3.29 full crack from the link above and start creating your own multimedia production.
-
What are the benefits of downloading 3D Album CS 3.29 full crack?
-
-
By downloading 3D Album CS 3.29 full crack, you can enjoy many benefits, such as:
-
-
Save money and time by getting the software for free and without any registration or activation.
-
Access all the features and styles of the software without any limitations or restrictions.
-
Create professional and high-quality 3D animations that can impress your clients and audience.
-
Share your work online or offline with 3D album logos and links that can promote your brand and business.
-
Learn and improve your skills in 3D animation and multimedia production with the user-friendly interface and extensive user guide.
-
-
-
How to use 3D Album CS 3.29 full crack?
-
-
To use 3D Album CS 3.29 full crack, you need to follow these steps:
-
-
Launch the software and select a style from the style library or create your own style.
-
Add your photos and videos to the style and adjust the settings, such as theme, background, music, text and transition.
-
Preview your animation and apply any special effects, such as reflections, shadows and lighting.
-
Save your animation as a file or export it as a CD/DVD production with 3D album logos and links.
-
Share your animation online or offline with your clients and audience.
-
-
-
You can also use the graphic photo editor, the advanced photo organizer and the creative photo printing and page design tool to enhance your photos and videos before adding them to the style.
-
-
Download 3D Album CS 3.29 full crack today!
-
-
If you want to create amazing 3D animations from your photos and videos, you should download 3D Album CS 3.29 full crack today. This software is a multimedia suite that allows you to create your own multimedia production for Windows users. You can also use it to create CD/DVD productions with 3D album logos and links. You can download 3D Album CS 3.29 full crack from the link below and start creating your own multimedia production.
-
-
The link to download 3D Album CS 3.29 full crack is:
3D Album CS 3.29 is a great software for anyone who wants to create impressive 3D animations from their photos and videos. It has many features that make it easy and fun to use. You can download 3D Album CS 3.29 full crack from the link above and start creating your own multimedia production.
-
What are the requirements for downloading 3D Album CS 3.29 full crack?
-
-
Before you download 3D Album CS 3.29 full crack, you need to make sure that your computer meets the minimum requirements for running the software. These are:
-
-
Operating system: Windows NT/98/XP/2000
-
Processor: Pentium III or higher
-
Memory: 256 MB RAM or more
-
Hard disk space: 1 GB or more
-
Display: 1024 x 768 resolution or higher
-
Sound card: DirectX compatible
-
CD/DVD drive: Required for CD/DVD production
-
-
-
If your computer meets these requirements, you can download 3D Album CS 3.29 full crack without any problems.
-
-
What are the alternatives to downloading 3D Album CS 3.29 full crack?
-
-
If you are not comfortable with downloading 3D Album CS 3.29 full crack, you can also try some of the alternatives that are available online. Some of these are:
-
-
Xara 3D Maker: This is a software that allows you to create 3D text and graphics for web pages, presentations and logos. You can choose from over 700 templates and customize them with colors, textures, shadows and animations.
-
Blender: This is a free and open source software that lets you create 3D models, animations, games and visual effects. You can use it for any purpose, from personal to commercial projects. It has a powerful and flexible interface that supports many tools and features.
-
3ds Max: This is a professional software that is used for creating 3D animations, models, games and visual effects. It has a comprehensive set of tools and features that can handle complex and realistic projects. It also supports many plugins and extensions that can enhance its functionality.
-
-
-
These are some of the alternatives to downloading 3D Album CS 3.29 full crack that you can try. However, they may not have all the features and styles that 3D Album CS 3.29 has.
-
-
Download 3D Album CS 3.29 full crack today!
-
-
If you want to create amazing 3D animations from your photos and videos, you should download 3D Album CS 3.29 full crack today. This software is a multimedia suite that allows you to create your own multimedia production for Windows users. You can also use it to create CD/DVD productions with 3D album logos and links. You can download 3D Album CS 3.29 full crack from the link below and start creating your own multimedia production.
-
-
The link to download 3D Album CS 3.29 full crack is:
3D Album CS 3.29 is a great software for anyone who wants to create impressive 3D animations from their photos and videos. It has many features that make it easy and fun to use. You can download 3D Album CS 3.29 full crack from the link above and start creating your own multimedia production.
-
Conclusion
-
-
3D Album CS 3.29 is a great software for anyone who wants to create impressive 3D animations from their photos and videos. It has many features that make it easy and fun to use. You can download 3D Album CS 3.29 full crack from the link above and start creating your own multimedia production.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Cookie Run Kingdom APK - Epic RPG Adventure with Cookies.md b/spaces/fatiXbelha/sd/Cookie Run Kingdom APK - Epic RPG Adventure with Cookies.md
deleted file mode 100644
index 6a80de52a275f05aaa71881f2de9afd7509b40fb..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Cookie Run Kingdom APK - Epic RPG Adventure with Cookies.md
+++ /dev/null
@@ -1,140 +0,0 @@
-
-
How to Download APK Cookie Run Kingdom
-
If you are a fan of cute and colorful games, you might want to try Cookie Run Kingdom, a popular mobile game that combines adventure, strategy, and RPG elements. In this game, you can build your own cookie kingdom, recruit and upgrade various cookie heroes, and battle against the dark forces that threaten your land. But what if you want to download the game without using Google Play Store? Or what if you want to play it on your Windows PC? In this article, we will show you how to download APK Cookie Run Kingdom, a file format that allows you to install the game on different devices. We will also explain what is Cookie Run Kingdom and why you might want to download its APK file.
Cookie Run Kingdom is a game developed by Devsisters Corporation, the same company behind other popular games like OvenBreak and Cookie Wars. It was released in January 2021 and has since gained millions of downloads and positive reviews from players around the world. The game is set in a world called Earthbread, where cookies live in harmony until a mysterious evil force invades their land. You play as GingerBrave, a brave cookie who leads a team of cookie heroes to fight against the dark enchantress and her minions. Along the way, you can also build your own cookie kingdom, decorate it with various items, and interact with other players through guilds and alliances.
-
Some of the features of Cookie Run Kingdom include:
-
-
Over 200 cookie characters with unique skills and personalities
-
A rich and engaging story mode with over 600 stages
-
A real-time combat system that requires strategy and teamwork
-
A kingdom-building mode that lets you customize your own cookie land
-
A social aspect that allows you to join guilds, chat with other players, and participate in cooperative battles
-
A regular update of new content, events, and rewards
-
-
The benefits of downloading the APK file
-
While you can download Cookie Run Kingdom from Google Play Store if you have an Android device, you might want to download its APK file instead for some reasons. For example:
-
-
You don't have enough space on your device to install the game from Google Play Store
-
You want to play the game on a device that doesn't support Google Play Store or has a different operating system
-
You want to access the latest version of the game before it is officially released on Google Play Store
-
You want to avoid any potential errors or bugs that might occur when installing the game from Google Play Store
-
You want to have more control over your game data and settings
-
-
Downloading the APK file of Cookie Run Kingdom can give you these benefits, but you need to be careful about where you get it from. Not all websites that offer APK files are trustworthy, and some might contain malware or viruses that can harm your device or steal your personal information. Therefore, you should only download APK files from reputable sources that have positive feedback from other users.
-
How to download and install the APK file on Android devices
-
The steps to enable unknown sources and download the APK file from a trusted website
-
If you want to download and install the APK file of Cookie Run Kingdom on your Android device, you need to follow these steps:
-
download cookie run kingdom apk latest version
-cookie run kingdom apk mod unlimited money
-how to download cookie run kingdom apk on pc
-cookie run kingdom apk obb download
-cookie run kingdom apk android 11
-download cookie run kingdom apk for ios
-cookie run kingdom apk hack gems
-cookie run kingdom apk offline mode
-cookie run kingdom apk update 4.6.002
-cookie run kingdom apk pure download
-cookie run kingdom apk mirror link
-cookie run kingdom apk nox player
-cookie run kingdom apk bluestacks emulator
-cookie run kingdom apk file size
-cookie run kingdom apk free crystals
-download cookie run kingdom apk from play store
-cookie run kingdom apk mod menu
-how to install cookie run kingdom apk on android
-cookie run kingdom apk data download
-cookie run kingdom apk reddit review
-download cookie run kingdom apk old version
-cookie run kingdom apk unlimited stamina
-cookie run kingdom apk error code 1000
-cookie run kingdom apk compatible devices
-cookie run kingdom apk google drive download
-cookie run kingdom apk modded by platinmods
-how to update cookie run kingdom apk manually
-cookie run kingdom apk not working fix
-cookie run kingdom apk full unlocked
-cookie run kingdom apk gameplay video
-download cookie run kingdom apk from apkpure
-cookie run kingdom apk mod god mode
-how to transfer cookie run kingdom apk data to another device
-cookie run kingdom apk requirements minimum
-cookie run kingdom apk tips and tricks guide
-download cookie run kingdom apk from apkmirror
-cookie run kingdom apk mod speed hack
-how to backup cookie run kingdom apk data to cloud storage
-cookie run kingdom apk features list
-cookie run kingdom apk best characters ranking
-download cookie run kingdom apk from apktada.com[^1^]
-cookie run kingdom apk mod one hit kill
-how to play cookie run kingdom apk with friends online
-cookie run kingdom apk cheats codes generator
-download cookie run kingdom apk from apkmody.io[^2^]
-how to uninstall and reinstall the game without losing your progress.
-
-
Go to your device's settings and look for the option that allows you to install apps from unknown sources. This option might be under security, privacy, or applications, depending on your device model and operating system. Enable this option by tapping on it or sliding the switch.
-
Open your web browser and search for a website that offers the APK file of Cookie Run Kingdom. Make sure that the website is reliable and has positive reviews from other users. You can also use the link below to download the APK file from APKPure, one of the most popular and trusted websites for APK files.
-
Tap on the download button and wait for the APK file to be downloaded to your device. You might see a warning message that says the file might harm your device, but you can ignore it if you trust the website.
-
-
The steps to install the APK file and launch the game
-
Once you have downloaded the APK file of Cookie Run Kingdom, you can install it and launch the game by following these steps:
-
-
Locate the APK file on your device's storage. You can use a file manager app or go to your downloads folder to find it.
-
Tap on the APK file and confirm that you want to install it. You might see some permissions that the app requires, such as access to your storage, network, and location. Tap on accept or allow to grant these permissions.
-
Wait for the installation process to finish. You might see a progress bar or a notification that shows the status of the installation.
-
Once the installation is complete, you can tap on open to launch the game. You might also see a shortcut icon on your home screen or app drawer that you can use to access the game anytime.
-
-
How to download and install the APK file on Windows PC
-
The steps to download and install an Android emulator
-
If you want to play Cookie Run Kingdom on your Windows PC, you need to use an Android emulator, which is a software that simulates an Android device on your computer. There are many Android emulators available online, but some of the most popular and recommended ones are BlueStacks, NoxPlayer, and LDPlayer. To download and install an Android emulator on your PC, you need to follow these steps:
-
-
Go to the official website of the Android emulator that you want to use and look for the download button. Make sure that you download the version that is compatible with your PC's operating system and specifications.
-
Run the installer file that you have downloaded and follow the instructions on the screen. You might need to agree to some terms and conditions, choose a destination folder, and create a shortcut icon.
-
Wait for the installation process to finish. You might see a progress bar or a notification that shows the status of the installation.
-
Once the installation is complete, you can launch the Android emulator by double-clicking on its icon or opening it from your start menu.
-
-
The steps to download the APK file from a trusted website and install it on the emulator
-
After you have installed an Android emulator on your PC, you can download and install the APK file of Cookie Run Kingdom on it by following these steps:
-
-
Open your web browser on the emulator and search for a website that offers the APK file of Cookie Run Kingdom. Make sure that the website is reliable and has positive reviews from other users. You can also use the link below to download the APK file from APKPure, one of the most popular and trusted websites for APK files.
-
Tap on the download button and wait for the APK file to be downloaded to the emulator's storage. You might see a warning message that says the file might harm your device, but you can ignore it if you trust the website.
-
Locate the APK file on the emulator's storage. You can use a file manager app or go to your downloads folder to find it.
-
Tap on the APK file and confirm that you want to install it. You might see some permissions that the app requires, such as access to your storage, network, and location. Tap on accept or allow to grant these permissions.
-
Wait for the installation process to finish. You might see a progress bar or a notification that shows the status of the installation.
-
Once the installation is complete, you can tap on open to launch the game. You might also see a shortcut icon on the emulator's home screen or app drawer that you can use to access the game anytime.
-
-
Conclusion
-
Cookie Run Kingdom is a fun and addictive game that lets you create your own cookie kingdom, recruit and upgrade cookie heroes, and fight against evil forces. You can download and play this game on your Android device or your Windows PC by using its APK file, which gives you more flexibility and control over your game experience. However, you need to be careful about where you get the APK file from, as not all websites are safe and trustworthy. You should only download APK files from reputable sources that have positive feedback from other users. We hope that this article has helped you learn how to download APK Cookie Run Kingdom and enjoy this game on your preferred device.
-
FAQs
-
What are the system requirements for Cookie Run Kingdom?
-
The minimum system requirements for Cookie Run Kingdom are:
-
-
Android 4.4 or higher
-
2 GB of RAM or higher
-
At least 1.5 GB of free storage space
-
-
The recommended system requirements for Cookie Run Kingdom are:
-
-
Android 8.0 or higher
-
4 GB of RAM or higher
-
At least 3 GB of free storage space
-
-
Is Cookie Run Kingdom free to play?
-
Yes, Cookie Run Kingdom is free to download and play, but it also offers in-app purchases that can enhance your game experience. You can buy items such as crystals, cookies, costumes, and packages with real money. However, these purchases are optional and not required to enjoy the game.
-
How can I update Cookie Run Kingdom APK?
-
If you have downloaded Cookie Run Kingdom APK from a website, you need to check the website regularly for any new updates of the game. You can also enable notifications from the website to get alerted when a new version is available. To update Cookie Run Kingdom APK, you need to download the latest version of the APK file from the website and install it over the existing one. You don't need to uninstall the previous version or lose your game data.
-
Is Cookie Run Kingdom safe to download?
-
Cookie Run Kingdom is safe to download if you get it from Google Play Store or a trusted website that offers its APK file. However, if you download it from an unknown or unverified source, you might risk exposing your device to malware or viruses that can harm your device or steal your personal information. Therefore, you should always be careful about where you download APK files from and only use reputable sources that have positive reviews from other users.
-
How can I contact the developers of Cookie Run Kingdom?
-
If you have any questions, feedback, or issues regarding Cookie Run Kingdom, you can contact the developers of the game by using one of these methods:
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Explore and Buy Makeup 3D Models from Sketchfab.md b/spaces/fatiXbelha/sd/Explore and Buy Makeup 3D Models from Sketchfab.md
deleted file mode 100644
index 97c13e6ccbd942d3f4df58887151ed28221b3c9f..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Explore and Buy Makeup 3D Models from Sketchfab.md
+++ /dev/null
@@ -1,155 +0,0 @@
-
-
Makeup 3D Model Free: What You Need to Know
-
Have you ever wondered how to make your digital characters look more realistic and expressive with makeup? Or how to add some glamour and fun to your virtual reality or augmented reality experiences? Or how to create stunning animations and visual effects with makeup? If you answered yes to any of these questions, then you might be interested in learning more about makeup 3D models.
Makeup 3D models are digital representations of cosmetic products and accessories that can be applied to human or animal faces or bodies. They can include lipstick, eyeliner, mascara, blush, eyeshadow, foundation, brushes, sponges, mirrors, and more. Makeup 3D models can help you enhance the appearance and personality of your characters, create realistic or fantasy scenarios, and express your creativity and style.
-
But where can you find makeup 3D models for free? And how can you use them in your projects? And what if you want to make your own makeup 3D models? In this article, we will answer these questions and more. We will show you how to find and download free makeup 3D models from various websites, how to use them in different software and tools, and how to create your own makeup 3D models with some steps and resources. Let's get started!
-
How to Find and Download Free Makeup 3D Models
-
Websites that offer free makeup 3D models
-
One of the easiest ways to get free makeup 3D models is to browse online platforms that offer them. There are many websites that provide free or low-cost 3D models for various purposes, such as CGTrader, TurboSquid, Sketchfab, and others. These websites allow you to search by keywords, categories, formats, quality, license, and other filters. You can also view previews, ratings, reviews, and details of each model before downloading it.
-
Here are some examples of websites that offer free makeup 3D models:
-
-
CGTrader: This website has over 70 free makeup 3D models in various formats such as MAX, OBJ, FBX, 3DS, STL, C4D, BLEND, MA, MB. You can find professional-quality models for VR, AR, games, animation, and more.
-
TurboSquid: This website has over 40 free makeup 3D models in formats such as 3DS, MAX, C4D, MAYA, BLEND, OBJ, FBX. You can find realistic and stylized models for different genres and themes.
-
Sketchfab: This website has over 30 free makeup 3D models in formats such as OBJ, FBX, ABC, MTL, GLTF. You can view the models in 3D and VR on your browser or mobile device.
-
-
Tips for choosing the right format, quality, and license for your needs
-
When downloading free makeup 3D models from online platforms, you need to consider some factors that may affect your project. Here are some tips for choosing the right format, quality, and license for your needs:
-
-
Format: The format of a 3D model is the file type that contains the data of the model, such as geometry, texture, animation, etc. Different formats have different features and compatibility with different software and tools. For example, OBJ is a common and simple format that can be imported and exported by most 3D software, but it does not support animation. FBX is a more advanced and versatile format that can store animation, rigging, lighting, and other data, but it may not be compatible with some older software. You need to choose the format that suits your project and software requirements.
-
Quality: The quality of a 3D model refers to the level of detail and realism of the model, which is determined by factors such as polygon count, texture resolution, shading, lighting, etc. Higher quality models usually look more realistic and appealing, but they also require more computing power and storage space. Lower quality models may look less realistic and appealing, but they are faster and easier to render and manipulate. You need to balance the quality and performance of your project and choose the models that match your expectations.
-
License: The license of a 3D model is the legal agreement that defines how you can use the model in your project. Different licenses have different terms and conditions that may restrict or allow certain uses of the model. For example, some licenses may require you to credit the original author or source of the model, while others may allow you to modify or redistribute the model as you wish. You need to read and understand the license of each model before downloading and using it in your project.
-
-
How to Use Free Makeup 3D Models in Your Projects
-
Software and tools that support makeup 3D models
-
Once you have downloaded some free makeup 3D models, you need to use some software and tools that can import, edit, and export them. There are many software and tools that support makeup 3D models, depending on your project goals and preferences. Some of them are free and open-source, while others are paid and proprietary. Some of them are general-purpose 3D software, while others are specialized for specific tasks or industries.
-
Here are some examples of software and tools that support makeup 3D models:
-
free 3d makeup models for download
-free 3d cosmetic models for download
-free makeup 3d models cgtrader
-free 3d makeup models turbosquid
-free makeup 3d models obj
-free makeup 3d models fbx
-free makeup 3d models max
-free makeup 3d models blend
-free makeup 3d models c4d
-free makeup 3d models maya
-free makeup 3d models stl
-free makeup 3d models vr
-free makeup 3d models ar
-free makeup 3d models low poly
-free makeup 3d models animated
-free makeup 3d models rigged
-free makeup 3d models game
-free makeup 3d models realistic
-free makeup 3d models pbr
-free makeup 3d models collection
-free cosmetic 3d models obj
-free cosmetic 3d models fbx
-free cosmetic 3d models max
-free cosmetic 3d models blend
-free cosmetic 3d models c4d
-free cosmetic 3d models maya
-free cosmetic 3d models stl
-free cosmetic 3d models vr
-free cosmetic 3d models ar
-free cosmetic 3d models low poly
-free cosmetic 3d models animated
-free cosmetic 3d models rigged
-free cosmetic 3d models game
-free cosmetic 3d models realistic
-free cosmetic 3d models pbr
-free cosmetic 3d models collection
-download makeup 3d model for free
-download cosmetic 3d model for free
-download beauty products 3d model for free
-download lipstick 3d model for free
-download mascara 3d model for free
-download eyeshadow palette 3d model for free
-download foundation bottle 3d model for free
-download blush brush 3d model for free
-download nail polish bottle 3d model for free
-download perfume bottle 3d model for free
-download skincare products 3d model for free
-download hair products 3d model for free
-download soap dispenser 3d model for free
-
-
Blender: This is a free and open-source 3D software that can create, edit, animate, render, and export 3D models in various formats. It has a powerful and flexible interface that allows you to customize your workflow and tools. It also has a large and active community that provides tutorials, add-ons, resources, and support.
-
Maya: This is a paid and proprietary 3D software that is widely used by professionals in the film, game, animation, and visual effects industries. It has a comprehensive set of features and tools that can handle complex and high-quality 3D models. It also has a robust scripting and plug-in system that allows you to extend its functionality.
-
Photoshop: This is a paid and proprietary image editing software that can also import, edit, and export 3D models in some formats. It has a user-friendly interface that allows you to apply various effects, filters, adjustments, layers, masks, etc. to your 3D models. It also has a wide range of brushes, tools, presets, plugins, etc. that can help you create realistic or artistic makeup effects.
-
-
Examples of creative applications of makeup 3D models
-
Using free makeup 3D models in your projects can open up many possibilities for creativity and innovation. You can use them for various purposes such as entertainment, education, marketing, art, etc. You can also combine them with other 3D models, such as human or animal faces, bodies, clothes, accessories, etc. to create unique and diverse characters and scenes. You can also experiment with different styles, colors, textures, lighting, etc. to create different moods and effects.
-
Here are some examples of creative applications of makeup 3D models:
-
-
Games: You can use makeup 3D models to create realistic or stylized characters for your games. You can also allow your players to customize their characters' appearance with different makeup options. For example, you can use makeup 3D models to create a beauty salon game, where your players can apply makeup to their clients and see the results in 3D.
-
VR/AR: You can use makeup 3D models to enhance your virtual reality or augmented reality experiences. You can also use them to create interactive and immersive simulations or applications. For example, you can use makeup 3D models to create a virtual makeup try-on app, where your users can see how different makeup products look on their faces in real-time.
-
Animation: You can use makeup 3D models to create expressive and dynamic animations for your films, videos, commercials, etc. You can also use them to add some humor, drama, or emotion to your stories. For example, you can use makeup 3D models to create a funny animation where a character tries to apply makeup but fails miserably.
-
-
How to Create Your Own Makeup 3D Models for Free
-
Steps and resources for making your own makeup 3D models
-
If you want to have more control and flexibility over your makeup 3D models, you can also try to create your own. Creating your own makeup 3D models can be challenging but rewarding. You will need some skills and knowledge in 3D modeling, scanning, texturing, etc. You will also need some tools and resources that can help you with the process.
-
Here are some steps and resources for making your own makeup 3D models:
-
-
Scanning: The first step is to scan the real-life makeup products or accessories that you want to model. You can use a 3D scanner or a smartphone app that can capture the shape and color of the objects. You can also use photos or images as references. Some examples of tools and resources for scanning are Qlone, Trnio, 123D Catch, etc.
-
Modeling: The second step is to model the scanned objects in a 3D software. You can use various tools and techniques to create the geometry and topology of the models. You can also adjust the scale, orientation, position, etc. of the models. Some examples of tools and resources for modeling are Blender, Maya, ZBrush, etc.
-
Texturing: The third step is to texture the modeled objects in a 3D software or a dedicated texturing software. You can use various tools and techniques to create the color, material, reflection, transparency, etc. of the models. You can also apply images or photos as textures or paint them manually. Some examples of tools and resources for texturing are Photoshop, Substance Painter, Quixel Mixer, etc.
-
Exporting: The final step is to export the finished models in a format that suits your project and software requirements. You can also optimize the models by reducing the polygon count, file size, etc. Some examples of formats that you can export are OBJ, FBX, GLTF, etc.
-
-
Benefits and challenges of creating your own makeup 3D models
-
Creating your own makeup 3D models has some benefits and challenges that you should be aware of before starting the process.
-
Some of the benefits are:
-
-
Creativity: Creating your own makeup 3D models allows you to express your creativity and style. You can design and customize your models according to your vision and preferences.
-
Originality: Creating your own makeup 3D models ensures that your models are original and unique. You can avoid using the same models as others and stand out from the crowd.
-
Control: Creating your own makeup 3D models gives you more control and flexibility over your models. You can modify and adjust your models as you wish and according to your project needs.
-
-
Some of the challenges are:
-
-
Time: Creating your own makeup 3D models can take a lot of time and effort. You need to go through several steps and processes to create a model from scratch. You also need to test and troubleshoot your models for any errors or issues.
-
Skill: Creating your own makeup 3D models requires some skill and knowledge in 3D modeling, scanning, texturing, etc. You need to learn how to use various software and tools and how to apply various techniques and methods. You also need to have some artistic sense and vision to create appealing and realistic models.
-
Cost: Creating your own makeup 3D models may involve some cost. You may need to buy or rent some equipment or software that can help you with the process. You may also need to pay for some resources or services that can assist you with the process.
-
-
Conclusion
-
Makeup 3D models are digital representations of cosmetic products and accessories that can be applied to human or animal faces or bodies. They can help you enhance the appearance and personality of your characters, create realistic or fantasy scenarios, and express your creativity and style.
-
In this article, we have shown you how to find and download free makeup 3D models from various websites, how to use them in different software and tools, and how to create your own makeup 3D models with some steps and resources. We hope that this article has been helpful and informative for you.
-
If you are interested in learning more about makeup 3D models, you can check out some of the links below. You can also share your thoughts, questions, or feedback with us in the comments section. Thank you for reading!
-
FAQs
-
What are the benefits of using makeup 3D models?
-
Some of the benefits of using makeup 3D models are:
-
-
They can make your digital characters look more realistic and expressive with makeup.
-
They can add some glamour and fun to your virtual reality or augmented reality experiences.
-
They can be used to create stunning makeup-themed animations and visual effects.
-
-
What are the challenges of using makeup 3D models?
-
Some of the challenges of using makeup 3D models are:
-
-
They may require more computing power and storage space than other 3D models.
-
They may not be compatible with software or tools that lack support for their formats or material features.
-
They may have different quality, format, and license issues that may affect your project.
-
-
How can I learn more about makeup 3D models?
-
You can learn more about makeup 3D models by:
-
-
Browsing online platforms that offer free or low-cost makeup 3D models, such as CGTrader, TurboSquid, Sketchfab, etc.
-
Watching online tutorials or courses that teach you how to use or create makeup 3D models, such as YouTube, Udemy, Skillshare, etc.
-
Reading online articles or blogs that share tips, tricks, or examples of using or creating makeup 3D models, such as Medium, Quora, Reddit, etc.
-
-
What are some examples of projects that use makeup 3D models?
-
Some examples of projects that use makeup 3D models are:
-
-
A beauty salon game, where you can apply makeup to your clients and see the results in 3D.
-
A virtual makeup try-on app, where you can see how different makeup products look on your face in real time.
-
A funny animation where a character tries to apply makeup but fails miserably.
-
-
What are some tips for creating realistic and appealing makeup 3D models?
-
Some tips for creating realistic and appealing makeup 3D models are:
-
-
Use high-quality references or images of real-life makeup products or accessories.
-
Use accurate and consistent measurements and proportions for your models.
-
Use realistic and varied textures, colors, materials, lighting, etc. for your models.
-
Use appropriate levels of detail and realism for your models depending on your project goals and preferences.
-
-
-
\ No newline at end of file
diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/utils/preprocess.py b/spaces/fb700/chatglm-fitness-RLHF/src/utils/preprocess.py
deleted file mode 100644
index 0f784e6c3d8562e1db1bbd850b9f01843cee3c97..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/src/utils/preprocess.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import numpy as np
-import cv2, os, sys, torch
-from tqdm import tqdm
-from PIL import Image
-
-# 3dmm extraction
-import safetensors
-import safetensors.torch
-from src.face3d.util.preprocess import align_img
-from src.face3d.util.load_mats import load_lm3d
-from src.face3d.models import networks
-
-from scipy.io import loadmat, savemat
-from src.utils.croper import Preprocesser
-
-
-import warnings
-
-from src.utils.safetensor_helper import load_x_from_safetensor
-warnings.filterwarnings("ignore")
-
-def split_coeff(coeffs):
- """
- Return:
- coeffs_dict -- a dict of torch.tensors
-
- Parameters:
- coeffs -- torch.tensor, size (B, 256)
- """
- id_coeffs = coeffs[:, :80]
- exp_coeffs = coeffs[:, 80: 144]
- tex_coeffs = coeffs[:, 144: 224]
- angles = coeffs[:, 224: 227]
- gammas = coeffs[:, 227: 254]
- translations = coeffs[:, 254:]
- return {
- 'id': id_coeffs,
- 'exp': exp_coeffs,
- 'tex': tex_coeffs,
- 'angle': angles,
- 'gamma': gammas,
- 'trans': translations
- }
-
-
-class CropAndExtract():
- def __init__(self, sadtalker_path, device):
-
- self.propress = Preprocesser(device)
- self.net_recon = networks.define_net_recon(net_recon='resnet50', use_last_fc=False, init_path='').to(device)
-
- if sadtalker_path['use_safetensor']:
- checkpoint = safetensors.torch.load_file(sadtalker_path['checkpoint'])
- self.net_recon.load_state_dict(load_x_from_safetensor(checkpoint, 'face_3drecon'))
- else:
- checkpoint = torch.load(sadtalker_path['path_of_net_recon_model'], map_location=torch.device(device))
- self.net_recon.load_state_dict(checkpoint['net_recon'])
-
- self.net_recon.eval()
- self.lm3d_std = load_lm3d(sadtalker_path['dir_of_BFM_fitting'])
- self.device = device
-
- def generate(self, input_path, save_dir, crop_or_resize='crop', source_image_flag=False, pic_size=256):
-
- pic_name = os.path.splitext(os.path.split(input_path)[-1])[0]
-
- landmarks_path = os.path.join(save_dir, pic_name+'_landmarks.txt')
- coeff_path = os.path.join(save_dir, pic_name+'.mat')
- png_path = os.path.join(save_dir, pic_name+'.png')
-
- #load input
- if not os.path.isfile(input_path):
- raise ValueError('input_path must be a valid path to video/image file')
- elif input_path.split('.')[-1] in ['jpg', 'png', 'jpeg']:
- # loader for first frame
- full_frames = [cv2.imread(input_path)]
- fps = 25
- else:
- # loader for videos
- video_stream = cv2.VideoCapture(input_path)
- fps = video_stream.get(cv2.CAP_PROP_FPS)
- full_frames = []
- while 1:
- still_reading, frame = video_stream.read()
- if not still_reading:
- video_stream.release()
- break
- full_frames.append(frame)
- if source_image_flag:
- break
-
- x_full_frames= [cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) for frame in full_frames]
-
-        #### crop or resize the input frames according to the requested preprocess mode
- if 'crop' in crop_or_resize.lower(): # default crop
- x_full_frames, crop, quad = self.propress.crop(x_full_frames, still=True if 'ext' in crop_or_resize.lower() else False, xsize=512)
- clx, cly, crx, cry = crop
- lx, ly, rx, ry = quad
- lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
- oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
- crop_info = ((ox2 - ox1, oy2 - oy1), crop, quad)
-        elif 'full' in crop_or_resize.lower():  # currently takes the same path as the 'crop' branch above
- x_full_frames, crop, quad = self.propress.crop(x_full_frames, still=True if 'ext' in crop_or_resize.lower() else False, xsize=512)
- clx, cly, crx, cry = crop
- lx, ly, rx, ry = quad
- lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry)
- oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx
- crop_info = ((ox2 - ox1, oy2 - oy1), crop, quad)
- else: # resize mode
- oy1, oy2, ox1, ox2 = 0, x_full_frames[0].shape[0], 0, x_full_frames[0].shape[1]
- crop_info = ((ox2 - ox1, oy2 - oy1), None, None)
-
- frames_pil = [Image.fromarray(cv2.resize(frame,(pic_size, pic_size))) for frame in x_full_frames]
- if len(frames_pil) == 0:
- print('No face is detected in the input file')
-            return None, None, None
-
-        # save a cropped reference frame as png (the loop writes every frame to the same path, so only the last one is kept)
- for frame in frames_pil:
- cv2.imwrite(png_path, cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR))
-
- # 2. get the landmark according to the detected face.
- if not os.path.isfile(landmarks_path):
- lm = self.propress.predictor.extract_keypoint(frames_pil, landmarks_path)
- else:
- print(' Using saved landmarks.')
- lm = np.loadtxt(landmarks_path).astype(np.float32)
- lm = lm.reshape([len(x_full_frames), -1, 2])
-
- if not os.path.isfile(coeff_path):
-            # extract 3DMM parameters with the Deep3DFaceRecon_pytorch reconstruction network
- video_coeffs, full_coeffs = [], []
- for idx in tqdm(range(len(frames_pil)), desc='3DMM Extraction In Video:'):
- frame = frames_pil[idx]
- W,H = frame.size
- lm1 = lm[idx].reshape([-1, 2])
-
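-                # landmarks of -1 mean face detection failed for this frame, so fall back to the standard template in lm3d_std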
- if np.mean(lm1) == -1:
- lm1 = (self.lm3d_std[:, :2]+1)/2.
- lm1 = np.concatenate(
- [lm1[:, :1]*W, lm1[:, 1:2]*H], 1
- )
- else:
- lm1[:, -1] = H - 1 - lm1[:, -1]
-
- trans_params, im1, lm1, _ = align_img(frame, lm1, self.lm3d_std)
-
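-                # flatten the five alignment parameters from align_img (roughly: original size, scale and translation) into a float32 vector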
- trans_params = np.array([float(item) for item in np.hsplit(trans_params, 5)]).astype(np.float32)
- im_t = torch.tensor(np.array(im1)/255., dtype=torch.float32).permute(2, 0, 1).to(self.device).unsqueeze(0)
-
- with torch.no_grad():
- full_coeff = self.net_recon(im_t)
- coeffs = split_coeff(full_coeff)
-
- pred_coeff = {key:coeffs[key].cpu().numpy() for key in coeffs}
-
- pred_coeff = np.concatenate([
- pred_coeff['exp'],
- pred_coeff['angle'],
- pred_coeff['trans'],
- trans_params[2:][None],
- ], 1)
- video_coeffs.append(pred_coeff)
- full_coeffs.append(full_coeff.cpu().numpy())
-
- semantic_npy = np.array(video_coeffs)[:,0]
-
- savemat(coeff_path, {'coeff_3dmm': semantic_npy, 'full_3dmm': np.array(full_coeffs)[0]})
-
- return coeff_path, png_path, crop_info
diff --git a/spaces/fclong/summary/fengshen/models/transfo_xl_paraphrase/__init__.py b/spaces/fclong/summary/fengshen/models/transfo_xl_paraphrase/__init__.py
deleted file mode 100644
index 8eb10eb65d1b0c4da740e22fcba4e19461121f20..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/transfo_xl_paraphrase/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from fengshen.models.transfo_xl_denoise.modeling_transfo_xl_denoise import TransfoXLDenoiseModel as TransfoXLModel
-from .generate import paraphrase_generate
diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/__init__.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/__init__.py
deleted file mode 100644
index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .musicgen import MusicGen
-from .lm import LMModel
-from .encodec import CompressionModel, EncodecModel
diff --git a/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate_vocals.sh b/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate_vocals.sh
deleted file mode 100644
index be445a415fabcfab04a3f5b73b27493e99d85227..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Music_Source_Separation/separate_scripts/separate_vocals.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-AUDIO_PATH=${1:-"./resources/vocals_accompaniment_10s.mp3"} # The path of audio to be separated.
-OUTPUT_PATH=${2:-"./sep_results/sep_vocals.mp3"} # The path to write out separated audio.
-
-MODEL_NAME="resunet_subbandtime" # "resunet_ismir2021" | "resunet_subbandtime"
-
-if [ $MODEL_NAME = "resunet_ismir2021" ]; then
- TRAIN_CONFIG_YAML="./scripts/4_train/musdb18/configs/vocals-accompaniment,resunet_ismir2021.yaml"
- CHECKPOINT_PATH="./downloaded_checkpoints/resunet143_ismir2021_vocals_8.9dB_350k_steps.pth"
-
-elif [ $MODEL_NAME = "resunet_subbandtime" ]; then
- TRAIN_CONFIG_YAML="./scripts/4_train/musdb18/configs/vocals-accompaniment,resunet_subbandtime.yaml"
- CHECKPOINT_PATH="./downloaded_checkpoints/resunet143_subbtandtime_vocals_8.8dB_350k_steps.pth"
-fi
-
-# Inference
-CUDA_VISIBLE_DEVICES=0 python3 bytesep/inference.py \
- --config_yaml=$TRAIN_CONFIG_YAML \
- --checkpoint_path=$CHECKPOINT_PATH \
- --audio_path=$AUDIO_PATH \
- --output_path=$OUTPUT_PATH
diff --git a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py
deleted file mode 100644
index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Video-Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py
+++ /dev/null
@@ -1,186 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Conditional DETR
-# Copyright (c) 2021 Microsoft. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# Copied from DETR (https://github.com/facebookresearch/detr)
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-# ------------------------------------------------------------------------
-
-"""
-Various positional encodings for the transformer.
-"""
-import math
-
-import torch
-from torch import nn
-
-from groundingdino.util.misc import NestedTensor
-
-
-class PositionEmbeddingSine(nn.Module):
- """
- This is a more standard version of the position embedding, very similar to the one
- used by the Attention is all you need paper, generalized to work on images.
- """
-
- def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperature = temperature
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
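-        # cumulative sums over the unmasked region give every pixel a 1-based (y, x) coordinate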
- if self.normalize:
- eps = 1e-6
- # if os.environ.get("SHILONG_AMP", None) == '1':
- # eps = 1e-4
- # else:
- # eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
-
- pos_x = x_embed[:, :, :, None] / dim_t
- pos_y = y_embed[:, :, :, None] / dim_t
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
- return pos
-
-
-class PositionEmbeddingSineHW(nn.Module):
- """
-    Same as ``PositionEmbeddingSine``, but with separate temperature parameters for the
-    height (``temperatureH``) and width (``temperatureW``) axes.
- """
-
- def __init__(
- self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None
- ):
- super().__init__()
- self.num_pos_feats = num_pos_feats
- self.temperatureH = temperatureH
- self.temperatureW = temperatureW
- self.normalize = normalize
- if scale is not None and normalize is False:
- raise ValueError("normalize should be True if scale is passed")
- if scale is None:
- scale = 2 * math.pi
- self.scale = scale
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- mask = tensor_list.mask
- assert mask is not None
- not_mask = ~mask
- y_embed = not_mask.cumsum(1, dtype=torch.float32)
- x_embed = not_mask.cumsum(2, dtype=torch.float32)
-
- # import ipdb; ipdb.set_trace()
-
- if self.normalize:
- eps = 1e-6
- y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
- x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
-
- dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_x = x_embed[:, :, :, None] / dim_tx
-
- dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
- dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats)
- pos_y = y_embed[:, :, :, None] / dim_ty
-
- pos_x = torch.stack(
- (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos_y = torch.stack(
- (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
- ).flatten(3)
- pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
-
- # import ipdb; ipdb.set_trace()
-
- return pos
-
-
-class PositionEmbeddingLearned(nn.Module):
- """
- Absolute pos embedding, learned.
- """
-
- def __init__(self, num_pos_feats=256):
- super().__init__()
- self.row_embed = nn.Embedding(50, num_pos_feats)
- self.col_embed = nn.Embedding(50, num_pos_feats)
- self.reset_parameters()
-
- def reset_parameters(self):
- nn.init.uniform_(self.row_embed.weight)
- nn.init.uniform_(self.col_embed.weight)
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
- h, w = x.shape[-2:]
- i = torch.arange(w, device=x.device)
- j = torch.arange(h, device=x.device)
- x_emb = self.col_embed(i)
- y_emb = self.row_embed(j)
- pos = (
- torch.cat(
- [
- x_emb.unsqueeze(0).repeat(h, 1, 1),
- y_emb.unsqueeze(1).repeat(1, w, 1),
- ],
- dim=-1,
- )
- .permute(2, 0, 1)
- .unsqueeze(0)
- .repeat(x.shape[0], 1, 1, 1)
- )
- return pos
-
-
-def build_position_encoding(args):
- N_steps = args.hidden_dim // 2
- if args.position_embedding in ("v2", "sine"):
- # TODO find a better way of exposing other arguments
- position_embedding = PositionEmbeddingSineHW(
- N_steps,
- temperatureH=args.pe_temperatureH,
- temperatureW=args.pe_temperatureW,
- normalize=True,
- )
- elif args.position_embedding in ("v3", "learned"):
- position_embedding = PositionEmbeddingLearned(N_steps)
- else:
- raise ValueError(f"not supported {args.position_embedding}")
-
- return position_embedding
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/uws.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/uws.js
deleted file mode 100644
index 23eedf9c094f0c8cb854768f1b9f79f64fa28f97..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io/dist/uws.js
+++ /dev/null
@@ -1,135 +0,0 @@
-"use strict";
-var __importDefault = (this && this.__importDefault) || function (mod) {
- return (mod && mod.__esModule) ? mod : { "default": mod };
-};
-Object.defineProperty(exports, "__esModule", { value: true });
-exports.serveFile = exports.restoreAdapter = exports.patchAdapter = void 0;
-const socket_io_adapter_1 = require("socket.io-adapter");
-const fs_1 = require("fs");
-const debug_1 = __importDefault(require("debug"));
-const debug = (0, debug_1.default)("socket.io:adapter-uws");
-const SEPARATOR = "\x1f"; // see https://en.wikipedia.org/wiki/Delimiter#ASCII_delimited_text
-const { addAll, del, broadcast } = socket_io_adapter_1.Adapter.prototype;
-function patchAdapter(app /* : TemplatedApp */) {
- socket_io_adapter_1.Adapter.prototype.addAll = function (id, rooms) {
- const isNew = !this.sids.has(id);
- addAll.call(this, id, rooms);
- const socket = this.nsp.sockets.get(id);
- if (!socket) {
- return;
- }
- if (socket.conn.transport.name === "websocket") {
- subscribe(this.nsp.name, socket, isNew, rooms);
- return;
- }
- if (isNew) {
- socket.conn.on("upgrade", () => {
- const rooms = this.sids.get(id);
- if (rooms) {
- subscribe(this.nsp.name, socket, isNew, rooms);
- }
- });
- }
- };
- socket_io_adapter_1.Adapter.prototype.del = function (id, room) {
- del.call(this, id, room);
- const socket = this.nsp.sockets.get(id);
- if (socket && socket.conn.transport.name === "websocket") {
- // @ts-ignore
- const sessionId = socket.conn.id;
- // @ts-ignore
- const websocket = socket.conn.transport.socket;
- const topic = `${this.nsp.name}${SEPARATOR}${room}`;
- debug("unsubscribe connection %s from topic %s", sessionId, topic);
- websocket.unsubscribe(topic);
- }
- };
- socket_io_adapter_1.Adapter.prototype.broadcast = function (packet, opts) {
- const useFastPublish = opts.rooms.size <= 1 && opts.except.size === 0;
- if (!useFastPublish) {
- broadcast.call(this, packet, opts);
- return;
- }
- const flags = opts.flags || {};
- const basePacketOpts = {
- preEncoded: true,
- volatile: flags.volatile,
- compress: flags.compress,
- };
- packet.nsp = this.nsp.name;
- const encodedPackets = this.encoder.encode(packet);
- const topic = opts.rooms.size === 0
- ? this.nsp.name
- : `${this.nsp.name}${SEPARATOR}${opts.rooms.keys().next().value}`;
- debug("fast publish to %s", topic);
- // fast publish for clients connected with WebSocket
- encodedPackets.forEach((encodedPacket) => {
- const isBinary = typeof encodedPacket !== "string";
- // "4" being the message type in the Engine.IO protocol, see https://github.com/socketio/engine.io-protocol
- app.publish(topic, isBinary ? encodedPacket : "4" + encodedPacket, isBinary);
- });
- this.apply(opts, (socket) => {
- if (socket.conn.transport.name !== "websocket") {
- // classic publish for clients connected with HTTP long-polling
- socket.client.writeToEngine(encodedPackets, basePacketOpts);
- }
- });
- };
-}
-exports.patchAdapter = patchAdapter;
-function subscribe(namespaceName, socket, isNew, rooms) {
- // @ts-ignore
- const sessionId = socket.conn.id;
- // @ts-ignore
- const websocket = socket.conn.transport.socket;
- if (isNew) {
- debug("subscribe connection %s to topic %s", sessionId, namespaceName);
- websocket.subscribe(namespaceName);
- }
- rooms.forEach((room) => {
- const topic = `${namespaceName}${SEPARATOR}${room}`; // '#' can be used as wildcard
- debug("subscribe connection %s to topic %s", sessionId, topic);
- websocket.subscribe(topic);
- });
-}
-function restoreAdapter() {
- socket_io_adapter_1.Adapter.prototype.addAll = addAll;
- socket_io_adapter_1.Adapter.prototype.del = del;
- socket_io_adapter_1.Adapter.prototype.broadcast = broadcast;
-}
-exports.restoreAdapter = restoreAdapter;
-const toArrayBuffer = (buffer) => {
- const { buffer: arrayBuffer, byteOffset, byteLength } = buffer;
- return arrayBuffer.slice(byteOffset, byteOffset + byteLength);
-};
-// imported from https://github.com/kolodziejczak-sz/uwebsocket-serve
-function serveFile(res /* : HttpResponse */, filepath) {
- const { size } = (0, fs_1.statSync)(filepath);
- const readStream = (0, fs_1.createReadStream)(filepath);
- const destroyReadStream = () => !readStream.destroyed && readStream.destroy();
- const onError = (error) => {
- destroyReadStream();
- throw error;
- };
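-    // Write with backpressure: if uWS cannot flush the whole chunk, pause the read stream and
-    // retry the remaining bytes from res.onWritable before resuming.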
- const onDataChunk = (chunk) => {
- const arrayBufferChunk = toArrayBuffer(chunk);
- const lastOffset = res.getWriteOffset();
- const [ok, done] = res.tryEnd(arrayBufferChunk, size);
- if (!done && !ok) {
- readStream.pause();
- res.onWritable((offset) => {
- const [ok, done] = res.tryEnd(arrayBufferChunk.slice(offset - lastOffset), size);
- if (!done && ok) {
- readStream.resume();
- }
- return ok;
- });
- }
- };
- res.onAborted(destroyReadStream);
- readStream
- .on("data", onDataChunk)
- .on("error", onError)
- .on("end", destroyReadStream);
-}
-exports.serveFile = serveFile;
diff --git a/spaces/fffiloni/img-to-music/app.py b/spaces/fffiloni/img-to-music/app.py
deleted file mode 100644
index 30d094ce05b344d21f1c497c183a4ce7649ec164..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/img-to-music/app.py
+++ /dev/null
@@ -1,333 +0,0 @@
-import gradio as gr
-import openai
-import numpy as np
-import time
-import base64
-import ffmpeg
-from sentence_transformers import SentenceTransformer
-from audio2numpy import open_audio
-import httpx
-import json
-import os
-import requests
-import urllib
-import pydub
-from os import path
-from pydub import AudioSegment
-import re
-
-MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE')
-MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN')
-
-#img_to_text = gr.Blocks.load(name="spaces/pharma/CLIP-Interrogator")
-img_to_text = gr.Blocks.load(name="spaces/fffiloni/CLIP-Interrogator-2")
-
-from share_btn import community_icon_html, loading_icon_html, share_js
-from utils import get_tags_for_prompts, get_mubert_tags_embeddings
-
-minilm = SentenceTransformer('all-MiniLM-L6-v2')
-mubert_tags_embeddings = get_mubert_tags_embeddings(minilm)
-
-##————————————————————————————————————
-
-
-##————————————————————————————————————
-def get_pat_token():
- r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess',
- json={
- "method": "GetServiceAccess",
- "params": {
- "email":"mail@mail.com",
- "phone":"+11234567890",
- "license": MUBERT_LICENSE,
- "token": MUBERT_TOKEN,
-
- }
- })
-
- rdata = json.loads(r.text)
- assert rdata['status'] == 1, "probably incorrect e-mail"
- pat = rdata['data']['pat']
- #print(f"pat: {pat}")
- return pat
-
-def get_music(pat, prompt, track_duration, gen_intensity, gen_mode):
-
- if len(prompt) > 200:
- prompt = prompt[:200]
-
- r = httpx.post('https://api-b2b.mubert.com/v2/TTMRecordTrack',
- json={
- "method": "TTMRecordTrack",
- "params":
- {
- "text": prompt,
- "pat": pat,
- "mode":gen_mode,
- "duration":track_duration,
- "intensity": gen_intensity,
- "format": "wav"
- }
- })
-
- rdata = json.loads(r.text)
-
- #print(f"rdata: {rdata}")
- assert rdata['status'] == 1, rdata['error']['text']
- track = rdata['data']['tasks'][0]['download_link']
- print(track)
-
- local_file_path = "sample.wav"
-
- # Download the MP3 file from the URL
- headers = {
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7; rv:93.0) Gecko/20100101 Firefox/93.0'}
-
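-    # retry the download a few times, since the generated track may not be available immediately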
- retries = 3
- delay = 5 # in seconds
- while retries > 0:
- response = requests.get(track, headers=headers)
- if response.status_code == 200:
- break
- retries -= 1
- time.sleep(delay)
- print(f"{response}")
- # Save the downloaded content to a local file
- with open(local_file_path, 'wb') as f:
- f.write(response.content)
- return "sample.wav", track
-
-
-def get_results(text_prompt,track_duration,gen_intensity,gen_mode):
- pat_token = get_pat_token()
- music = get_music(pat_token, text_prompt, track_duration, gen_intensity, gen_mode)
- return pat_token, music[0], music[1]
-
-def get_prompts(uploaded_image, track_duration, gen_intensity, gen_mode, openai_api_key):
- print("calling clip interrogator")
- #prompt = img_to_text(uploaded_image, "ViT-L (best for Stable Diffusion 1.*)", "fast", fn_index=1)[0]
-
- prompt = img_to_text(uploaded_image, 'best', 4, fn_index=1)[0]
- print(prompt)
- clean_prompt = clean_text(prompt)
- print(f"prompt cleaned: {clean_prompt}")
- musical_prompt = 'You did not use any OpenAI API key to pimp your result :)'
-    if openai_api_key:
- gpt_adaptation = try_api(prompt, openai_api_key)
- if gpt_adaptation[0] != "oups":
- musical_prompt = gpt_adaptation[0]
- print(f"musical adapt: {musical_prompt}")
- music_result = get_results(musical_prompt, track_duration, gen_intensity, gen_mode)
- else:
- music_result = get_results(clean_prompt, track_duration, gen_intensity, gen_mode)
- else:
- music_result = get_results(clean_prompt, track_duration, gen_intensity, gen_mode)
-
- show_prompts = f"""
- CLIP Interrogator Caption: '{prompt}'
- —
- OpenAI Musical Adaptation: '{musical_prompt}'
- —
- Audio file link: {music_result[2]}
- """
- #wave_file = convert_mp3_to_wav(music_result[1])
-
- time.sleep(1)
- return gr.Textbox.update(value=show_prompts, visible=True), music_result[1], gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)
-
-def try_api(message, openai_api_key):
-
- try:
- response = call_api(message, openai_api_key)
- return response, "no error"
- except openai.error.Timeout as e:
- #Handle timeout error, e.g. retry or log
- #print(f"OpenAI API request timed out: {e}")
- return "oups", f"OpenAI API request timed out: {e}"
- except openai.error.APIError as e:
- #Handle API error, e.g. retry or log
- #print(f"OpenAI API returned an API Error: {e}")
- return "oups", f"OpenAI API returned an API Error: {e}"
- except openai.error.APIConnectionError as e:
- #Handle connection error, e.g. check network or log
- #print(f"OpenAI API request failed to connect: {e}")
- return "oups", f"OpenAI API request failed to connect: {e}"
- except openai.error.InvalidRequestError as e:
- #Handle invalid request error, e.g. validate parameters or log
- #print(f"OpenAI API request was invalid: {e}")
- return "oups", f"OpenAI API request was invalid: {e}"
- except openai.error.AuthenticationError as e:
- #Handle authentication error, e.g. check credentials or log
- #print(f"OpenAI API request was not authorized: {e}")
- return "oups", f"OpenAI API request was not authorized: {e}"
- except openai.error.PermissionError as e:
- #Handle permission error, e.g. check scope or log
- #print(f"OpenAI API request was not permitted: {e}")
- return "oups", f"OpenAI API request was not permitted: {e}"
- except openai.error.RateLimitError as e:
- #Handle rate limit error, e.g. wait or log
- #print(f"OpenAI API request exceeded rate limit: {e}")
- return "oups", f"OpenAI API request exceeded rate limit: {e}"
-
-def call_api(message, openai_api_key):
-
-    instruction = "Convert this image caption, in less than 200 characters, into a very concise musical description with musical terms, as if you wanted to describe a musical ambiance, strictly in English"
-
- print("starting open ai")
- augmented_prompt = f"{instruction}: '{message}'."
- openai.api_key = openai_api_key
-
- response = openai.Completion.create(
- model="text-davinci-003",
- prompt=augmented_prompt,
- temperature=0.5,
- max_tokens=2048,
- top_p=1,
- frequency_penalty=0,
- presence_penalty=0.6
- )
-
- #print(response)
-
- #return str(response.choices[0].text).split("\n",2)[2]
- return str(response.choices[0].text).lstrip('\n')
-
-
-def get_track_by_tags(tags, pat, duration, gen_intensity, gen_mode, maxit=20):
-
- r = httpx.post('https://api-b2b.mubert.com/v2/RecordTrackTTM',
- json={
- "method": "RecordTrackTTM",
- "params": {
- "pat": pat,
- "duration": duration,
- "format": "wav",
- "intensity":gen_intensity,
- "tags": tags,
- "mode": gen_mode
- }
- })
-
- rdata = json.loads(r.text)
- print(rdata)
- #assert rdata['status'] == 1, rdata['error']['text']
- trackurl = rdata['data']['tasks'][0]
-
- print('Generating track ', end='')
- for i in range(maxit):
- r = httpx.get(trackurl)
- if r.status_code == 200:
- return trackurl
- time.sleep(1)
-
-
-def generate_track_by_prompt(pat, prompt, duration, gen_intensity, gen_mode):
- try:
- _, tags = get_tags_for_prompts(minilm, mubert_tags_embeddings, prompt)[0]
- result = get_track_by_tags(tags, pat, int(duration), gen_intensity, gen_mode)
- print(result)
- return result, ",".join(tags), "Success"
- except Exception as e:
- return None, "", str(e)
-
-def convert_mp3_to_wav(mp3_filepath):
-
- wave_file="file.wav"
-
- sound = AudioSegment.from_mp3(mp3_filepath)
- sound.export(wave_file, format="wav")
-
- return wave_file
-
-def remove_emoji(text):
- emoji_pattern = re.compile("["
- u"\U0001F600-\U0001F64F" # emoticons
- u"\U0001F300-\U0001F5FF" # symbols & pictographs
- u"\U0001F680-\U0001F6FF" # transport & map symbols
- u"\U0001F1E0-\U0001F1FF" # flags (iOS)
- "]+", flags=re.UNICODE)
- return emoji_pattern.sub(r'', text)
-
-def remove_nonalphanumeric(text):
- return re.sub(r'[^a-zA-Z0-9\s]', '', text)
-
-def clean_text(text):
- clean_text = remove_nonalphanumeric(text)
- clean_text = remove_emoji(clean_text)
- clean_text = re.sub(r'\d+', '', clean_text) # Remove any number
- return clean_text
-
-article = """
-
-
-
-
-
You may also like:
-
-
-
-
-
-
-"""
-
-with gr.Blocks(css="style.css") as demo:
- with gr.Column(elem_id="col-container"):
-
- gr.HTML("""
-
-
- Image to Music
-
-
-
- Sends an image in to CLIP Interrogator
- to generate a text prompt which is then run through
- Mubert text-to-music to generate music from the input image!
-
-
""")
-
- input_img = gr.Image(type="filepath", elem_id="input-img")
-        prompts_out = gr.Textbox(label="Text Captions", visible=False, elem_id="prompts_out", info="If the player does not work, try copying and pasting the link into a new browser window")
- music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output").style(height="5rem")
- #music_url = gr.Textbox(max_lines=1, info="If player do not work, try to copy/paste the link in a new browser window")
- #text_status = gr.Textbox(label="status")
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html, visible=False)
- loading_icon = gr.HTML(loading_icon_html, visible=False)
- share_button = gr.Button("Share to community", elem_id="share-btn", visible=False)
-
- with gr.Accordion(label="Music Generation Options", open=False):
- openai_api_key = gr.Textbox(type="password", label="🔐 Your OpenAI API Key (optional)", placeholder="sk-123abc...", info="You can use your OpenAI key to adapt CLIP Interrogator caption to a musical translation.")
-            track_duration = gr.Slider(minimum=20, maximum=120, value=55, step=5, label="Track duration", elem_id="duration-inp")
- with gr.Row():
- gen_intensity = gr.Dropdown(choices=["low", "medium", "high"], value="medium", label="Intensity")
- gen_mode = gr.Radio(label="mode", choices=["track", "loop"], value="loop")
-
- generate = gr.Button("Generate Music from Image")
-
- gr.HTML(article)
-
- generate.click(get_prompts, inputs=[input_img,track_duration,gen_intensity,gen_mode, openai_api_key], outputs=[prompts_out, music_output, share_button, community_icon, loading_icon], api_name="i2m")
- share_button.click(None, [], [], _js=share_js)
-
-demo.queue(max_size=32).launch()
\ No newline at end of file
diff --git a/spaces/florim/MedGPT/CODE_OF_CONDUCT.md b/spaces/florim/MedGPT/CODE_OF_CONDUCT.md
deleted file mode 100644
index d2331b4c60b9fb27f06953273355dcf53b8d4321..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Code of Conduct for auto-gpt
-
-## 1. Purpose
-
-The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct.
-
-## 2. Scope
-
-This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project.
-
-## 3. Our Standards
-
-We encourage the following behavior:
-
-* Being respectful and considerate to others
-* Actively seeking diverse perspectives
-* Providing constructive feedback and assistance
-* Demonstrating empathy and understanding
-
-We discourage the following behavior:
-
-* Harassment or discrimination of any kind
-* Disrespectful, offensive, or inappropriate language or content
-* Personal attacks or insults
-* Unwarranted criticism or negativity
-
-## 4. Reporting and Enforcement
-
-If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary.
-
-Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations.
-
-## 5. Acknowledgements
-
-This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
-
-## 6. Contact
-
-If you have any questions or concerns, please contact the project maintainers.
-
diff --git a/spaces/freddyaboulton/atari_agents/app.py b/spaces/freddyaboulton/atari_agents/app.py
deleted file mode 100644
index 2ca2df8c54bf9e9116eceb6df565c8d4aae75da6..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/atari_agents/app.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import cv2
-import gradio as gr
-import time
-
-from huggingface_sb3 import load_from_hub
-
-from stable_baselines3 import PPO
-from stable_baselines3.common.env_util import make_atari_env
-from stable_baselines3.common.vec_env import VecFrameStack
-
-
-max_steps = 5000 # Let's try with 5000 steps.
-
-# Loading functions were taken from Edward Beeching code
-def load_env(env_name):
- env = make_atari_env(env_name, n_envs=1)
- env = VecFrameStack(env, n_stack=4)
- return env
-
-def load_model(env_name):
- custom_objects = {
- "learning_rate": 0.0,
- "lr_schedule": lambda _: 0.0,
- "clip_range": lambda _: 0.0,
- }
-
- checkpoint = load_from_hub(
- f"ThomasSimonini/ppo-{env_name}",
- f"ppo-{env_name}.zip",
- )
-
- model = PPO.load(checkpoint, custom_objects=custom_objects)
-
- return model
-
-def replay(env_name, time_sleep):
- max_steps = 500
- env = load_env(env_name)
- model = load_model(env_name)
- #for i in range(num_episodes):
- obs = env.reset()
- done = False
- i = 0
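-    # run one episode, yielding rendered frames so the Gradio interface can stream them to the Image output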
- while not done:
- i+= 1
- if i < max_steps:
- frame = env.render(mode="rgb_array")
- action, _states = model.predict(obs)
- obs, reward, done, info = env.step([action])
- time.sleep(time_sleep)
- yield frame
- else:
- break
-
-demo = gr.Interface(
- replay,
- [gr.Dropdown(["SpaceInvadersNoFrameskip-v4",
- "PongNoFrameskip-v4",
- "SeaquestNoFrameskip-v4",
- "QbertNoFrameskip-v4",
- ]),
- #gr.Slider(100, 10000, value=500),
- gr.Slider(0.01, 1, value=0.05),
- #gr.Slider(1, 20, value=5)
- ],
- gr.Image(shape=(300, 150)),
- title="Watch Agents playing Atari games 🤖",
- description="Select an environment to watch a Hugging Face's trained deep reinforcement learning agent.",
- article = "time_sleep is the time delay between each frame (0.05 by default)."
-).queue(concurrency_count=20, max_size=20).launch()
\ No newline at end of file
diff --git a/spaces/gagan3012/T5-Summarization/src/visualization/visualize.py b/spaces/gagan3012/T5-Summarization/src/visualization/visualize.py
deleted file mode 100644
index 75d5f46eaef8b8ba573b5ff9f323861ff6ca992d..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/T5-Summarization/src/visualization/visualize.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import streamlit as st
-import yaml
-
-from src.models.predict_model import predict_model
-
-
-def visualize():
- st.write("# Summarization UI")
- st.markdown(
- """
- *For additional questions and inquiries, please contact **Gagan Bhatia** via [LinkedIn](
- https://www.linkedin.com/in/gbhatia30/) or [Github](https://github.com/gagan3012).*
- """
- )
-
- text = st.text_area("Enter text here")
- if st.button("Generate Summary"):
- with st.spinner("Connecting the Dots..."):
- sumtext = predict_model(text=text)
- st.write("# Generated Summary:")
- st.write("{}".format(sumtext))
- with open("reports/visualization_metrics.txt", "w") as file1:
-                file1.write(text)
-                file1.write(sumtext)
-
-
-if __name__ == "__main__":
- with open("params.yml") as f:
- params = yaml.safe_load(f)
-
- if params["visualise"]:
- visualize()
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/assign_score_withk.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/assign_score_withk.py
deleted file mode 100644
index 4906adaa2cffd1b46912fbe7d4f87ef2f9fa0012..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/assign_score_withk.py
+++ /dev/null
@@ -1,123 +0,0 @@
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['assign_score_withk_forward', 'assign_score_withk_backward'])
-
-
-class AssignScoreWithK(Function):
- r"""Perform weighted sum to generate output features according to scores.
-    Modified from `PAConv <https://github.com/CVMI-Lab/PAConv>`_.
-
- This is a memory-efficient CUDA implementation of assign_scores operation,
- which first transform all point features with weight bank, then assemble
- neighbor features with ``knn_idx`` and perform weighted sum of ``scores``.
-
- See the `paper `_ appendix Sec. D for
- more detailed descriptions.
-
- Note:
- This implementation assumes using ``neighbor`` kernel input, which is
- (point_features - center_features, point_features).
- See https://github.com/CVMI-Lab/PAConv/blob/main/scene_seg/model/
- pointnet2/paconv.py#L128 for more details.
- """
-
- @staticmethod
- def forward(ctx,
- scores,
- point_features,
- center_features,
- knn_idx,
- aggregate='sum'):
- """
- Args:
- scores (torch.Tensor): (B, npoint, K, M), predicted scores to
- aggregate weight matrices in the weight bank.
- ``npoint`` is the number of sampled centers.
- ``K`` is the number of queried neighbors.
- ``M`` is the number of weight matrices in the weight bank.
- point_features (torch.Tensor): (B, N, M, out_dim)
- Pre-computed point features to be aggregated.
- center_features (torch.Tensor): (B, N, M, out_dim)
- Pre-computed center features to be aggregated.
- knn_idx (torch.Tensor): (B, npoint, K), index of sampled kNN.
- We assume the first idx in each row is the idx of the center.
- aggregate (str, optional): Aggregation method.
- Can be 'sum', 'avg' or 'max'. Defaults: 'sum'.
-
- Returns:
- torch.Tensor: (B, out_dim, npoint, K), the aggregated features.
- """
- agg = {'sum': 0, 'avg': 1, 'max': 2}
-
- B, N, M, out_dim = point_features.size()
- _, npoint, K, _ = scores.size()
-
- output = point_features.new_zeros((B, out_dim, npoint, K))
- ext_module.assign_score_withk_forward(
- point_features.contiguous(),
- center_features.contiguous(),
- scores.contiguous(),
- knn_idx.contiguous(),
- output,
- B=B,
- N0=N,
- N1=npoint,
- M=M,
- K=K,
- O=out_dim,
- aggregate=agg[aggregate])
-
- ctx.save_for_backward(output, point_features, center_features, scores,
- knn_idx)
- ctx.agg = agg[aggregate]
-
- return output
-
- @staticmethod
- def backward(ctx, grad_out):
- """
- Args:
- grad_out (torch.Tensor): (B, out_dim, npoint, K)
-
- Returns:
- grad_scores (torch.Tensor): (B, npoint, K, M)
- grad_point_features (torch.Tensor): (B, N, M, out_dim)
- grad_center_features (torch.Tensor): (B, N, M, out_dim)
- """
- _, point_features, center_features, scores, knn_idx = ctx.saved_tensors
-
- agg = ctx.agg
-
- B, N, M, out_dim = point_features.size()
- _, npoint, K, _ = scores.size()
-
- grad_point_features = point_features.new_zeros(point_features.shape)
- grad_center_features = center_features.new_zeros(center_features.shape)
- grad_scores = scores.new_zeros(scores.shape)
-
- ext_module.assign_score_withk_backward(
- grad_out.contiguous(),
- point_features.contiguous(),
- center_features.contiguous(),
- scores.contiguous(),
- knn_idx.contiguous(),
- grad_point_features,
- grad_center_features,
- grad_scores,
- B=B,
- N0=N,
- N1=npoint,
- M=M,
- K=K,
- O=out_dim,
- aggregate=agg)
-
- return grad_scores, grad_point_features, \
- grad_center_features, None, None
-
-
-assign_score_withk = AssignScoreWithK.apply
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/app.py b/spaces/georgefen/Face-Landmark-ControlNet/app.py
deleted file mode 100644
index 2d27c61488f85b1100aa8573bc3ed4a6a7af3273..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/app.py
+++ /dev/null
@@ -1,211 +0,0 @@
-from share import *
-import config
-
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-from pytorch_lightning import seed_everything
-from annotator.util import resize_image, HWC3
-from cldm.model import create_model, load_state_dict
-from cldm.ddim_hacked import DDIMSampler
-
-import dlib
-from PIL import Image, ImageDraw
-
-if torch.cuda.is_available():
- device = torch.device("cuda")
-else:
- device = torch.device("cpu")
-
-model = create_model('./models/cldm_v15.yaml').cpu()
-model.load_state_dict(load_state_dict(
- './models/control_sd15_landmarks.pth', location='cpu'))
-model = model.to(device)
-ddim_sampler = DDIMSampler(model)
-
-detector = dlib.get_frontal_face_detector()
-predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
-
-
-canvas_html = ""
-load_js = """
-async () => {
-const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/face-canvas.js"
-fetch(url)
- .then(res => res.text())
- .then(text => {
- const script = document.createElement('script');
- script.type = "module"
- script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
- document.head.appendChild(script);
- });
-}
-"""
-get_js_image = """
-async (input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, landmark_direct_mode, strength, scale, seed, eta, image_file_live_opt) => {
- const canvasEl = document.getElementById("canvas-root");
- const imageData = canvasEl? canvasEl._data : null;
- if(image_file_live_opt === 'webcam'){
- input_image = imageData['image']
- landmark_direct_mode = true
- }
- return [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, landmark_direct_mode, strength, scale, seed, eta, image_file_live_opt]
-}
-"""
-
-
-def draw_landmarks(image, landmarks, color="white", radius=2.5):
- draw = ImageDraw.Draw(image)
- for dot in landmarks:
- x, y = dot
- draw.ellipse((x-radius, y-radius, x+radius, y+radius), fill=color)
-
-
-def get_68landmarks_img(img):
- gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
- faces = detector(gray)
- landmarks = []
- for face in faces:
- shape = predictor(gray, face)
- for i in range(68):
- x = shape.part(i).x
- y = shape.part(i).y
- landmarks.append((x, y))
- con_img = Image.new('RGB', (img.shape[1], img.shape[0]), color=(0, 0, 0))
- draw_landmarks(con_img, landmarks)
- con_img = np.array(con_img)
- return con_img
-
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, landmark_direct_mode, strength, scale, seed, eta, image_file_live_opt="file"):
- input_image = input_image.convert('RGB')
- input_image = np.array(input_image)
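-    # reverse the channel axis (RGB -> BGR) before passing the array to the OpenCV/dlib helpers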
- input_image = np.flip(input_image, axis=2)
- print('input_image.shape', input_image.shape)
- # Limit the number of samples to 2 for Spaces only
- num_samples = min(num_samples, 2)
- with torch.no_grad():
- img = resize_image(HWC3(input_image), image_resolution)
- H, W, C = img.shape
-
- if landmark_direct_mode:
- detected_map = img
- else:
- detected_map = get_68landmarks_img(img)
- detected_map = HWC3(detected_map)
-
- control = torch.from_numpy(
- detected_map.copy()).float().to(device) / 255.0
- control = torch.stack([control for _ in range(num_samples)], dim=0)
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
-
- if seed == -1:
- seed = random.randint(0, 2**32 - 1)
- seed_everything(seed)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- cond = {"c_concat": [control], "c_crossattn": [
- model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
- un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [
- model.get_learned_conditioning([n_prompt] * num_samples)]}
- shape = (4, H // 8, W // 8)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=True)
-
- model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else (
- [strength] * 13) # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
- samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
- shape, cond, verbose=False, eta=eta,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=un_cond)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- x_samples = model.decode_first_stage(samples)
- x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c')
- * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
- results = [x_samples[i] for i in range(num_samples)]
-
- return [255 - detected_map] + results
-
-
-def toggle(choice):
- if choice == "file":
- return gr.update(visible=True, value=None), gr.update(visible=False, value=None)
- elif choice == "webcam":
- return gr.update(visible=False, value=None), gr.update(visible=True, value=canvas_html)
-
-
-block = gr.Blocks().queue()
-with block:
- live_conditioning = gr.JSON(value={}, visible=False)
- with gr.Row():
- gr.Markdown("## Control Stable Diffusion with Face Landmarks")
- with gr.Row():
- with gr.Column():
- image_file_live_opt = gr.Radio(["file", "webcam"], value="file",
- label="How would you like to upload your image?")
- input_image = gr.Image(source="upload", visible=True, type="pil")
- canvas = gr.HTML(None, elem_id="canvas_html", visible=False)
-
- image_file_live_opt.change(fn=toggle,
- inputs=[image_file_live_opt],
- outputs=[input_image, canvas],
- queue=False)
-
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(
- label="Images", minimum=1, maximum=2, value=1, step=1)
- image_resolution = gr.Slider(
- label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
- strength = gr.Slider(
- label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
- guess_mode = gr.Checkbox(label='Guess Mode', value=False)
- landmark_direct_mode = gr.Checkbox(
- label='Input Landmark Directly', value=False)
- ddim_steps = gr.Slider(
- label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale",
- minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1,
- maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta (DDIM)", value=0.0)
- a_prompt = gr.Textbox(
- label="Added Prompt", value='best quality, extremely detailed')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
- with gr.Column():
- result_gallery = gr.Gallery(
- label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution,
- ddim_steps, guess_mode, landmark_direct_mode, strength, scale, seed, eta]
-
- gr.Examples(fn=process, examples=[
- ["examples/image0.jpg", "a silly clown face", "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", 1, 512, 20, False, False, 1.0, 9.0, -1, 0.0],
- ["examples/image1.png", "a photo of a woman wearing glasses", "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", 1, 512, 20, False, False, 1.0, 9.0, -1, 0.0],
- ["examples/image2.png", "a silly portrait of man with head tilted and a beautiful hair on the side", "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", 1, 512, 20, False, False, 1.0, 9.0, -1, 0.0],
- ["examples/image3.png", "portrait handsome men", "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", 1, 512, 20, False, False, 1.0, 9.0, -1, 0.0],
- ["examples/image4.jpg", "a beautiful woman looking at the sky", "best quality, extremely detailed",
- "cartoon, disfigured, bad art, deformed, poorly drawn, extra limbs, weird colors, blurry, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", 1, 512, 20, False, False, 1.0, 9.0, -1, 0.0],
- ], inputs=ips, outputs=[result_gallery], cache_examples=True)
- run_button.click(fn=process, inputs=ips + [image_file_live_opt],
- outputs=[result_gallery], _js=get_js_image)
- block.load(None, None, None, _js=load_js)
-
-
-block.launch()
diff --git a/spaces/ghoskno/ColorCanny-Controlnet/README.md b/spaces/ghoskno/ColorCanny-Controlnet/README.md
deleted file mode 100644
index 4663355ec649e1835eef8737e3ea105b61dd62e3..0000000000000000000000000000000000000000
--- a/spaces/ghoskno/ColorCanny-Controlnet/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ColorCanny Controlnet
-emoji: 🐨
-colorFrom: purple
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-tags:
-- jax-diffusers-event
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Antamedia Internet Caffe 5.4 0 (Max 250 Clients) Crackl Latest Updates and News.md b/spaces/gotiQspiryo/whisper-ui/examples/Antamedia Internet Caffe 5.4 0 (Max 250 Clients) Crackl Latest Updates and News.md
deleted file mode 100644
index 317333cdfcc3a59a716d90368e5ebe1324363f3d..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Antamedia Internet Caffe 5.4 0 (Max 250 Clients) Crackl Latest Updates and News.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Antamedia Internet Caffe 5.4 0 (Max 250 Clients) Crackl
-
-
-
-
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Ink Plane Download For Pc [Torrent] How to Get the Game for Free on Steam.md b/spaces/gotiQspiryo/whisper-ui/examples/Ink Plane Download For Pc [Torrent] How to Get the Game for Free on Steam.md
deleted file mode 100644
index a19ba9edb2c23b4609c8fa1a3f16c0f3640a63ec..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Ink Plane Download For Pc [Torrent] How to Get the Game for Free on Steam.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
Similar to other torrent downloaders, Vuze for Windows offers instant search, simultaneous downloads, and queuing options. But, apart from these, you can use the tool to discover relevant content, adjust user experience settings, access the software using a remote application, and install plugins and extensions.
Vuze download and installation is quite simple and only takes a few minutes. Once complete, you get access to the easy-to-use interface that consists of a display panel, menu bar, and a navigation panel. The dashboard also comes with an in-built video player, search bar, remote control, and automatic transcoding - all of which are subtly placed so as not to overwhelm beginner users. Experts can customize the dashboard using tools and plugins.
-
The software also provides users more control on the dashboard and downloads as compared to other torrent downloaders. It lets users block IP addresses if they send bad data. It also lets them control the application using a mobile app. Users can start, pause, or stop a download from anywhere. While the app to control Vuze remotely can be downloaded and used for free, it is only available for Android devices.
-
While the in-built antivirus protection only comes with the paid version, the free version is also safe to download and use. The program comes with different safety services that make downloading files safe and is considered free from malware. You should also explore the comments section and check the rating before downloading torrents.
-
Vuze is quite popular among torrent clients but can seem overwhelming to beginners. Other BitTorrent clients that are lightweight and offer good features are, uTorrent, BitTorrent, qBittorrent, and Deluge.
-
-
If you work for academic, government, or non-profit institutions you may download and use this software for free. We only ask that you acknowledge the use of these programs in your published papers, talks, and reports. We also ask that you reference the key scientific publications where the algorithms behind the programs are described.
-
A comprehensive manual is included with the zip archive. For Mac Users still on Mac OS X 10.5 and lower (Leopard, Tiger, etc.), you can download a Carbon version of Stereonet. Note that this version will not be kept up to date with the above Cocoa version.
-
With File Browser you can open, download, rename and delete remote files without mount.
File Browser works without overheads of Windows Explorer and macOS Finder and provides easy and fast access to your files.
-
Flight simulators are the perfect option for aviation enthusiasts who are stuck at home. You can take control of your favorite plane with true-to-life cockpits, fly in and out of popular airports, navigate real-life weather models, and experience incredibly detailed 3D graphics.
-
The game offers 200 different airport destinations that you can fly into along with planes like the Robin DR-400 for sightseeing, the Extra 330 for aerobatic skills, or the F-18 for high-speed flying.
-
With FlyInside, you can slip on a VR headset and feel as though you are truly flying your favorite plane. While you can still play the game in the desktop version, the best flying experience comes from the full immersion using a VR motion controller and headset.
-
The controls are a bit more barebones and not as involved as other options on this list, but this does make the learning curve quite a bit easier for those just looking for a fun airplane combat experience.
-
The free version of Concepts is a sketchbook on steroids. Use an infinite canvas, gorgeous brushes, 5 layers, and a whole lot of creative freedom. No account or signup required - just download the app and start sketching.
-
One of the best reasons for using Adobe Digital Editions is its support for the EPUB 3 standard, which gives users a richer reading experience by bringing support for right-to-left reading, dynamic image resizing without loss in clarity, interactive quizzes, better rendering of math formulas, and more. Adobe Digital Editions also brings a ton of other convenient features like exceptional search capabilities, the ability to rent or borrow EPUB versions of books from your local and public libraries, multi-lingual support, bookmarking, highlighting, notes, and more. If you are looking for a full-fledged EPUB reading experience, Adobe Digital Editions is the right app for that.

Supported Platforms: Windows 11, Windows 10, Windows 8, Windows 8.1, Windows Vista and Windows 7

| Pros | Cons |
| --- | --- |
| Easily sync books across devices | The reading mode is not user customizable |
| Good book organization features | Slow to load if you have a large library |
| Good reading experience with support for EPUB 3 standard | Need an Adobe account to use it |
| Support for bookmarks, highlights, and notes | Does not sync across devices |

Download: Free

9. Bibliovore

Bibliovore is yet another great free EPUB reader for your Windows machine. The app can be easily downloaded from the Windows app store and is completely free to download and use. I love this app because it brings fantastic organizational features, allowing you to manage even a large library of books with ease.
-
The app also allows you to easily adjust font parameters, manage reading themes, edit book metadata, use a day/night reading mode, and more. One of my favorite features of this app is that, despite being free, it syncs all your books across devices using OneDrive. I think this is one of the best EPUB readers for Windows 10 that you can use right now.

Supported Platforms: Windows 11, Windows 10, Windows 8.1 (x86, x64)

| Pros | Cons |
| --- | --- |
| Good reading experience with support for themes | Needs more customization features for fonts, spacing, etc. |
| Good organization features | |
| Support for book metadata editing | |
| Groups books in a series | |

Download: Free

10. Bookviser

Bookviser is an EPUB reader for Windows which wants to give you a reading experience similar to reading a physical book. It does that by designing its UI so that it looks like a real book. That said, if you are not fond of such a UI, you can easily go into the settings to get a more traditional EPUB reader experience. Just like Freda, Bookviser also allows you to download free classics from public catalogs including Feedbooks, Project Gutenberg, and Smashwords. The rest of the EPUB reader features, like progress tracking, theming, and dictionary support, can also be found here.
-
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Mount Blade With Fire and Sword 1.138 Serial Key Generator The Best Way to Activate the Game.md b/spaces/gotiQspiryo/whisper-ui/examples/Mount Blade With Fire and Sword 1.138 Serial Key Generator The Best Way to Activate the Game.md
deleted file mode 100644
index e2ae364700b5e8858868becdcaeb7e92933e0024..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Mount Blade With Fire and Sword 1.138 Serial Key Generator The Best Way to Activate the Game.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/fairseq/data/encoders/gpt2_bpe_utils.py b/spaces/gradio/HuBERT/fairseq/data/encoders/gpt2_bpe_utils.py
deleted file mode 100644
index 688d4e36e358df2dcc432d37d3e57bd81e2f1ed1..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/data/encoders/gpt2_bpe_utils.py
+++ /dev/null
@@ -1,140 +0,0 @@
-"""
-Byte pair encoding utilities from GPT-2.
-
-Original source: https://github.com/openai/gpt-2/blob/master/src/encoder.py
-Original license: MIT
-"""
-
-import json
-from functools import lru_cache
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns a dict mapping utf-8 bytes to unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a significant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2 ** 8):
- if b not in bs:
- bs.append(b)
- cs.append(2 ** 8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-class Encoder:
- def __init__(self, encoder, bpe_merges, errors="replace"):
- self.encoder = encoder
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.errors = errors # how to handle errors in decoding
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
- self.cache = {}
-
- try:
- import regex as re
-
- self.re = re
- except ImportError:
- raise ImportError("Please install regex with: pip install regex")
-
- # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
- self.pat = self.re.compile(
- r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
- )
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token)
- pairs = get_pairs(word)
-
- if not pairs:
- return token
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except ValueError:  # `first` no longer occurs in the rest of the word
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- for token in self.re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(
- self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")
- )
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder.get(token, token) for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode(
- "utf-8", errors=self.errors
- )
- return text
-
-
-def get_encoder(encoder_json_path, vocab_bpe_path):
- with open(encoder_json_path, "r") as f:
- encoder = json.load(f)
- with open(vocab_bpe_path, "r", encoding="utf-8") as f:
- bpe_data = f.read()
- bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]]
- return Encoder(
- encoder=encoder,
- bpe_merges=bpe_merges,
- )
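A minimal usage sketch of the `Encoder` defined in `gpt2_bpe_utils.py` above. The `encoder.json` and `vocab.bpe` paths are placeholders for the standard GPT-2 vocabulary files; they are not shipped with this repository, and the import path assumes the fairseq package layout shown here.

```python
from fairseq.data.encoders.gpt2_bpe_utils import get_encoder

# Load the byte-level BPE vocabulary (both paths are placeholders).
bpe = get_encoder("encoder.json", "vocab.bpe")

ids = bpe.encode("Hello world!")   # list of integer BPE token ids
text = bpe.decode(ids)             # decodes back to "Hello world!"
print(ids, text)
```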
diff --git a/spaces/gradio/neon-tts-plugin-coqui_main/README.md b/spaces/gradio/neon-tts-plugin-coqui_main/README.md
deleted file mode 100644
index 9c3ff2128d6158fe6d8366fe16cece3104718841..0000000000000000000000000000000000000000
--- a/spaces/gradio/neon-tts-plugin-coqui_main/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
----
-title: neon-tts-plugin-coqui_main
-emoji: 🔥
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 4.1.2
-app_file: run.py
-pinned: false
-hf_oauth: true
----
diff --git a/spaces/greco/survey_analytics_spaces/README.md b/spaces/greco/survey_analytics_spaces/README.md
deleted file mode 100644
index f7e25e455b39f013f3b12b887ceed9ae0ebd1bdf..0000000000000000000000000000000000000000
--- a/spaces/greco/survey_analytics_spaces/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Survey Analytics
-emoji: 🐨
-colorFrom: green
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: gpl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/Makefile b/spaces/gsaivinay/Llama-2-13B-GGML-UI/Makefile
deleted file mode 100644
index 8dc4e12dc227a0ffe26ac1769fd9da539e5b438c..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-include .env
-
-.PHONY: all
-
-build:
- docker build -t chatbot-ui .
-
-run:
- docker stop chatbot-ui || true && docker rm chatbot-ui || true
- docker run --name chatbot-ui --rm -e OPENAI_API_KEY=${OPENAI_API_KEY} -p 3000:3000 chatbot-ui
-
-logs:
- docker logs -f chatbot-ui
-
-push:
- docker tag chatbot-ui:latest ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG}
- docker push ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG}
\ No newline at end of file
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/home/home.context.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/home/home.context.tsx
deleted file mode 100644
index be00de03828d0cc84a129522446c6e3de6dbab1f..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/pages/api/home/home.context.tsx
+++ /dev/null
@@ -1,27 +0,0 @@
-import { Dispatch, createContext } from 'react';
-
-import { ActionType } from '@/hooks/useCreateReducer';
-
-import { Conversation } from '@/types/chat';
-import { KeyValuePair } from '@/types/data';
-import { FolderType } from '@/types/folder';
-
-import { HomeInitialState } from './home.state';
-
-export interface HomeContextProps {
- state: HomeInitialState;
- dispatch: Dispatch<ActionType<HomeInitialState>>;
- handleNewConversation: () => void;
- handleCreateFolder: (name: string, type: FolderType) => void;
- handleDeleteFolder: (folderId: string) => void;
- handleUpdateFolder: (folderId: string, name: string) => void;
- handleSelectConversation: (conversation: Conversation) => void;
- handleUpdateConversation: (
- conversation: Conversation,
- data: KeyValuePair,
- ) => void;
-}
-
-const HomeContext = createContext<HomeContextProps>(undefined!);
-
-export default HomeContext;
diff --git a/spaces/gstaff/xkcd/README.md b/spaces/gstaff/xkcd/README.md
deleted file mode 100644
index c4b74ab50862b801f8d9a75813302be6e22c97a0..0000000000000000000000000000000000000000
--- a/spaces/gstaff/xkcd/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-tags:
-- gradio-theme
-- track-1
-- track-4
-title: xkcd Gradio Theme
-emoji: 🚀
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# xkcd Gradio Theme
-## Description
-A simple monochrome theme using the font of the famous [xkcd comics](https://xkcd.com/) by Randall Munroe!
-
-Gives a playful and creative look to your designs. Suitable for apps of romance, sarcasm, math, and language.
-
-## Contributions
-This gradio theme was developed by [@gstaff](https://huggingface.co/gstaff)!
-
-The font used here is provided by the [iPython team](https://github.com/ipython/xkcd-font).
-
-Credit and thanks to them for making it available under a [Creative Commons Attribution-NonCommercial 3.0 License](https://github.com/ipython/xkcd-font/blob/master/LICENSE).
\ No newline at end of file
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_marcher.py b/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_marcher.py
deleted file mode 100644
index c2c427f7499adf3d2a456d2a1f2d2724daa04621..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/training/volumetric_rendering/ray_marcher.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-"""
-The ray marcher takes the raw output of the implicit representation and uses the volume rendering equation to produce composited colors and depths.
-Based off of the implementation in MipNeRF (this one doesn't do any cone tracing though!)
-"""
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-class MipRayMarcher2(nn.Module):
- def __init__(self):
- super().__init__()
-
-
- def run_forward(self, colors, densities, depths, rendering_options):
- deltas = depths[:, :, 1:] - depths[:, :, :-1]
- colors_mid = (colors[:, :, :-1] + colors[:, :, 1:]) / 2
- densities_mid = (densities[:, :, :-1] + densities[:, :, 1:]) / 2
- depths_mid = (depths[:, :, :-1] + depths[:, :, 1:]) / 2
-
-
- if rendering_options['clamp_mode'] == 'softplus':
- densities_mid = F.softplus(densities_mid - 1) # activation bias of -1 makes things initialize better
- else:
- assert False, "MipRayMarcher only supports `clamp_mode`=`softplus`!"
-
- density_delta = densities_mid * deltas
-
- alpha = 1 - torch.exp(-density_delta)
-
- alpha_shifted = torch.cat([torch.ones_like(alpha[:, :, :1]), 1-alpha + 1e-10], -2)
- weights = alpha * torch.cumprod(alpha_shifted, -2)[:, :, :-1]
-
- composite_rgb = torch.sum(weights * colors_mid, -2)
- weight_total = weights.sum(2)
- composite_depth = torch.sum(weights * depths_mid, -2) / weight_total
-
- # clip the composite to min/max range of depths
- composite_depth = torch.nan_to_num(composite_depth, float('inf'))
- composite_depth = torch.clamp(composite_depth, torch.min(depths), torch.max(depths))
-
- if rendering_options.get('white_back', False):
- composite_rgb = composite_rgb + 1 - weight_total
-
- composite_rgb = composite_rgb * 2 - 1 # Scale to (-1, 1)
-
- return composite_rgb, composite_depth, weights
-
-
- def forward(self, colors, densities, depths, rendering_options):
- composite_rgb, composite_depth, weights = self.run_forward(colors, densities, depths, rendering_options)
-
- return composite_rgb, composite_depth, weights
\ No newline at end of file
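The docstring in `ray_marcher.py` above describes the standard volume-rendering quadrature: mid-point densities are turned into per-segment alphas, alphas into accumulated weights via the transmittance, and weights into composited color and depth. A single-ray sketch of that rule, with made-up sample values, might look like this:

```python
import torch

# Invented samples along one ray (illustrative values only).
depths = torch.tensor([1.0, 1.5, 2.0, 2.5])              # sample depths
densities = torch.tensor([0.1, 0.8, 1.5])                # mid-point densities (post-softplus)
colors = torch.tensor([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])                 # mid-point RGB values

deltas = depths[1:] - depths[:-1]                        # spacing between samples
alpha = 1 - torch.exp(-densities * deltas)               # opacity of each segment
transmittance = torch.cumprod(
    torch.cat([torch.ones(1), 1 - alpha + 1e-10]), dim=0)[:-1]
weights = alpha * transmittance                          # contribution of each segment

composite_rgb = (weights[:, None] * colors).sum(0)
composite_depth = (weights * (depths[:-1] + depths[1:]) / 2).sum() / weights.sum()
print(composite_rgb, composite_depth)
```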
diff --git a/spaces/haakohu/deep_privacy2_face/dp2/generator/stylegan_unet.py b/spaces/haakohu/deep_privacy2_face/dp2/generator/stylegan_unet.py
deleted file mode 100644
index 6c3dfc46da323d04919cf5c166ec038820eac1ad..0000000000000000000000000000000000000000
--- a/spaces/haakohu/deep_privacy2_face/dp2/generator/stylegan_unet.py
+++ /dev/null
@@ -1,211 +0,0 @@
-import torch
-import numpy as np
-from dp2.layers import Sequential
-from dp2.layers.sg2_layers import Conv2d, FullyConnectedLayer, ResidualBlock
-from .base import BaseStyleGAN
-from typing import List, Tuple
-from .utils import spatial_embed_keypoints, mask_output
-
-
-def get_chsize(imsize, cnum, max_imsize, max_cnum_mul):
- n = int(np.log2(max_imsize) - np.log2(imsize))
- mul = min(2**n, max_cnum_mul)
- ch = cnum * mul
- return int(ch)
-
-
-class StyleGANUnet(BaseStyleGAN):
- def __init__(
- self,
- scale_grad: bool,
- im_channels: int,
- min_fmap_resolution: int,
- imsize: List[int],
- cnum: int,
- max_cnum_mul: int,
- mask_output: bool,
- conv_clamp: int,
- input_cse: bool,
- cse_nc: int,
- n_middle_blocks: int,
- input_keypoints: bool,
- n_keypoints: int,
- input_keypoint_indices: Tuple[int],
- fix_errors: bool,
- **kwargs
- ) -> None:
- super().__init__(**kwargs)
- self.n_keypoints = n_keypoints
- self.input_keypoint_indices = list(input_keypoint_indices)
- self.input_keypoints = input_keypoints
- assert not (input_cse and input_keypoints)
- cse_nc = 0 if cse_nc is None else cse_nc
- self.imsize = imsize
- self._cnum = cnum
- self._max_cnum_mul = max_cnum_mul
- self._min_fmap_resolution = min_fmap_resolution
- self._image_channels = im_channels
- self._max_imsize = max(imsize)
- self.input_cse = input_cse
- self.gain_unet = np.sqrt(1/3)
- n_levels = int(np.log2(self._max_imsize) - np.log2(min_fmap_resolution))+1
- encoder_layers = []
- self.from_rgb = Conv2d(
- im_channels + 1 + input_cse*(cse_nc+1) + input_keypoints*len(self.input_keypoint_indices),
- cnum, 1
- )
- for i in range(n_levels): # Encoder layers
- resolution = [x//2**i for x in imsize]
- in_ch = get_chsize(max(resolution), cnum, self._max_imsize, max_cnum_mul)
- second_ch = in_ch
- out_ch = get_chsize(max(resolution)//2, cnum, self._max_imsize, max_cnum_mul)
- down = 2
-
- if i == 0: # first (lowest) block. Downsampling is performed at the start of the block
- down = 1
- if i == n_levels - 1:
- out_ch = second_ch
- block = ResidualBlock(in_ch, out_ch, down=down, conv_clamp=conv_clamp, fix_residual=fix_errors)
- encoder_layers.append(block)
- self._encoder_out_shape = [
- get_chsize(min_fmap_resolution, cnum, self._max_imsize, max_cnum_mul),
- *resolution]
-
- self.encoder = torch.nn.ModuleList(encoder_layers)
-
- # initialize decoder
- decoder_layers = []
- for i in range(n_levels):
- resolution = [x//2**(n_levels-1-i) for x in imsize]
- in_ch = get_chsize(max(resolution)//2, cnum, self._max_imsize, max_cnum_mul)
- out_ch = get_chsize(max(resolution), cnum, self._max_imsize, max_cnum_mul)
- if i == 0: # first (lowest) block
- in_ch = get_chsize(max(resolution), cnum, self._max_imsize, max_cnum_mul)
-
- up = 1
- if i != n_levels - 1:
- up = 2
- block = ResidualBlock(
- in_ch, out_ch, conv_clamp=conv_clamp, gain_out=np.sqrt(1/3),
- w_dim=self.style_net.w_dim, norm=True, up=up,
- fix_residual=fix_errors
- )
- decoder_layers.append(block)
- if i != 0:
- unet_block = Conv2d(
- in_ch, in_ch, kernel_size=1, conv_clamp=conv_clamp, norm=True,
- gain=np.sqrt(1/3) if fix_errors else np.sqrt(.5))
- setattr(self, f"unet_block{i}", unet_block)
-
- # Initialize "middle blocks" that do not have down/up sample
- middle_blocks = []
- for i in range(n_middle_blocks):
- ch = get_chsize(min_fmap_resolution, cnum, self._max_imsize, max_cnum_mul)
- block = ResidualBlock(
- ch, ch, conv_clamp=conv_clamp, gain_out=np.sqrt(.5) if fix_errors else np.sqrt(1/3),
- w_dim=self.style_net.w_dim, norm=True,
- )
- middle_blocks.append(block)
- if n_middle_blocks != 0:
- self.middle_blocks = Sequential(*middle_blocks)
- self.decoder = torch.nn.ModuleList(decoder_layers)
- self.to_rgb = Conv2d(cnum, im_channels, 1, activation="linear", conv_clamp=conv_clamp)
- # Initialize "middle blocks" that do not have down/up sample
- self.decoder = torch.nn.ModuleList(decoder_layers)
- self.scale_grad = scale_grad
- self.mask_output = mask_output
-
- def forward_dec(self, x, w, unet_features, condition, mask, s, **kwargs):
- for i, layer in enumerate(self.decoder):
- if i != 0:
- unet_layer = getattr(self, f"unet_block{i}")
- x = x + unet_layer(unet_features[-i])
- x = layer(x, w=w, s=s)
- x = self.to_rgb(x)
- if self.mask_output:
- x = mask_output(True, condition, x, mask)
- return dict(img=x)
-
- def forward_enc(self, condition, mask, embedding, keypoints, E_mask, **kwargs):
- if self.input_cse:
- x = torch.cat((condition, mask, embedding, E_mask), dim=1)
- else:
- x = torch.cat((condition, mask), dim=1)
- if self.input_keypoints:
- keypoints = keypoints[:, self.input_keypoint_indices]
- one_hot_pose = spatial_embed_keypoints(keypoints, x)
- x = torch.cat((x, one_hot_pose), dim=1)
- x = self.from_rgb(x)
-
- unet_features = []
- for i, layer in enumerate(self.encoder):
- x = layer(x)
- if i != len(self.encoder)-1:
- unet_features.append(x)
- if hasattr(self, "middle_blocks"):
- for layer in self.middle_blocks:
- x = layer(x)
- return x, unet_features
-
- def forward(
- self, condition, mask,
- z=None, embedding=None, w=None, update_emas=False, x=None,
- s=None,
- keypoints=None,
- unet_features=None,
- E_mask=None,
- **kwargs):
- # Used to skip sampling from encoder in inference. E.g. for w projection.
- if x is not None and unet_features is not None:
- assert not self.training
- else:
- x, unet_features = self.forward_enc(condition, mask, embedding, keypoints, E_mask, **kwargs)
- if w is None:
- if z is None:
- z = self.get_z(condition)
- w = self.get_w(z, update_emas=update_emas)
- return self.forward_dec(x, w, unet_features, condition, mask, s, **kwargs)
-
-
-class ComodStyleUNet(StyleGANUnet):
-
- def __init__(self, min_comod_res=4, lr_multiplier_comod=1, **kwargs) -> None:
- super().__init__(**kwargs)
- min_fmap = min(self._encoder_out_shape[1:])
- enc_out_ch = self._encoder_out_shape[0]
- n_down = int(np.ceil(np.log2(min_fmap) - np.log2(min_comod_res)))
- comod_layers = []
- in_ch = enc_out_ch
- for i in range(n_down):
- comod_layers.append(Conv2d(enc_out_ch, 256, kernel_size=3, down=2, lr_multiplier=lr_multiplier_comod))
- in_ch = 256
- if n_down == 0:
- comod_layers = [Conv2d(in_ch, 256, kernel_size=3)]
- comod_layers.append(torch.nn.Flatten())
- out_res = [x//2**n_down for x in self._encoder_out_shape[1:]]
- in_ch_fc = np.prod(out_res) * 256
- comod_layers.append(FullyConnectedLayer(in_ch_fc, 512, lr_multiplier=lr_multiplier_comod))
- self.comod_block = Sequential(*comod_layers)
- self.comod_fc = FullyConnectedLayer(
- 512+self.style_net.w_dim, self.style_net.w_dim, lr_multiplier=lr_multiplier_comod)
-
- def forward_dec(self, x, w, unet_features, condition, mask, **kwargs):
- y = self.comod_block(x)
- y = torch.cat((y, w), dim=1)
- y = self.comod_fc(y)
- for i, layer in enumerate(self.decoder):
- if i != 0:
- unet_layer = getattr(self, f"unet_block{i}")
- x = x + unet_layer(unet_features[-i], gain=np.sqrt(.5))
- x = layer(x, w=y)
- x = self.to_rgb(x)
- if self.mask_output:
- x = mask_output(True, condition, x, mask)
- return dict(img=x)
-
- def get_comod_y(self, batch, w):
- x, unet_features = self.forward_enc(**batch)
- y = self.comod_block(x)
- y = torch.cat((y, w), dim=1)
- y = self.comod_fc(y)
- return y
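To make the encoder/decoder channel bookkeeping in `StyleGANUnet` easier to follow, here is a small sketch of the schedule that `get_chsize` produces. The configuration values (`max_imsize=256`, `cnum=64`, `max_cnum_mul=8`) are example numbers, not necessarily the settings used by this model.

```python
import numpy as np

def get_chsize(imsize, cnum, max_imsize, max_cnum_mul):
    # Same formula as in stylegan_unet.py: channels double with each downsampling,
    # capped at cnum * max_cnum_mul.
    n = int(np.log2(max_imsize) - np.log2(imsize))
    mul = min(2 ** n, max_cnum_mul)
    return int(cnum * mul)

max_imsize, cnum, max_cnum_mul = 256, 64, 8
for res in [256, 128, 64, 32, 16, 8]:
    print(res, get_chsize(res, cnum, max_imsize, max_cnum_mul))
# 256 -> 64, 128 -> 128, 64 -> 256, 32 -> 512, 16 -> 512, 8 -> 512
```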
diff --git a/spaces/hackathon-pln-es/modelo-juridico-mexicano/app_details.py b/spaces/hackathon-pln-es/modelo-juridico-mexicano/app_details.py
deleted file mode 100644
index 1b9425530760d7966449cb33d180c36c91859072..0000000000000000000000000000000000000000
--- a/spaces/hackathon-pln-es/modelo-juridico-mexicano/app_details.py
+++ /dev/null
@@ -1,149 +0,0 @@
-title = "Modelo Jurídico Mexicano"
-description = """
-
-
-
-
-
-
-
16.3 Promover el estado de derecho en los planos nacional e internacional y garantizar la igualdad de acceso a la justicia para todos.
-
16.10 Garantizar el acceso público a la información y proteger las libertades fundamentales, de conformidad con las leyes nacionales y los acuerdos internacionales.
-
-
-
-
-
-
-
-
4.4 De aquí a 2030, aumentar considerablemente el número de jóvenes y adultos que tienen las competencias necesarias, en particular técnicas y profesionales, para acceder al empleo, el trabajo decente y el emprendimiento.
-
4.7 De aquí a 2030, asegurar que todos los alumnos adquieran los conocimientos teóricos y prácticos necesarios para promover el desarrollo sostenible, entre otras cosas mediante la educación para el desarrollo sostenible y los estilos de vida sostenibles, los derechos humanos, la igualdad de género, la promoción de una cultura de paz y no violencia, la ciudadanía mundial y la valoración de la diversidad cultural y la contribución de la cultura al desarrollo sostenible.
-
-
-
-
-
-
-
-
10.3 Garantizar la igualdad de oportunidades y reducir la desigualdad de resultados, incluso eliminando las leyes, políticas y prácticas discriminatorias y promoviendo legislaciones, políticas y medidas adecuadas a ese respecto.
-
-
-
-
-
-## Motivación
-- El gran esfuerzo y tiempo que se requiere analizar grandes cantidades de información que constantemente se encuentran cambiando.
-- Buscar información puede llevarte demasiado tiempo no tanto por la acción en si, si no por el tiempo que inviertes en buscar la información necesaria y desechar toda aquella que no te aporta nada relacionado a tu tema de interés.
-- Aún el cerebro humano con una gran capacidad de almacenamiento no puede competir con la cantidad de información que se genera día con día.
-- Es difícil exigir algo que desconoces.
-
-Por ello decidimos aventurarnos en la creación de modelos que permiten en términos generales:
-
-- Extraer y recuperar información.
-- Clasificar documentos.
-- Identificar si los documentos son tan parecidos que podrían tratar de un mismo tema o incluso que se traten de los mismos.
-
-Con estos modelos integrados en diversos sistemas se pueden obtener beneficios como:
-
-- Agilizar y facilitar el trabajo de quienes imparten justicia.
-- Facilitar la búsqueda de los estudiantes e investigadores de derecho.
-- Ayudar a la ciudadanía permitiéndole identificar si se está violentando alguno de los Derechos Humanos que protegen el Sistema Universal o la Convención Americana de Derechos Humanos.
-- Coadyuvar en la generación de indicadores sobre violaciones a los Derechos Humanos.
-
-### Este proyecto está compuesto por los siguientes modelos:
-
-- [hackathon-pln-es/jurisbert-finetuning-ner](https://huggingface.co/hackathon-pln-es/jurisbert-finetuning-ner)
-- [hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal](https://huggingface.co/hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal)
-- [hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh](https://huggingface.co/hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh)
-- [hackathon-pln-es/jurisbert-tsdae-sentence-transformer](https://huggingface.co/hackathon-pln-es/jurisbert-tsdae-sentence-transformer)
-
-### Como funciona el demo:
-
-1. Requiere que se proporcionen dos textos (el primero denominado texto a analizar y el segundo texto a comparar), los cuales se pueden seleccionar de la lista de ejemplos.
-
-2. Cada uno de estos textos pasa por cada uno de los modelos que conforman el proyecto.
-
- * Primero, se utiliza el modelo de reconocimiento de entidades **jurisbert-finetuning-ner**. El cual, podría encontrar alguna entidad de tipo LEY o TRAT_INTL.
-
- * Segundo, se utiliza el modelo de clasificación **jurisbert-class-tratados-internacionales-sistema-universal** acorde al sistema universal de **Derechos Humanos**, el cual se fundamenta en convenciones o pactos, para identificar si podría existir alguna violación acorde a lo definido por la **ONU**.
-
- * Tercero, se utiliza el modelo de clasificación **jurisbert-clas-art-convencion-americana-dh** para identificar cual de los artículos de la **[Convención Americana de Derechos Humanos](https://www.cndh.org.mx/sites/default/files/doc/Programas/TrataPersonas/MarcoNormativoTrata/InsInternacionales/Regionales/Convencion_ADH.pdf)** se podría estar violentando.
-
- * Cuarto, para poder ejemplificar el modelo **jurisbert-tsdae-sentence-transformer** se aprovechan el texto a analizar y el texto a comparar para calcular la similitud entre ambos.
-
-3. Se presentan los resultados obtenidos en el orden siguiente:
-
- * Primero lo obtenido para el texto a analizar.
- * Segundo, el porcentaje de similitud entre ambos textos.
- * Tercero, lo obtenido para el texto a comparar.
-
-"""
-
-article="""
-### Retos
-
-#### Creación de los datasets
-
-El principal problema de entrenar modelos que pertenezcan a un dominio especializado como el **jurídico**, que además sea en **español**, se centra en la construcción de los **datasets**, por la práctica inexistencia de los mismos.
-
-Es por ello que tuvimos que crear dos datasets:
-
-- [scjnugacj/scjn_dataset_corpus_tesis](https://huggingface.co/datasets/scjnugacj/scjn_dataset_corpus_tesis): la información base fue obtenida del **[Buscador Jurídico de la SCJN de México](https://bj.scjn.gob.mx/)** utilizando como fuente de información: Tesis, y filtrando la información por décima y undécima época; sin embargo, fue necesario realizar procesos de ETL para la limpieza de información no relevante y la estructuración de los campos:
- * `id`: a `string` feature.
- * `text`: a `string` features.
-- [scjnugacj/scjn_dataset_ner](https://huggingface.co/datasets/scjnugacj/scjn_dataset_ner) el primer reto para este dataset fue entender la estructura que debía tener para ser utilizado la tarea **NER** afortunadamente esto fue relativamente sencillo de encontrar y nos dimos cuenta que no éramos el único equipo con el mismo problema. La estructura del dataset para esta tarea es el siguiente:
-
- * `id`: a `string` feature.
- * `tokens`: a `list` of `string` features.
- * `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices: {'O': 0, 'B-LEY': 1, 'I-LEY': 2, 'B-TRAT_INTL': 3, 'I-TRAT_INTL': 4}
-
-
-
-Afortunadamente, teníamos claro qué entidades nos interesaba identificar, pero el reto estaba en crear el corpus anotado por la cantidad de ejemplos, considerando como base los 27913 del dataset **scjn_corpus_tesis**. Aun utilizando una herramienta para realizar las anotaciones manualmente, el tiempo requerido era elevado; es por ello que nos dimos a la tarea de crear un notebook que recibe una lista de los nombres de las leyes y tratados internacionales y realiza el ETL necesario para generar las anotaciones automáticamente. Para asegurarnos de que todo estaba anotado acorde a lo esperado, se extrajo una muestra para su verificación manual.
-
-
-#### Compartir los datasets en HuggingFace
-
-Realizar la investigación de cómo compartir los datasets en HuggingFace representó un tiempo importante y la mejor forma que encontramos para hacerlo fue:
-
-- Crear un script para utilizar la función **load_dataset** que lee desde un repositorio en github los archivos train.txt y dev.txt y los convierte en un **DatasetDict** para finalmente publicarlos con la función **push_to_hub**.
-
-## Entrenamiento de los modelos
-- Crear la línea base de los modelos.
-- **hackathon-pln-es/jurisbert-finetuning-ner**
- * Espacio de almacenamiento para almacenar los checkpoints que requerían 1.4 GB de almacenamiento por lo que no podíamos entrenar de forma continua.
- * Los resultados de **F1** eran muy bajos.
- * La cantidad de datos en el corpus era tan elevada y dispareja que el tiempo para entrenar una época era muy alto.
- * Realizar múltiples entrenamientos hasta identificar cuál era el mejor, el cual sería utilizado como base para el entrenamiento siguiente.
- * Fue necesario dar un paso atrás y revisar el dataset para realizar un análisis exploratorio e idear estrategias para balancear la muestra por lo que se acoto a:
-
-| name |train|validation|test|
-|---------|----:|---------:|---:|
-|SCJNNER|1396|345|0|
-
-| annotations|train|validation|test|
-|---------|----:|---------:|---:|
-|LEY|1084|329|0|
-|TRAT_INTL|935|161|0|
-
-- **jurisbert-class-tratados-internacionales-sistema-universal**
- * Se entrenó con un conjunto de datos que consta de 3,799 textos etiquetados en 8 tipos diferentes de convenios.
- * Los textos se transforman utilizando SimpleTransformers, con el que se entrenaron tres épocas usando el modelo base Roberta y el modelo específico Jurisbert, el cual es un modelo de enmascaramiento con corpus jurídico en español.
- * La métrica de evaluación utilizada fue **Accuracy**.
-- **jurisbert-clas-art-convencion-americana-dh**
- * Se entrenó con un conjunto de datos que consta de 6,089 textos etiquetados en 30 tipos diferentes de artículos.
- * Los textos se transforman utilizando SimpleTransformers, con el que se entrenaron tres épocas usando el modelo base Roberta y el modelo específico Jurisbert, el cual es un modelo de enmascaramiento con corpus jurídico en español.
- * La métrica de evaluación utilizada fue **Accuracy**.
-- **jurisbert-tsdae-sentence-transformer**
- * Se entrenó utilizando el dataset scjnugacj/scjn_dataset_corpus_tesis, del cual se tomó una muestra de 25000 ejemplos.
-
-
-### Team
-
-El equipo está conformado por [gpalomeque](https://huggingface.co/GPalomeque), [aureliopvs](https://huggingface.co/aureliopvs), [ceciliamacias](https://huggingface.co/ceciliamacias), [giomadariaga](https://huggingface.co/giomadariaga) y [cattsytabla](https://huggingface.co/cattsytabla)
-
-### Consideraciones generales y futuro
-
-Como parte de pilares del Gobierno Abierto mediante el uso de sus ejes de colaboración e innovación se tiene como meta poder continuar con la creación de modelos que permitan crear plataformas de recuperación de información que brinde de manera oportuna y eficiente datos que agilicen tanto el acceso, así como la impartición de justicia.
-
-"""
-
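A hedged sketch of how the four models listed in the description could be wired together with the `transformers` and `sentence-transformers` libraries. The model ids are the ones named above; the example sentences are invented, and the actual Space uses its own wiring code.

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

ner = pipeline("token-classification",
               model="hackathon-pln-es/jurisbert-finetuning-ner",
               aggregation_strategy="simple")
clf_universal = pipeline("text-classification",
                         model="hackathon-pln-es/jurisbert-class-tratados-internacionales-sistema-universal")
clf_cadh = pipeline("text-classification",
                    model="hackathon-pln-es/jurisbert-clas-art-convencion-americana-dh")
embedder = SentenceTransformer("hackathon-pln-es/jurisbert-tsdae-sentence-transformer")

texto_a_analizar = "Se negó el acceso a la información pública solicitada."  # invented example
texto_a_comparar = "La autoridad restringió la libertad de expresión."       # invented example

print(ner(texto_a_analizar))            # LEY / TRAT_INTL entities
print(clf_universal(texto_a_analizar))  # treaty class (universal system)
print(clf_cadh(texto_a_analizar))       # article of the American Convention
emb = embedder.encode([texto_a_analizar, texto_a_comparar], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))     # similarity between the two texts
```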
diff --git a/spaces/hackathon-pln-es/sonnet-poetry-generator-spanish/README.md b/spaces/hackathon-pln-es/sonnet-poetry-generator-spanish/README.md
deleted file mode 100644
index 3d52f1c67c08bec91312df4eb624336dcabd2f4d..0000000000000000000000000000000000000000
--- a/spaces/hackathon-pln-es/sonnet-poetry-generator-spanish/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Sonnet Poetry Generator Spanish
-emoji: ✍️ 🤗 📜
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 2.8.12
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/__init__.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/harmdevries/transformer_inference/README.md b/spaces/harmdevries/transformer_inference/README.md
deleted file mode 100644
index cf979a849be91594b81064f0aba3760285d1bb44..0000000000000000000000000000000000000000
--- a/spaces/harmdevries/transformer_inference/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mqa
-emoji: 📉
-colorFrom: red
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: cc-by-sa-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py
deleted file mode 100644
index fcf69db1b6e4c687bc4e284e2795cab61ebf043f..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from detectron2.modeling.test_time_augmentation import GeneralizedRCNNWithTTA
-
-
-class DensePoseGeneralizedRCNNWithTTA(GeneralizedRCNNWithTTA):
- def __init__(self, cfg, model, transform_data, tta_mapper=None, batch_size=1):
- """
- Args:
- cfg (CfgNode):
- model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on.
- transform_data (DensePoseTransformData): contains symmetry label
- transforms used for horizontal flip
- tta_mapper (callable): takes a dataset dict and returns a list of
- augmented versions of the dataset dict. Defaults to
- `DatasetMapperTTA(cfg)`.
- batch_size (int): batch the augmented images into this batch size for inference.
- """
- self._transform_data = transform_data
- super().__init__(cfg=cfg, model=model, tta_mapper=tta_mapper, batch_size=batch_size)
-
- # the implementation follows closely the one from detectron2/modeling
- def _inference_one_image(self, input):
- """
- Args:
- input (dict): one dataset dict
-
- Returns:
- dict: one output dict
- """
-
- augmented_inputs, aug_vars = self._get_augmented_inputs(input)
- # Detect boxes from all augmented versions
- with self._turn_off_roi_heads(["mask_on", "keypoint_on", "densepose_on"]):
- # temporarily disable roi heads
- all_boxes, all_scores, all_classes = self._get_augmented_boxes(
- augmented_inputs, aug_vars
- )
- merged_instances = self._merge_detections(
- all_boxes, all_scores, all_classes, (aug_vars["height"], aug_vars["width"])
- )
-
- if self.cfg.MODEL.MASK_ON or self.cfg.MODEL.DENSEPOSE_ON:
- # Use the detected boxes to obtain new fields
- augmented_instances = self._rescale_detected_boxes(
- augmented_inputs, merged_instances, aug_vars
- )
- # run forward on the detected boxes
- outputs = self._batch_inference(
- augmented_inputs, augmented_instances, do_postprocess=False
- )
- # Delete now useless variables to avoid being out of memory
- del augmented_inputs, augmented_instances, merged_instances
- # average the predictions
- if self.cfg.MODEL.MASK_ON:
- outputs[0].pred_masks = self._reduce_pred_masks(outputs, aug_vars)
- if self.cfg.MODEL.DENSEPOSE_ON:
- outputs[0].pred_densepose = self._reduce_pred_densepose(outputs, aug_vars)
- # postprocess
- output = self._detector_postprocess(outputs[0], aug_vars)
- return {"instances": output}
- else:
- return {"instances": merged_instances}
-
- def _reduce_pred_densepose(self, outputs, aug_vars):
- for idx, output in enumerate(outputs):
- if aug_vars["do_hflip"][idx]:
- output.pred_densepose.hflip(self._transform_data)
- # Less memory-intensive averaging
- for attr in "SIUV":
- setattr(
- outputs[0].pred_densepose,
- attr,
- sum(getattr(o.pred_densepose, attr) for o in outputs) / len(outputs),
- )
- return outputs[0].pred_densepose
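The class above follows detectron2's test-time-augmentation flow; stripped of the DensePose specifics, the core idea of predicting on a horizontally flipped copy, un-flipping the result, and averaging looks roughly like this (a generic sketch with placeholder `model` and `image`, not the DensePose code path):

```python
import torch

def tta_hflip_average(model, image):
    """Average a dense prediction over the original and mirrored input."""
    pred = model(image)                                  # (C, H, W) prediction
    pred_flipped = model(torch.flip(image, dims=[-1]))   # predict on the mirrored image
    pred_flipped = torch.flip(pred_flipped, dims=[-1])   # map the prediction back
    return (pred + pred_flipped) / 2
```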
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/common.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/common.py
deleted file mode 100644
index 13bf0dd3ca113e0756d3023e36272675c6b972f9..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/tests/common.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import os
-import torch
-
-from detectron2.config import get_cfg
-from detectron2.engine import default_setup
-from detectron2.modeling import build_model
-
-from densepose import add_dataset_category_config, add_densepose_config
-
-_BASE_CONFIG_DIR = "configs"
-_EVOLUTION_CONFIG_SUB_DIR = "evolution"
-_QUICK_SCHEDULES_CONFIG_SUB_DIR = "quick_schedules"
-_BASE_CONFIG_FILE_PREFIX = "Base-"
-_CONFIG_FILE_EXT = ".yaml"
-
-
-def _get_base_config_dir():
- """
- Return the base directory for configurations
- """
- return os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", _BASE_CONFIG_DIR)
-
-
-def _get_evolution_config_dir():
- """
- Return the base directory for evolution configurations
- """
- return os.path.join(_get_base_config_dir(), _EVOLUTION_CONFIG_SUB_DIR)
-
-
-def _get_quick_schedules_config_dir():
- """
- Return the base directory for quick schedules configurations
- """
- return os.path.join(_get_base_config_dir(), _QUICK_SCHEDULES_CONFIG_SUB_DIR)
-
-
-def _collect_config_files(config_dir):
- """
- Collect all configuration files (i.e. densepose_*.yaml) directly in the specified directory
- """
- start = _get_base_config_dir()
- results = []
- for entry in os.listdir(config_dir):
- path = os.path.join(config_dir, entry)
- if not os.path.isfile(path):
- continue
- _, ext = os.path.splitext(entry)
- if ext != _CONFIG_FILE_EXT:
- continue
- if entry.startswith(_BASE_CONFIG_FILE_PREFIX):
- continue
- config_file = os.path.relpath(path, start)
- results.append(config_file)
- return results
-
-
-def get_config_files():
- """
- Get all the configuration files (relative to the base configuration directory)
- """
- return _collect_config_files(_get_base_config_dir())
-
-
-def get_evolution_config_files():
- """
- Get all the evolution configuration files (relative to the base configuration directory)
- """
- return _collect_config_files(_get_evolution_config_dir())
-
-
-def get_quick_schedules_config_files():
- """
- Get all the quick schedules configuration files (relative to the base configuration directory)
- """
- return _collect_config_files(_get_quick_schedules_config_dir())
-
-
-def _get_model_config(config_file):
- """
- Load and return the configuration from the specified file (relative to the base configuration
- directory)
- """
- cfg = get_cfg()
- add_dataset_category_config(cfg)
- add_densepose_config(cfg)
- path = os.path.join(_get_base_config_dir(), config_file)
- cfg.merge_from_file(path)
- if not torch.cuda.is_available():
- cfg.MODEL.DEVICE = "cpu"
- return cfg
-
-
-def get_model(config_file):
- """
- Get the model from the specified file (relative to the base configuration directory)
- """
- cfg = _get_model_config(config_file)
- return build_model(cfg)
-
-
-def setup(config_file):
- """
- Setup the configuration from the specified file (relative to the base configuration directory)
- """
- cfg = _get_model_config(config_file)
- cfg.freeze()
- default_setup(cfg, {})
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.h b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.h
deleted file mode 100644
index 17afd1196449ecb6376f28961e54b55e1537492f..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.h
+++ /dev/null
@@ -1,88 +0,0 @@
-#pragma once
-
-#include <ATen/ATen.h>
-
-#include <vector>
-
-std::vector<at::Tensor> mean_var_cpu(at::Tensor x);
-std::vector<at::Tensor> mean_var_cuda(at::Tensor x);
-std::vector<at::Tensor> mean_var_cuda_h(at::Tensor x);
-
-at::Tensor forward_cpu(at::Tensor x, at::Tensor mean, at::Tensor var, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-at::Tensor forward_cuda(at::Tensor x, at::Tensor mean, at::Tensor var, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-at::Tensor forward_cuda_h(at::Tensor x, at::Tensor mean, at::Tensor var, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-
-std::vector<at::Tensor> edz_eydz_cpu(at::Tensor z, at::Tensor dz, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-std::vector<at::Tensor> edz_eydz_cuda(at::Tensor z, at::Tensor dz, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-std::vector<at::Tensor> edz_eydz_cuda_h(at::Tensor z, at::Tensor dz, at::Tensor weight, at::Tensor bias,
- bool affine, float eps);
-
-at::Tensor backward_cpu(at::Tensor z, at::Tensor dz, at::Tensor var, at::Tensor weight, at::Tensor bias,
- at::Tensor edz, at::Tensor eydz, bool affine, float eps);
-at::Tensor backward_cuda(at::Tensor z, at::Tensor dz, at::Tensor var, at::Tensor weight, at::Tensor bias,
- at::Tensor edz, at::Tensor eydz, bool affine, float eps);
-at::Tensor backward_cuda_h(at::Tensor z, at::Tensor dz, at::Tensor var, at::Tensor weight, at::Tensor bias,
- at::Tensor edz, at::Tensor eydz, bool affine, float eps);
-
-void leaky_relu_backward_cpu(at::Tensor z, at::Tensor dz, float slope);
-void leaky_relu_backward_cuda(at::Tensor z, at::Tensor dz, float slope);
-void leaky_relu_backward_cuda_h(at::Tensor z, at::Tensor dz, float slope);
-
-void elu_backward_cpu(at::Tensor z, at::Tensor dz);
-void elu_backward_cuda(at::Tensor z, at::Tensor dz);
-
-static void get_dims(at::Tensor x, int64_t& num, int64_t& chn, int64_t& sp) {
- num = x.size(0);
- chn = x.size(1);
- sp = 1;
- for (int64_t i = 2; i < x.ndimension(); ++i)
- sp *= x.size(i);
-}
-
-/*
- * Specialized CUDA reduction functions for BN
- */
-#ifdef __CUDACC__
-
-#include "utils/cuda.cuh"
-
-template<typename T, typename Op>
-__device__ T reduce(Op op, int plane, int N, int S) {
- T sum = (T)0;
- for (int batch = 0; batch < N; ++batch) {
- for (int x = threadIdx.x; x < S; x += blockDim.x) {
- sum += op(batch, plane, x);
- }
- }
-
- // sum over NumThreads within a warp
- sum = warpSum(sum);
-
- // 'transpose', and reduce within warp again
- __shared__ T shared[32];
- __syncthreads();
- if (threadIdx.x % WARP_SIZE == 0) {
- shared[threadIdx.x / WARP_SIZE] = sum;
- }
- if (threadIdx.x >= blockDim.x / WARP_SIZE && threadIdx.x < WARP_SIZE) {
- // zero out the other entries in shared
- shared[threadIdx.x] = (T)0;
- }
- __syncthreads();
- if (threadIdx.x / WARP_SIZE == 0) {
- sum = warpSum(shared[threadIdx.x]);
- if (threadIdx.x == 0) {
- shared[0] = sum;
- }
- }
- __syncthreads();
-
- // Everyone picks it up, should be broadcast into the whole gradInput
- return shared[0];
-}
-#endif
diff --git a/spaces/hekbobo/bingo/src/components/ui/icons.tsx b/spaces/hekbobo/bingo/src/components/ui/icons.tsx
deleted file mode 100644
index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000
--- a/spaces/hekbobo/bingo/src/components/ui/icons.tsx
+++ /dev/null
@@ -1,504 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-function IconNextChat({
- className,
- inverted,
- ...props
-}: React.ComponentProps<'svg'> & { inverted?: boolean }) {
- const id = React.useId()
-
- return (
-
- )
-}
-
-function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconUser({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconMore({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconStop({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconSun({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconClose({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconShare({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconExternalLink({
- className,
- ...props
-}: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-function IconChevronUpDown({
- className,
- ...props
-}: React.ComponentProps<'svg'>) {
- return (
-
- )
-}
-
-export {
- IconEdit,
- IconNextChat,
- IconOpenAI,
- IconGitHub,
- IconSeparator,
- IconArrowDown,
- IconArrowRight,
- IconUser,
- IconPlus,
- IconArrowElbow,
- IconSpinner,
- IconMessage,
- IconTrash,
- IconMore,
- IconRefresh,
- IconStop,
- IconSidebar,
- IconMoon,
- IconSun,
- IconCopy,
- IconCheck,
- IconDownload,
- IconClose,
- IconShare,
- IconUsers,
- IconExternalLink,
- IconChevronUpDown
-}
diff --git a/spaces/hezhaoqia/vits-simple-api/vits/text/ngu_dialect.py b/spaces/hezhaoqia/vits-simple-api/vits/text/ngu_dialect.py
deleted file mode 100644
index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000
--- a/spaces/hezhaoqia/vits-simple-api/vits/text/ngu_dialect.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import re
-import opencc
-
-
-dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
- 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
- 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
- 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
- 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
- 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
-converters = {}
-
-for dialect in dialects.values():
- try:
- converters[dialect] = opencc.OpenCC(dialect)
- except Exception:
- # skip dialects whose OpenCC conversion config is unavailable
- pass
-
-
-def ngu_dialect_to_ipa(text, dialect):
- dialect = dialects[dialect]
- text = converters[dialect].convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
- text = re.sub(r'\s*?\s*', '? ', text)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
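A hypothetical usage of `ngu_dialect_to_ipa`: the import path assumes the repository layout above, the input sentence is invented, and the call only works if the corresponding OpenCC dialect config (here `suzhou` for the `'SZ'` key) was successfully loaded into `converters`.

```python
from vits.text.ngu_dialect import ngu_dialect_to_ipa

# Convert an invented Suzhounese sentence to the IPA-like form used by the TTS frontend.
print(ngu_dialect_to_ipa("你好,吃饭了吗?", "SZ"))
```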
diff --git a/spaces/hf4all/web-ui/404.html b/spaces/hf4all/web-ui/404.html
deleted file mode 100644
index ffb373d061ee3f950f0952435efd1ee567baa02f..0000000000000000000000000000000000000000
--- a/spaces/hf4all/web-ui/404.html
+++ /dev/null
@@ -1 +0,0 @@
-404: This page could not be found
404
This page could not be found.
\ No newline at end of file
diff --git a/spaces/hkunlp/Binder/nsql/parser.py b/spaces/hkunlp/Binder/nsql/parser.py
deleted file mode 100644
index 7772b2f284cff94f2faaea3f6b747b5960bcd4c1..0000000000000000000000000000000000000000
--- a/spaces/hkunlp/Binder/nsql/parser.py
+++ /dev/null
@@ -1,179 +0,0 @@
-from typing import List
-import re
-import sqlparse
-
-
-class TreeNode(object):
- def __init__(self, name=None, father=None):
- self.name: str = name
- self.rename: str = name
- self.father: TreeNode = father
- self.children: List = []
- self.produced_col_name_s = None
-
- def __eq__(self, other):
- return self.rename == other.rename
-
- def __hash__(self):
- return hash(self.rename)
-
- def set_name(self, name):
- self.name = name
- self.rename = name
-
- def add_child(self, child):
- self.children.append(child)
- child.father = self
-
- def rename_father_col(self, col_idx: int, col_prefix: str = "col_"):
- new_col_name = "{}{}".format(col_prefix, col_idx)
- self.father.rename = self.father.rename.replace(self.name, "{}".format(new_col_name))
- self.produced_col_name_s = [new_col_name] # fixme when multiple outputs for a qa func
-
- def rename_father_val(self, val_names):
- if len(val_names) == 1:
- val_name = val_names[0]
- new_val_equals_str = "'{}'".format(val_name) if isinstance(convert_type(val_name), str) else "{}".format(
- val_name)
- else:
- new_val_equals_str = '({})'.format(', '.join(["'{}'".format(val_name) for val_name in val_names]))
- self.father.rename = self.father.rename.replace(self.name, new_val_equals_str)
-
-
-def get_cfg_tree(nsql: str):
- """
- Parse QA() into a tree for execution guiding.
- @param nsql:
- @return:
- """
-
- stack: List = [] # Saving the state of the char.
- expression_stack: List = [] # Saving the state of the expression.
- current_tree_node = TreeNode(name=nsql)
-
- for idx in range(len(nsql)):
- if nsql[idx] == "(":
- stack.append(idx)
- if idx > 1 and nsql[idx - 2:idx + 1] == "QA(" and idx - 2 != 0:
- tree_node = TreeNode()
- current_tree_node.add_child(tree_node)
- expression_stack.append(current_tree_node)
- current_tree_node = tree_node
- elif nsql[idx] == ")":
- left_clause_idx = stack.pop()
- if idx > 1 and nsql[left_clause_idx - 2:left_clause_idx + 1] == "QA(" and left_clause_idx - 2 != 0:
- # the QA clause
- nsql_span = nsql[left_clause_idx - 2:idx + 1]
- current_tree_node.set_name(nsql_span)
- current_tree_node = expression_stack.pop()
-
- return current_tree_node
-
-
-def get_steps(tree_node: TreeNode, steps: List):
- """Pred-Order Traversal"""
- for child in tree_node.children:
- get_steps(child, steps)
- steps.append(tree_node)
-
-
-def parse_question_paras(nsql: str, qa_model):
- # We assume there's no nested qa inside when running this func
- nsql = nsql.strip(" ;")
- assert nsql[:3] == "QA(" and nsql[-1] == ")", "must start with QA( symbol and end with )"
- assert not "QA" in nsql[2:-1], "must have no nested qa inside"
-
- # Get question and the left part(paras_raw_str)
- all_quote_idx = [i.start() for i in re.finditer('\"', nsql)]
- question = nsql[all_quote_idx[0] + 1: all_quote_idx[1]]
- paras_raw_str = nsql[all_quote_idx[1] + 1:-1].strip(" ;")
-
- # Split Parameters(SQL/column/value) from all parameters.
- paras = [_para.strip(' ;') for _para in sqlparse.split(paras_raw_str)]
- return question, paras
-
-
-def convert_type(value):
- try:
- return eval(value)
- except Exception as e:
- return value
-
-
-def nsql_role_recognize(nsql_like_str, all_headers, all_passage_titles, all_image_titles):
- """Recognize role. (SQL/column/value) """
- orig_nsql_like_str = nsql_like_str
-
- # strip the first and the last '`'
- if nsql_like_str.startswith('`') and nsql_like_str.endswith('`'):
- nsql_like_str = nsql_like_str[1:-1]
-
- # Case 1: if col in header, it is column type.
- if nsql_like_str in all_headers or nsql_like_str in list(map(lambda x: x.lower(), all_headers)):
- return 'col', orig_nsql_like_str
-
- # fixme: add case when the this nsql_like_str both in table headers, images title and in passages title.
- # Case 2.1: if it is title of certain passage.
- if (nsql_like_str.lower() in list(map(lambda x: x.lower(), all_passage_titles))) \
- and (nsql_like_str.lower() in list(map(lambda x: x.lower(), all_image_titles))):
- return "passage_title_and_image_title", orig_nsql_like_str
- else:
- try:
- nsql_like_str_evaled = str(eval(nsql_like_str))
- if (nsql_like_str_evaled.lower() in list(map(lambda x: x.lower(), all_passage_titles))) \
- and (nsql_like_str_evaled.lower() in list(map(lambda x: x.lower(), all_image_titles))):
- return "passage_title_and_image_title", nsql_like_str_evaled
- except Exception:
- pass
-
- # Case 2.2: if it is title of certain passage.
- if nsql_like_str.lower() in list(map(lambda x: x.lower(), all_passage_titles)):
- return "passage_title", orig_nsql_like_str
- else:
- try:
- nsql_like_str_evaled = str(eval(nsql_like_str))
- if nsql_like_str_evaled.lower() in list(map(lambda x: x.lower(), all_passage_titles)):
- return "passage_title", nsql_like_str_evaled
- except Exception:
- pass
-
- # Case 2.3: if it is title of certain picture.
- if nsql_like_str.lower() in list(map(lambda x: x.lower(), all_image_titles)):
- return "image_title", orig_nsql_like_str
- else:
- try:
- nsql_like_str_evaled = str(eval(nsql_like_str))
- if nsql_like_str_evaled.lower() in list(map(lambda x: x.lower(), all_image_titles)):
- return "image_title", nsql_like_str_evaled
- except Exception:
- pass
-
- # Case 4: if it can be parsed by eval(), it is value type.
- try:
- eval(nsql_like_str)
- return 'val', orig_nsql_like_str
- except Exception as e:
- pass
-
- # Case 5: else it should be the sql, if it isn't, exception will be raised.
- return 'complete_sql', orig_nsql_like_str
-
-
-def remove_duplicate(original_list):
- no_duplicate_list = []
- for item in original_list:
- if item not in no_duplicate_list:
- no_duplicate_list.append(item)
- return no_duplicate_list
-
-
-def extract_answers(sub_table):
- if not sub_table or sub_table['header'] is None:
- return []
- answer = []
- if 'row_id' in sub_table['header']:
- for _row in sub_table['rows']:
- answer.extend(_row[1:])
- return answer
- else:
- for _row in sub_table['rows']:
- answer.extend(_row)
- return answer
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_dummy_task_with_mean_over_all_tasks.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_dummy_task_with_mean_over_all_tasks.py
deleted file mode 100644
index 670bf20c71e777d34afac31a729e0da2e6d9c6cd..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/add_dummy_task_with_mean_over_all_tasks.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import json
-import numpy as np
-from batchgenerators.utilities.file_and_folder_operations import subfiles
-import os
-from collections import OrderedDict
-
-folder = "/home/fabian/drives/E132-Projekte/Projects/2018_MedicalDecathlon/Leaderboard"
-task_descriptors = ['2D final 2',
- '2D final, less pool, dc and topK, fold0',
- '2D final pseudo3d 7, fold0',
- '2D final, less pool, dc and ce, fold0',
- '3D stage0 final 2, fold0',
- '3D fullres final 2, fold0']
-task_ids_with_no_stage0 = ["Task001_BrainTumour", "Task004_Hippocampus", "Task005_Prostate"]
-
-mean_scores = OrderedDict()
-for t in task_descriptors:
- mean_scores[t] = OrderedDict()
-
-json_files = subfiles(folder, True, None, ".json", True)
-json_files = [i for i in json_files if not i.split("/")[-1].startswith(".")] # stupid mac
-for j in json_files:
- with open(j, 'r') as f:
- res = json.load(f)
- task = res['task']
- if task != "Task999_ALL":
- name = res['name']
- if name in task_descriptors:
- if task not in list(mean_scores[name].keys()):
- mean_scores[name][task] = res['results']['mean']['mean']
- else:
- raise RuntimeError("duplicate task %s for description %s" % (task, name))
-
-for t in task_ids_with_no_stage0:
- mean_scores["3D stage0 final 2, fold0"][t] = mean_scores["3D fullres final 2, fold0"][t]
-
-a = set()
-for i in mean_scores.keys():
- a = a.union(list(mean_scores[i].keys()))
-
-for i in mean_scores.keys():
- try:
- for t in list(a):
- assert t in mean_scores[i].keys(), "did not find task %s for experiment %s" % (t, i)
- new_res = OrderedDict()
- new_res['name'] = i
- new_res['author'] = "Fabian"
- new_res['task'] = "Task999_ALL"
- new_res['results'] = OrderedDict()
- new_res['results']['mean'] = OrderedDict()
- new_res['results']['mean']['mean'] = OrderedDict()
- tasks = list(mean_scores[i].keys())
- metrics = mean_scores[i][tasks[0]].keys()
- for m in metrics:
- foreground_values = [mean_scores[i][n][m] for n in tasks]
- new_res['results']['mean']["mean"][m] = np.nanmean(foreground_values)
- output_fname = i.replace(" ", "_") + "_globalMean.json"
- with open(os.path.join(folder, output_fname), 'w') as f:
- json.dump(new_res, f)
- except AssertionError:
- print("could not process experiment %s" % i)
- print("did not find task %s for experiment %s" % (t, i))
-
diff --git a/spaces/huaiji3y/bingo-Public/src/pages/api/proxy.ts b/spaces/huaiji3y/bingo-Public/src/pages/api/proxy.ts
deleted file mode 100644
index 240b5fb5561d993c6381649bf4544ce12f3cdab2..0000000000000000000000000000000000000000
--- a/spaces/huaiji3y/bingo-Public/src/pages/api/proxy.ts
+++ /dev/null
@@ -1,24 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import { fetch } from '@/lib/isomorphic'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { url, headers, method = 'GET', body } = req.body
- if (!url) {
- return res.end('ok')
- }
- const response = await fetch(url, { headers, method, body, redirect: 'manual' })
- const text = await response.text()
- res.writeHead(200, {
- 'Content-Type': 'application/text',
- 'x-url': response.url,
- 'x-status': response.status,
- })
- res.end(text)
-  } catch (e) {
-    console.log(e)
-    // res.end expects a string or Buffer, so stringify the error instead of passing the Error object
-    return res.end(e instanceof Error ? e.message : String(e))
-  }
-}
diff --git a/spaces/huggingface-projects/diffuse-the-rest/README.md b/spaces/huggingface-projects/diffuse-the-rest/README.md
deleted file mode 100644
index 0970cc743bba0c1ea9ca487f6e8888917fd4bd74..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/diffuse-the-rest/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Diffuse The Rest
-emoji: 🦉
-colorFrom: indigo
-colorTo: green
-sdk: static
-pinned: false
-app_file: build/index.html
----
-
-# Diffuse The Rest
-
-To develop locally:
-
-```
-git clone https://huggingface.co/spaces/huggingface-projects/diffuse-the-rest
-cd diffuse-the-rest
-npm ci
-NODE_ENV="development" npm run dev -- --open
-```
diff --git a/spaces/huggingface-projects/diffuse-the-rest/vite.config.js b/spaces/huggingface-projects/diffuse-the-rest/vite.config.js
deleted file mode 100644
index 8747050534d8417cdf8d5d0535bc5d4edba4046d..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/diffuse-the-rest/vite.config.js
+++ /dev/null
@@ -1,8 +0,0 @@
-import { sveltekit } from '@sveltejs/kit/vite';
-
-/** @type {import('vite').UserConfig} */
-const config = {
- plugins: [sveltekit()]
-};
-
-export default config;
diff --git a/spaces/hzwluoye/gpt4/g4f/Provider/__init__.py b/spaces/hzwluoye/gpt4/g4f/Provider/__init__.py
deleted file mode 100644
index 63c445f1deb7ec50e91680da122450b842cda3fc..0000000000000000000000000000000000000000
--- a/spaces/hzwluoye/gpt4/g4f/Provider/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from . import Provider
-from .Providers import (
- Chimera,
-)
diff --git a/spaces/iccv23-diffusers-demo/LoraTheExplorer/custom.css b/spaces/iccv23-diffusers-demo/LoraTheExplorer/custom.css
deleted file mode 100644
index ff1e95e9a829e666770be4ca98b8c6c4fd7326e7..0000000000000000000000000000000000000000
--- a/spaces/iccv23-diffusers-demo/LoraTheExplorer/custom.css
+++ /dev/null
@@ -1,31 +0,0 @@
-#title{text-align: center;}
-#title h1{font-size: 3em; display:inline-flex; align-items:center}
-#title img{width: 100px; margin-right: 0.5em}
-#prompt input{width: calc(100% - 160px);border-top-right-radius: 0px;border-bottom-right-radius: 0px;}
-#run_button{position:absolute;margin-top: 11px;right: 0;margin-right: 0.8em;border-bottom-left-radius: 0px;border-top-left-radius: 0px;}
-#gallery{display:flex;}
-#gallery .grid-wrap{min-height: 100%;}
-#accordion code{word-break: break-all;word-wrap: break-word;white-space: pre-wrap}
-#soon{opacity: 0.55; pointer-events: none}
-#soon button{width: 100%}
-#share-btn-container {padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; max-width: 13rem; margin-left: auto;}
-div#share-btn-container > div {flex-direction: row;background: black;align-items: center}
-#share-btn-container:hover {background-color: #060606}
-#share-btn {all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.5rem !important; padding-bottom: 0.5rem !important;right:0;}
-#share-btn * {all: unset}
-#share-btn-container div:nth-child(-n+2){width: auto !important;min-height: 0px !important;}
-#share-btn-container .wrap {display: none !important}
-#share-btn-container.hidden {display: none!important}
-#extra_info{margin-top: 1em}
-.pending .min {min-height: auto}
-#gallery_box{padding-top: 0}
-#gallery_box .form{border: 0 !important}
-#order_radio{border: 0;padding-left: 0}
-#order_radio .form{border:0 !important; padding-bottom: 0.25em}
-#order_radio [data-testid="block-info"]{float: left;margin-top: 2px;margin-right: 6px}
-#order_radio label{padding: 0.25em 0.75em !important;font-size: 85% !important}
-@media (max-width: 527px) {
- #title h1{font-size: 2.2em}
- #title img{width: 80px;}
- #gallery {max-height: 370px}
-}
\ No newline at end of file
diff --git a/spaces/innnky/soft-vits-vc/monotonic_align/__init__.py b/spaces/innnky/soft-vits-vc/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/innnky/soft-vits-vc/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/innnky/vits-nyaru/monotonic_align/__init__.py b/spaces/innnky/vits-nyaru/monotonic_align/__init__.py
deleted file mode 100644
index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000
--- a/spaces/innnky/vits-nyaru/monotonic_align/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import numpy as np
-import torch
-from .monotonic_align.core import maximum_path_c
-
-
-def maximum_path(neg_cent, mask):
- """ Cython optimized version.
- neg_cent: [b, t_t, t_s]
- mask: [b, t_t, t_s]
- """
- device = neg_cent.device
- dtype = neg_cent.dtype
- neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
- path = np.zeros(neg_cent.shape, dtype=np.int32)
-
- t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
- t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
- maximum_path_c(path, neg_cent, t_t_max, t_s_max)
- return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/((HOT)) Download Bhaiyyaji Superhit Movie In 720p Movies.md b/spaces/inplisQlawa/anything-midjourney-v4-1/((HOT)) Download Bhaiyyaji Superhit Movie In 720p Movies.md
deleted file mode 100644
index db7394d456a26fa1f2e7cf11255e1f5573f9eb85..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/((HOT)) Download Bhaiyyaji Superhit Movie In 720p Movies.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
Preity Zinta says that the don's-wife character she plays in her upcoming movie Bhaiaji Superhit is not drawn from her own life. When the actress appeared at a recent festival in Kerala, she confirmed that she has no personal history with her character in Bhaiaji.
-She said, "I don't have a story for Bhaiaji because I didn't do one film."
-Asked about this in an interview, she explained, "I didn't do one film, but I did the movie Sujay. I didn't do this movie because I wasn't in the area. I was filming Mukham Sujay because I was in Sagar State."
-
-
-
diff --git "a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Universal Patcher (Latest CC 2014) Is Here\302\240!!! WORK.md" "b/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Universal Patcher (Latest CC 2014) Is Here\302\240!!! WORK.md"
deleted file mode 100644
index 520deb0ac22c4d546b491fb1085fe3caa60d2d78..0000000000000000000000000000000000000000
--- "a/spaces/inplisQlawa/anything-midjourney-v4-1/Adobe Universal Patcher (Latest CC 2014) Is Here\302\240!!! WORK.md"
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Universal Patcher (Latest CC 2014) is Here !!!
-
-HP provides the DMIFIT and WNDMIFIT tools for re-flashing the DMI region.
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mapper Denon Mc6000 Virtual Dj 8 Crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mapper Denon Mc6000 Virtual Dj 8 Crack.md
deleted file mode 100644
index 28e2e6ed53c9f5af34675fbeff73a375d08464e2..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mapper Denon Mc6000 Virtual Dj 8 Crack.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
The Denon MC6000 Mk2 is a DJ controller that has been upgraded to work with DJ software such as Traktor and Virtual DJ. Oct 16, 2017: a video shows the complete mapping for the Denon MC4000 and the Denon DN-MC6000.
-
The mapping is the same for the Denon MC4000 and the Denon DN-MC6000, and a mapping tutorial is available at inmap.org/en/denon-dj-controlers/mc4000. According to the tutorial's author, the Denon MC4000 does not support mapping in Virtual DJ 8, and the tutorial shows how to map it.
-
Virtual DJ 8 uses its mapping editor to let DJs tweak their controller mappings, and it works with Denon DJ controllers such as the MC6000.
-
Virtual DJ Pro 8 was released last week for Windows and Mac OS X platforms, and the Pro version has finally arrived; a free version of the software is available for those who are not ready to use the full version.
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/MyLanViewer V4.18.6 Incl Patch [BETTER].md b/spaces/inplisQlawa/anything-midjourney-v4-1/MyLanViewer V4.18.6 Incl Patch [BETTER].md
deleted file mode 100644
index 578e5c029a4dbe1d16325127550cf4bf0b96b0c4..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/MyLanViewer V4.18.6 Incl Patch [BETTER].md
+++ /dev/null
@@ -1,168 +0,0 @@
-
-
MyLanViewer v4.18.6 Incl Patch: A Powerful Network Tool for Windows
-
-
If you are looking for a network tool that can help you scan and manage your local area network (LAN), you might want to try MyLanViewer v4.18.6 Incl Patch. This is a software that can help you find all IP addresses, MAC addresses and shared folders of computers on your wired or wireless (Wi-Fi) network. It can also perform remote operations such as shutdown, wake on LAN, view and control shared folders, terminate user sessions and more.
-
-
In this article, we will give you a brief overview of MyLanViewer v4.18.6 Incl Patch, its features and benefits, and how to download and install it on your Windows PC.
-
-
What is MyLanViewer v4.18.6 Incl Patch?
-
-
MyLanViewer v4.18.6 Incl Patch is a network tool that can help you scan and manage your LAN. It is developed by S.K. Software, a company that specializes in network utilities and security software.
-
-
MyLanViewer v4.18.6 Incl Patch has several functions that can help you monitor and control your network computers, such as:
-
-
-
Network/IP Scanner: This function can scan your network and display your network computers in an easy to read, buddy-list style window that provides the computer name, IP address, MAC address, NIC vendor, OS version, logged users, shared folders and other technical details for each computer.
-
Remote Operations: This function can turn on and off remote computers, view and control your shared folders, terminate user sessions, show netstat information, detect rogue DHCP servers and other network tools.
-
Wake On LAN: This function can send magic packets to wake up remote computers that support the Wake-on-LAN technology (a minimal sketch of what such a packet looks like appears after this list).
-
External IP Monitor: This function can monitor your external IP address and send email notifications when it changes.
-
Network Alerts: This function can monitor all devices (even hidden ones) on your subnet and send alerts when new devices are found (for example, to know who is connected to your WiFi router or wireless network).
-
-
-
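-As a concrete illustration of the Wake On LAN item above: a "magic packet" has a very simple, well-documented layout of six 0xFF bytes followed by the target's MAC address repeated sixteen times, sent as a UDP broadcast (commonly to port 7 or 9). The minimal Python sketch below shows that layout; it is not MyLanViewer's own code, the MAC address is a placeholder, and the broadcast address and port are assumptions you may need to adjust for your own network.
-
-```
-import socket
-
-def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
-    """Send a standard Wake-on-LAN magic packet for the given MAC address."""
-    # Strip common separators and convert the MAC into its six raw bytes.
-    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
-    # Magic packet layout: 6 x 0xFF, then the MAC repeated 16 times.
-    packet = b"\xff" * 6 + mac_bytes * 16
-    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
-        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
-        sock.sendto(packet, (broadcast, port))
-
-send_magic_packet("00:11:22:33:44:55")  # placeholder MAC; replace with the target computer's MAC
-```
-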
MyLanViewer v4.18.6 Incl Patch is easy to install and use, and has a user-friendly and beautiful interface. It supports IPv4 and IPv6 protocol.
-
-
What are the features and benefits of MyLanViewer v4.18.6 Incl Patch?
-
-
MyLanViewer v4.18.6 Incl Patch has many features and benefits that can help you scan and manage your LAN efficiently and effectively, such as:
-
-
-
It can help you find all IP addresses, MAC addresses and shared folders of computers on your network.
-
It can help you perform remote operations such as shutdown, wake on LAN, view and control shared folders, terminate user sessions and more.
-
It can help you monitor your external IP address and send email notifications when it changes.
-
It can help you monitor all devices (even hidden ones) on your subnet and send alerts when new devices are found.
-
It can help you protect your network from unwanted magic packets, broadcast traffic and rogue DHCP servers.
-
It can help you reduce the load on the network infrastructure between subnets.
-
It can help you save time and energy by automating network management tasks.
-
It can help you improve network security and performance by detecting network problems and resolving them quickly.
-
-
-
How to download and install MyLanViewer v4.18.6 Incl Patch?
-
-
If you want to download and install MyLanViewer v4.18.6 Incl Patch on your Windows PC, you can follow these steps:
-
-
-
-
Go to the official website of S.K. Software at http://mylanviewer.com/
-
Click on the Download button next to MyLanViewer Network/IP Scanner (Trial).
-
Save the file MyLanViewer-Setup.exe on your computer.
-
Run the file MyLanViewer-Setup.exe to start the installation process.
-
Follow the instructions on the screen to complete the installation process.
-
Run the program MyLanViewer from the Start menu or desktop shortcut.
-
To activate the full version of MyLanViewer v4.18.6 Incl Patch, copy the file patch.exe from the downloaded folder to the installation folder of MyLanViewer (usually C:\Program Files\MyLanViewer).
-
Run the file patch.exe as administrator and click on Patch button.
-
You have successfully installed MyLanViewer v4.18.6 Incl Patch on your Windows PC.
-
-
-
-
How to use MyLanViewer v4.18.6 Incl Patch to scan and manage your LAN?
-
-
Using MyLanViewer v4.18.6 Incl Patch to scan and manage your LAN is very easy and intuitive. Here are some steps you can follow to get started:
-
-
-
After installing MyLanViewer v4.18.6 Incl Patch on your Windows PC, run the program from the Start menu or desktop shortcut.
-
The main window of MyLanViewer v4.18.6 Incl Patch will show you four tabs: Scanner, History, Favorites and Subnet Monitor.
-
To scan your network, click on the Scanner tab and then click on the Quick Scan button or press F5 on your keyboard. You can also choose Full Scan or Custom Scan from the Commands menu.
-
The program will scan your network and display your network computers in a list that provides the computer name, IP address, MAC address, NIC vendor, OS version, logged users, shared folders and other technical details for each computer.
-
To perform remote operations on a network computer, right-click on it and choose from the context menu. You can turn on and off remote computers, view and control your shared folders, terminate user sessions, show netstat information, detect rogue DHCP servers and other network tools.
-
To send magic packets to wake up remote computers that support the Wake-on-LAN technology, click on the Tools menu and choose Wake On LAN Manager. You can add or edit computers in the list and then click the Wake Up button or press F9 on your keyboard.
-
To monitor your external IP address and send email notifications when it changes, click on the Tools menu and choose External IP Monitor. You can configure your email settings and enable or disable notifications (the polling idea behind this feature is sketched after this list).
-
To monitor all devices (even hidden ones) on your subnet and send alerts when new devices are found, click on the Subnet Monitor tab and then click the Start button or press F8 on your keyboard.
-
-
-
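-To make the External IP Monitor step above more concrete: the idea behind such a feature is simple polling, that is, periodically fetching your public address from an external service, comparing it with the last value seen, and notifying you on change. The sketch below is a minimal, hypothetical illustration of that loop in Python; it uses the public api.ipify.org service as an example (an assumption on our part, not something MyLanViewer itself necessarily uses), and it just prints instead of sending an email.
-
-```
-import time
-import urllib.request
-
-def get_external_ip() -> str:
-    # api.ipify.org returns the caller's public IP address as plain text.
-    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
-        return resp.read().decode("ascii").strip()
-
-def monitor_external_ip(interval_seconds: int = 300) -> None:
-    last_ip = None
-    while True:
-        try:
-            current_ip = get_external_ip()
-            if last_ip is not None and current_ip != last_ip:
-                # A real monitor would send an email notification here instead of printing.
-                print(f"External IP changed: {last_ip} -> {current_ip}")
-            last_ip = current_ip
-        except OSError as exc:
-            print(f"Could not determine external IP: {exc}")
-        time.sleep(interval_seconds)
-
-monitor_external_ip()  # runs until interrupted
-```
-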
MyLanViewer v4.18.6 Incl Patch also has many other features and options that you can explore by browsing the menus and dialogs of the program.
-
-
What are the pros and cons of MyLanViewer v4.18.6 Incl Patch?
-
-
MyLanViewer v4.18.6 Incl Patch is a powerful network tool for Windows that has many pros and cons that you should consider before using it. Here are some of them:
-
-
Pros
-
-
-
It can help you scan and manage your LAN easily and effectively.
-
It has many features and benefits that can improve your network security and performance.
-
It is easy to install and use, and has a user-friendly and beautiful interface.
-
It supports IPv4 and IPv6 protocol.
-
It is available as a trial version that you can download for free.
-
-
-
Cons
-
-
-
It is not compatible with other operating systems besides Windows.
-
It may not work well with some firewalls or antivirus software.
-
It may cause some network traffic or interference when scanning or performing remote operations.
-
It may not detect some devices or computers that have different settings or configurations.
-
It requires a license key to activate the full version of the program.
-
-
-
-
How to troubleshoot MyLanViewer v4.18.6 Incl Patch?
-
-
MyLanViewer v4.18.6 Incl Patch is a reliable and stable network tool for Windows, but sometimes it may encounter some problems or errors that can affect its performance or functionality. Here are some common issues and solutions that you can try to troubleshoot MyLanViewer v4.18.6 Incl Patch:
-
-
-
If MyLanViewer v4.18.6 Incl Patch cannot scan your network or find any devices or computers, you should check your network settings and make sure that your firewall or antivirus software is not blocking the program. You should also make sure that your devices or computers are turned on and connected to the same network (a quick way to check that a machine is reachable at all is sketched after this list).
-
If MyLanViewer v4.18.6 Incl Patch cannot perform remote operations on a network computer, you should check the permissions and credentials of the remote computer and make sure that they match with the ones you entered in the program. You should also make sure that the remote computer supports the remote operation you want to perform.
-
If MyLanViewer v4.18.6 Incl Patch cannot send or receive magic packets for wake on LAN, you should check the MAC address and IP address of the target computer and make sure that they are correct and valid. You should also make sure that the target computer supports the Wake-on-LAN technology and has it enabled in its BIOS settings.
-
If MyLanViewer v4.18.6 Incl Patch cannot monitor your external IP address or send email notifications when it changes, you should check your internet connection and make sure that it is working properly. You should also check your email settings and make sure that they are correct and valid.
-
If MyLanViewer v4.18.6 Incl Patch cannot monitor your subnet or send alerts when new devices are found, you should check your subnet settings and make sure that they are correct and valid. You should also make sure that your network devices are configured properly and have unique IP addresses.
-
-
-
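-If you want to rule out basic connectivity problems before digging into the program's settings, a quick reachability check of a suspect machine is often enough. The hedged Python sketch below simply shells out to the system ping command once; the flags differ between Windows and Unix-like systems, and the address shown is a placeholder for a host on your own LAN.
-
-```
-import platform
-import subprocess
-
-def is_host_reachable(ip: str) -> bool:
-    """Ping a host once and report whether it answered (a rough connectivity check)."""
-    on_windows = platform.system().lower() == "windows"
-    # Windows ping uses -n for the packet count; most Unix-like pings use -c.
-    count_flag = "-n" if on_windows else "-c"
-    result = subprocess.run(
-        ["ping", count_flag, "1", ip],
-        stdout=subprocess.DEVNULL,
-        stderr=subprocess.DEVNULL,
-    )
-    return result.returncode == 0
-
-print(is_host_reachable("192.168.1.10"))  # placeholder address on your own LAN
-```
-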
If none of these solutions work for you, you can contact the support team of S.K. Software at support@mylanviewer.com and report your problem or error. They will try to help you as soon as possible.
-
-
How to uninstall MyLanViewer v4.18.6 Incl Patch?
-
-
If you want to uninstall MyLanViewer v4.18.6 Incl Patch from your Windows PC, you can follow these steps:
-
-
-
Close the program MyLanViewer if it is running.
-
Go to the Start menu and choose Control Panel.
-
Click on Programs and Features or Add or Remove Programs.
-
Find MyLanViewer in the list of installed programs and click on Uninstall or Remove.
-
Follow the instructions on the screen to complete the uninstallation process.
-
Delete the folder MyLanViewer from your installation directory (usually C:\Program Files\MyLanViewer).
-
Delete any shortcuts or icons of MyLanViewer from your desktop or start menu.
-
You have successfully uninstalled MyLanViewer v4.18.6 Incl Patch from your Windows PC.
-
-
-
Conclusion
-
-
MyLanViewer v4.18.6 Incl Patch is a powerful network tool for Windows that can help you scan and manage your LAN easily and effectively. It has many features and benefits that can improve your network security and performance. It is easy to install and use, and has a user-friendly and beautiful interface.
-
-
If you want to download MyLanViewer v4.18.6 Incl Patch for free,
-you can go to http://mylanviewer.com/ or https://nsaneforums.com/topic/241613-mylanviewer-4186-portable/.
-You can also read more reviews about it on https://www.softpedia.com/get/Network-Tools/Network-IP-Scanner/MyLanViewer.shtml or https://naturopathicdoctors.com/wp-content/uploads/2022/11/MyLanViewer_v4186_Incl_Patch.pdf.
-
-
We hope you enjoyed this article about MyLanViewer v4.18.6 Incl Patch
-and learned something new about this amazing network tool.
-
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/!!HOT!! Downloadbukufilsafatpendidikanislam13.md b/spaces/inreVtussa/clothingai/Examples/!!HOT!! Downloadbukufilsafatpendidikanislam13.md
deleted file mode 100644
index 9cb00718db693e86cfc395819e9e6c0addea98fa..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/!!HOT!! Downloadbukufilsafatpendidikanislam13.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Downloadbukufilsafatpendidikanislam13: This piece discusses the philosophy of education in relation to the science of education. ... 12-13. 3 H.M. Arifin ...
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Comentariu In Limba Romana Pes 2013 [BEST] Download Torent.md b/spaces/inreVtussa/clothingai/Examples/Comentariu In Limba Romana Pes 2013 [BEST] Download Torent.md
deleted file mode 100644
index 0cf51ffe880cd9257f7f31c5df623a6c264d9401..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Comentariu In Limba Romana Pes 2013 [BEST] Download Torent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
comentariu in limba romana pes 2013 download torent
',
-)
-
-iface.launch(debug=True, enable_queue=True)
diff --git a/spaces/jmesikto/whisper-webui/src/source.py b/spaces/jmesikto/whisper-webui/src/source.py
deleted file mode 100644
index e304e278bfae8ef289c999fc76311ce01b547991..0000000000000000000000000000000000000000
--- a/spaces/jmesikto/whisper-webui/src/source.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Gradio seems to truncate files without keeping the extension, so we need to truncate the file prefix ourselves
-import os
-import pathlib
-from typing import List
-import zipfile
-
-import ffmpeg
-from more_itertools import unzip
-
-from src.download import ExceededMaximumDuration, download_url
-
-MAX_FILE_PREFIX_LENGTH = 17
-
-class AudioSource:
- def __init__(self, source_path, source_name = None, audio_duration = None):
- self.source_path = source_path
- self.source_name = source_name
- self._audio_duration = audio_duration
-
- # Load source name if not provided
- if (self.source_name is None):
- file_path = pathlib.Path(self.source_path)
- self.source_name = file_path.name
-
- def get_audio_duration(self):
- if self._audio_duration is None:
- self._audio_duration = float(ffmpeg.probe(self.source_path)["format"]["duration"])
-
- return self._audio_duration
-
- def get_full_name(self):
- return self.source_name
-
- def get_short_name(self, max_length: int = MAX_FILE_PREFIX_LENGTH):
- file_path = pathlib.Path(self.source_name)
- short_name = file_path.stem[:max_length] + file_path.suffix
-
- return short_name
-
- def __str__(self) -> str:
- return self.source_path
-
-class AudioSourceCollection:
- def __init__(self, sources: List[AudioSource]):
- self.sources = sources
-
- def __iter__(self):
- return iter(self.sources)
-
-def get_audio_source_collection(urlData: str, multipleFiles: List, microphoneData: str, input_audio_max_duration: float = -1) -> List[AudioSource]:
- output: List[AudioSource] = []
-
- if urlData:
- # Download from YouTube. This could also be a playlist or a channel.
- output.extend([ AudioSource(x) for x in download_url(urlData, input_audio_max_duration, playlistItems=None) ])
- else:
- # Add input files
- if (multipleFiles is not None):
- output.extend([ AudioSource(x.name) for x in multipleFiles ])
- if (microphoneData is not None):
- output.append(AudioSource(microphoneData))
-
- total_duration = 0
-
- # Calculate total audio length. We do this even if input_audio_max_duration
- # is disabled to ensure that all the audio files are valid.
- for source in output:
- audioDuration = ffmpeg.probe(source.source_path)["format"]["duration"]
- total_duration += float(audioDuration)
-
- # Save audio duration
- source._audio_duration = float(audioDuration)
-
- # Ensure the total duration of the audio is not too long
- if input_audio_max_duration > 0:
- if float(total_duration) > input_audio_max_duration:
- raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=input_audio_max_duration, message="Video(s) is too long")
-
- # Return a list of audio sources
- return output
\ No newline at end of file
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageMath.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageMath.py
deleted file mode 100644
index ac7d36b698c2ec9839d8a771734c9f730f701534..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageMath.py
+++ /dev/null
@@ -1,263 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# a simple math add-on for the Python Imaging Library
-#
-# History:
-# 1999-02-15 fl Original PIL Plus release
-# 2005-05-05 fl Simplified and cleaned up for PIL 1.1.6
-# 2005-09-12 fl Fixed int() and float() for Python 2.4.1
-#
-# Copyright (c) 1999-2005 by Secret Labs AB
-# Copyright (c) 2005 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import builtins
-
-from . import Image, _imagingmath
-
-
-def _isconstant(v):
- return isinstance(v, (int, float))
-
-
-class _Operand:
- """Wraps an image operand, providing standard operators"""
-
- def __init__(self, im):
- self.im = im
-
- def __fixup(self, im1):
- # convert image to suitable mode
- if isinstance(im1, _Operand):
- # argument was an image.
- if im1.im.mode in ("1", "L"):
- return im1.im.convert("I")
- elif im1.im.mode in ("I", "F"):
- return im1.im
- else:
- msg = f"unsupported mode: {im1.im.mode}"
- raise ValueError(msg)
- else:
- # argument was a constant
- if _isconstant(im1) and self.im.mode in ("1", "L", "I"):
- return Image.new("I", self.im.size, im1)
- else:
- return Image.new("F", self.im.size, im1)
-
- def apply(self, op, im1, im2=None, mode=None):
- im1 = self.__fixup(im1)
- if im2 is None:
- # unary operation
- out = Image.new(mode or im1.mode, im1.size, None)
- im1.load()
- try:
- op = getattr(_imagingmath, op + "_" + im1.mode)
- except AttributeError as e:
- msg = f"bad operand type for '{op}'"
- raise TypeError(msg) from e
- _imagingmath.unop(op, out.im.id, im1.im.id)
- else:
- # binary operation
- im2 = self.__fixup(im2)
- if im1.mode != im2.mode:
- # convert both arguments to floating point
- if im1.mode != "F":
- im1 = im1.convert("F")
- if im2.mode != "F":
- im2 = im2.convert("F")
- if im1.size != im2.size:
- # crop both arguments to a common size
- size = (min(im1.size[0], im2.size[0]), min(im1.size[1], im2.size[1]))
- if im1.size != size:
- im1 = im1.crop((0, 0) + size)
- if im2.size != size:
- im2 = im2.crop((0, 0) + size)
- out = Image.new(mode or im1.mode, im1.size, None)
- im1.load()
- im2.load()
- try:
- op = getattr(_imagingmath, op + "_" + im1.mode)
- except AttributeError as e:
- msg = f"bad operand type for '{op}'"
- raise TypeError(msg) from e
- _imagingmath.binop(op, out.im.id, im1.im.id, im2.im.id)
- return _Operand(out)
-
- # unary operators
- def __bool__(self):
- # an image is "true" if it contains at least one non-zero pixel
- return self.im.getbbox() is not None
-
- def __abs__(self):
- return self.apply("abs", self)
-
- def __pos__(self):
- return self
-
- def __neg__(self):
- return self.apply("neg", self)
-
- # binary operators
- def __add__(self, other):
- return self.apply("add", self, other)
-
- def __radd__(self, other):
- return self.apply("add", other, self)
-
- def __sub__(self, other):
- return self.apply("sub", self, other)
-
- def __rsub__(self, other):
- return self.apply("sub", other, self)
-
- def __mul__(self, other):
- return self.apply("mul", self, other)
-
- def __rmul__(self, other):
- return self.apply("mul", other, self)
-
- def __truediv__(self, other):
- return self.apply("div", self, other)
-
- def __rtruediv__(self, other):
- return self.apply("div", other, self)
-
- def __mod__(self, other):
- return self.apply("mod", self, other)
-
- def __rmod__(self, other):
- return self.apply("mod", other, self)
-
- def __pow__(self, other):
- return self.apply("pow", self, other)
-
- def __rpow__(self, other):
- return self.apply("pow", other, self)
-
- # bitwise
- def __invert__(self):
- return self.apply("invert", self)
-
- def __and__(self, other):
- return self.apply("and", self, other)
-
- def __rand__(self, other):
- return self.apply("and", other, self)
-
- def __or__(self, other):
- return self.apply("or", self, other)
-
- def __ror__(self, other):
- return self.apply("or", other, self)
-
- def __xor__(self, other):
- return self.apply("xor", self, other)
-
- def __rxor__(self, other):
- return self.apply("xor", other, self)
-
- def __lshift__(self, other):
- return self.apply("lshift", self, other)
-
- def __rshift__(self, other):
- return self.apply("rshift", self, other)
-
- # logical
- def __eq__(self, other):
- return self.apply("eq", self, other)
-
- def __ne__(self, other):
- return self.apply("ne", self, other)
-
- def __lt__(self, other):
- return self.apply("lt", self, other)
-
- def __le__(self, other):
- return self.apply("le", self, other)
-
- def __gt__(self, other):
- return self.apply("gt", self, other)
-
- def __ge__(self, other):
- return self.apply("ge", self, other)
-
-
-# conversions
-def imagemath_int(self):
- return _Operand(self.im.convert("I"))
-
-
-def imagemath_float(self):
- return _Operand(self.im.convert("F"))
-
-
-# logical
-def imagemath_equal(self, other):
- return self.apply("eq", self, other, mode="I")
-
-
-def imagemath_notequal(self, other):
- return self.apply("ne", self, other, mode="I")
-
-
-def imagemath_min(self, other):
- return self.apply("min", self, other)
-
-
-def imagemath_max(self, other):
- return self.apply("max", self, other)
-
-
-def imagemath_convert(self, mode):
- return _Operand(self.im.convert(mode))
-
-
-ops = {}
-for k, v in list(globals().items()):
- if k[:10] == "imagemath_":
- ops[k[10:]] = v
-
-
-def eval(expression, _dict={}, **kw):
- """
- Evaluates an image expression.
-
- :param expression: A string containing a Python-style expression.
- :param options: Values to add to the evaluation context. You
- can either use a dictionary, or one or more keyword
- arguments.
- :return: The evaluated expression. This is usually an image object, but can
- also be an integer, a floating point value, or a pixel tuple,
- depending on the expression.
- """
-
- # build execution namespace
- args = ops.copy()
- args.update(_dict)
- args.update(kw)
- for k, v in list(args.items()):
- if hasattr(v, "im"):
- args[k] = _Operand(v)
-
-    compiled_code = compile(expression, "<string>", "eval")
-
- def scan(code):
- for const in code.co_consts:
- if type(const) == type(compiled_code):
- scan(const)
-
- for name in code.co_names:
- if name not in args and name != "abs":
- msg = f"'{name}' not allowed"
- raise ValueError(msg)
-
- scan(compiled_code)
- out = builtins.eval(expression, {"__builtins": {"abs": abs}}, args)
- try:
- return out.im
- except AttributeError:
- return out
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py
deleted file mode 100644
index 67d7fe1c020b1924c3083ea13925b317e73a8488..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/channels.py
+++ /dev/null
@@ -1,17634 +0,0 @@
-# The contents of this file are automatically written by
-# tools/generate_schema_wrapper.py. Do not modify directly.
-
-import sys
-from . import core
-import pandas as pd
-from altair.utils.schemapi import Undefined, with_property_setters
-from altair.utils import parse_shorthand
-from typing import overload, List
-
-from typing import Literal
-
-
-class FieldChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- shorthand = self._get('shorthand')
- field = self._get('field')
-
- if shorthand is not Undefined and field is not Undefined:
- raise ValueError("{} specifies both shorthand={} and field={}. "
- "".format(self.__class__.__name__, shorthand, field))
-
- if isinstance(shorthand, (tuple, list)):
- # If given a list of shorthands, then transform it to a list of classes
- kwds = self._kwds.copy()
- kwds.pop('shorthand')
- return [self.__class__(sh, **kwds).to_dict(validate=validate, ignore=ignore, context=context)
- for sh in shorthand]
-
- if shorthand is Undefined:
- parsed = {}
- elif isinstance(shorthand, str):
- parsed = parse_shorthand(shorthand, data=context.get('data', None))
- type_required = 'type' in self._kwds
- type_in_shorthand = 'type' in parsed
- type_defined_explicitly = self._get('type') is not Undefined
- if not type_required:
- # Secondary field names don't require a type argument in VegaLite 3+.
- # We still parse it out of the shorthand, but drop it here.
- parsed.pop('type', None)
- elif not (type_in_shorthand or type_defined_explicitly):
- if isinstance(context.get('data', None), pd.DataFrame):
- raise ValueError(
- 'Unable to determine data type for the field "{}";'
- " verify that the field name is not misspelled."
- " If you are referencing a field from a transform,"
- " also confirm that the data type is specified correctly.".format(shorthand)
- )
- else:
- raise ValueError("{} encoding field is specified without a type; "
- "the type cannot be automatically inferred because "
- "the data is not specified as a pandas.DataFrame."
- "".format(shorthand))
- else:
- # Shorthand is not a string; we pass the definition to field,
- # and do not do any parsing.
- parsed = {'field': shorthand}
- context["parsed_shorthand"] = parsed
-
- return super(FieldChannelMixin, self).to_dict(
- validate=validate,
- ignore=ignore,
- context=context
- )
-
-
-class ValueChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- condition = self._get('condition', Undefined)
- copy = self # don't copy unless we need to
- if condition is not Undefined:
- if isinstance(condition, core.SchemaBase):
- pass
- elif 'field' in condition and 'type' not in condition:
- kwds = parse_shorthand(condition['field'], context.get('data', None))
- copy = self.copy(deep=['condition'])
- copy['condition'].update(kwds)
- return super(ValueChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-class DatumChannelMixin:
- def to_dict(self, validate=True, ignore=(), context=None):
- context = context or {}
- datum = self._get('datum', Undefined)
- copy = self # don't copy unless we need to
- if datum is not Undefined:
- if isinstance(datum, core.SchemaBase):
- pass
- return super(DatumChannelMixin, copy).to_dict(validate=validate,
- ignore=ignore,
- context=context)
-
-
-@with_property_setters
-class Angle(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefnumber):
- """Angle schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field. or `a temporal field that gets casted as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Angle':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, domainRaw=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedyear", "binnedyearquarter", "binnedyearquartermonth", "binnedyearmonth", "binnedyearmonthdate", "binnedyearmonthdatehours", "binnedyearmonthdatehoursminutes", "binnedyearmonthdatehoursminutesseconds", "binnedyearweek", "binnedyearweekday", "binnedyearweekdayhours", "binnedyearweekdayhoursminutes", "binnedyearweekdayhoursminutesseconds", "binnedyeardayofyear"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedutcyear", "binnedutcyearquarter", "binnedutcyearquartermonth", "binnedutcyearmonth", "binnedutcyearmonthdate", "binnedutcyearmonthdatehours", "binnedutcyearmonthdatehoursminutes", "binnedutcyearmonthdatehoursminutesseconds", "binnedutcyearweek", "binnedutcyearweekday", "binnedutcyearweekdayhours", "binnedutcyearweekdayhoursminutes", "binnedutcyearweekdayhoursminutesseconds", "binnedutcyeardayofyear"], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, binned=Undefined, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Angle':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Angle':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Angle':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Angle, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
-
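# Illustrative usage sketch for the Angle field channel defined above. It assumes the
# standard top-level Altair 5 exports (alt.Chart, alt.Angle) and an invented inline
# DataFrame; the chained .scale() and .legend() calls are the property setters this
# class generates.
import altair as alt
import pandas as pd

wind = pd.DataFrame({
    "x": [1, 2, 3, 4],
    "y": [2, 4, 1, 3],
    "direction": [0, 90, 180, 270],  # degrees, encoded on the angle channel
})

chart = alt.Chart(wind).mark_point(shape="wedge", filled=True, size=300).encode(
    x="x:Q",
    y="y:Q",
    angle=alt.Angle("direction:Q")
             .scale(domain=[0, 360], range=[0, 360])
             .legend(title="Heading (deg)"),
)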
-@with_property_setters
-class AngleDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefnumber):
- """AngleDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-      ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- def bandPosition(self, _: float, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'AngleDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'AngleDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'AngleDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(AngleDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
-
-@with_property_setters
-class AngleValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefnumber):
- """AngleValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefnumberExprRef`, List(:class:`ConditionalValueDefnumberExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(float, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "angle"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'AngleValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefnumberExprRef], **kwds) -> 'AngleValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(AngleValue, self).__init__(value=value, condition=condition, **kwds)
-
-
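# Illustrative sketch of the AngleValue value channel defined above: a constant angle for
# every mark, with the .condition(test=..., value=...) setter overriding it per datum.
# The predicate expression and field names are invented for the example.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [1, 3, 2, 4]})

chart = alt.Chart(df).mark_point(shape="arrow", size=300, filled=True).encode(
    x="x:Q",
    y="y:Q",
    # 0 degrees by default, rotated 90 degrees wherever the test predicate holds.
    angle=alt.AngleValue(0).condition(test="datum.y > 2", value=90),
)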
-@with_property_setters
-class Color(FieldChannelMixin, core.FieldOrDatumDefWithConditionMarkPropFieldDefGradientstringnull):
- """Color schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend. If ``null``, the legend for the
- encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A string indicating an encoding channel name to sort by
- `__ (e.g.,
- ``"x"`` or ``"y"`` ) with an optional minus prefix for descending sort (e.g.,
- ``"-x"`` to sort by x-field, descending). This channel string is short-form of `a
- sort-by-encoding definition
- `__. For
- example, ``"sort": "-x"`` is equivalent to ``"sort": {"encoding": "x", "order":
- "descending"}``.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` and sorting by another channel is not supported for ``row`` and
- ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-      ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Color':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, aria=Undefined, clipHeight=Undefined, columnPadding=Undefined, columns=Undefined, cornerRadius=Undefined, description=Undefined, direction=Undefined, fillColor=Undefined, format=Undefined, formatType=Undefined, gradientLength=Undefined, gradientOpacity=Undefined, gradientStrokeColor=Undefined, gradientStrokeWidth=Undefined, gradientThickness=Undefined, gridAlign=Undefined, labelAlign=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelOffset=Undefined, labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined, labelSeparation=Undefined, legendX=Undefined, legendY=Undefined, offset=Undefined, orient=Undefined, padding=Undefined, rowPadding=Undefined, strokeColor=Undefined, symbolDash=Undefined, symbolDashOffset=Undefined, symbolFillColor=Undefined, symbolLimit=Undefined, symbolOffset=Undefined, symbolOpacity=Undefined, symbolSize=Undefined, symbolStrokeColor=Undefined, symbolStrokeWidth=Undefined, symbolType=Undefined, tickCount=Undefined, tickMinStep=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOpacity=Undefined, titleOrient=Undefined, titlePadding=Undefined, type=Undefined, values=Undefined, zindex=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def legend(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, align=Undefined, base=Undefined, bins=Undefined, clamp=Undefined, constant=Undefined, domain=Undefined, domainMax=Undefined, domainMid=Undefined, domainMin=Undefined, domainRaw=Undefined, exponent=Undefined, interpolate=Undefined, nice=Undefined, padding=Undefined, paddingInner=Undefined, paddingOuter=Undefined, range=Undefined, rangeMax=Undefined, rangeMin=Undefined, reverse=Undefined, round=Undefined, scheme=Undefined, type=Undefined, zero=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def scale(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["x", "y", "color", "fill", "stroke", "strokeWidth", "size", "shape", "fillOpacity", "strokeOpacity", "opacity", "text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["-x", "-y", "-color", "-fill", "-stroke", "-strokeWidth", "-size", "-shape", "-fillOpacity", "-strokeOpacity", "-opacity", "-text"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, encoding=Undefined, order=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedyear", "binnedyearquarter", "binnedyearquartermonth", "binnedyearmonth", "binnedyearmonthdate", "binnedyearmonthdatehours", "binnedyearmonthdatehoursminutes", "binnedyearmonthdatehoursminutesseconds", "binnedyearweek", "binnedyearweekday", "binnedyearweekdayhours", "binnedyearweekdayhoursminutes", "binnedyearweekdayhoursminutesseconds", "binnedyeardayofyear"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedutcyear", "binnedutcyearquarter", "binnedutcyearquartermonth", "binnedutcyearmonth", "binnedutcyearmonthdate", "binnedutcyearmonthdatehours", "binnedutcyearmonthdatehoursminutes", "binnedutcyearmonthdatehoursminutesseconds", "binnedutcyearweek", "binnedutcyearweekday", "binnedutcyearweekdayhours", "binnedutcyearweekdayhoursminutes", "binnedutcyearweekdayhoursminutesseconds", "binnedutcyeardayofyear"], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, binned=Undefined, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Color':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Color':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Color':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, legend=Undefined, scale=Undefined,
- sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Color, self).__init__(shorthand=shorthand, aggregate=aggregate, bandPosition=bandPosition,
- bin=bin, condition=condition, field=field, legend=legend,
- scale=scale, sort=sort, timeUnit=timeUnit, title=title, type=type,
- **kwds)
-
-
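# Illustrative sketch for the Color field channel defined above, exercising the generated
# .scale() and .legend() setters; the color scheme, legend options, and data are arbitrary
# choices for the example.
import altair as alt
import pandas as pd

sales = pd.DataFrame({
    "month":  ["Jan", "Feb", "Mar", "Jan", "Feb", "Mar"],
    "region": ["North", "North", "North", "South", "South", "South"],
    "amount": [10, 14, 9, 12, 8, 15],
})

chart = alt.Chart(sales).mark_bar().encode(
    x="month:N",
    y="amount:Q",
    color=alt.Color("region:N")
              .scale(scheme="tableau10")
              .legend(title="Region", orient="bottom"),
)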
-@with_property_setters
-class ColorDatum(DatumChannelMixin, core.FieldOrDatumDefWithConditionDatumDefGradientstringnull):
- """ColorDatum schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- condition : anyOf(:class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- datum : anyOf(:class:`PrimitiveValue`, :class:`DateTime`, :class:`ExprRef`, :class:`RepeatRef`)
- A constant value in data domain.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`Type`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-      ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- def bandPosition(self, _: float, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'ColorDatum':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'ColorDatum':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal", "geojson"], **kwds) -> 'ColorDatum':
- ...
-
-
- def __init__(self, datum, bandPosition=Undefined, condition=Undefined, title=Undefined,
- type=Undefined, **kwds):
- super(ColorDatum, self).__init__(datum=datum, bandPosition=bandPosition, condition=condition,
- title=title, type=type, **kwds)
-
-
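# Illustrative sketch of the ColorDatum datum channel defined above: each layer is colored
# by a constant data-domain value, so the two layers share one color scale and each gets a
# legend entry. Assumes ColorDatum is re-exported at the top level like the other channels.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "observed": [2.0, 3.0, 2.5], "predicted": [2.2, 2.8, 2.4]})

base = alt.Chart(df).encode(x="x:Q")
observed = base.mark_line().encode(y="observed:Q", color=alt.ColorDatum("observed"))
predicted = base.mark_line(strokeDash=[4, 4]).encode(y="predicted:Q", color=alt.ColorDatum("predicted"))

chart = observed + predicted  # layer the two line charts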
-@with_property_setters
-class ColorValue(ValueChannelMixin, core.ValueDefWithConditionMarkPropFieldOrDatumDefGradientstringnull):
- """ColorValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefGradientstringnullExprRef`, List(:class:`ConditionalValueDefGradientstringnullExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(:class:`Gradient`, string, None, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "color"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'ColorValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefGradientstringnullExprRef], **kwds) -> 'ColorValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(ColorValue, self).__init__(value=value, condition=condition, **kwds)
-
-
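# Illustrative sketch of how a ColorValue with a condition typically arises: alt.condition()
# pairs a selection predicate with a field definition and a constant fallback value. Data
# and field names are invented for the example.
import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2], "cat": ["a", "a", "b", "b"]})
brush = alt.selection_interval()

chart = alt.Chart(df).mark_point(size=100, filled=True).encode(
    x="x:Q",
    y="y:Q",
    # Points inside the brush keep their categorical color; the rest fall back to light gray.
    color=alt.condition(brush, alt.Color("cat:N"), alt.value("lightgray")),
).add_params(brush)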
-@with_property_setters
-class Column(FieldChannelMixin, core.RowColumnEncodingFieldDef):
- """Column schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- align : :class:`LayoutAlign`
- The alignment to apply to row/column facet's subplot. The supported string values
- are ``"all"``, ``"each"``, and ``"none"``.
-
-
- * For ``"none"``, a flow layout will be used, in which adjacent subviews are simply
- placed one after the other.
- * For ``"each"``, subviews will be aligned into a clean grid structure, but each row
- or column may be of variable size.
- * For ``"all"``, subviews will be aligned and each row or column will be sized
- identically based on the maximum observed size. String values for this property
- will be applied to both grid rows and columns.
-
- **Default value:** ``"all"``.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- center : boolean
- Boolean flag indicating if facet's subviews should be centered relative to their
- respective rows or columns.
-
- **Default value:** ``false``
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- header : anyOf(:class:`Header`, None)
- An object defining properties of a facet's header.
- sort : anyOf(:class:`SortArray`, :class:`SortOrder`, :class:`EncodingSortField`, None)
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects
- `__. In addition, for time
- units ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
- spacing : float
- The spacing in pixels between facet's sub-views.
-
- **Default value** : Depends on ``"spacing"`` property of `the view composition
- configuration `__ (
- ``20`` by default)
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
-        field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
-      ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``, ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "column"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Column':
- ...
-
- def align(self, _: Literal["all", "each", "none"], **kwds) -> 'Column':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Column':
- ...
-
- def center(self, _: bool, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, format=Undefined, formatType=Undefined, labelAlign=Undefined, labelAnchor=Undefined, labelAngle=Undefined, labelBaseline=Undefined, labelColor=Undefined, labelExpr=Undefined, labelFont=Undefined, labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined, labelLineHeight=Undefined, labelOrient=Undefined, labelPadding=Undefined, labels=Undefined, orient=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined, titleLineHeight=Undefined, titleOrient=Undefined, titlePadding=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def header(self, _: None, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[float], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[bool], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: List[core.DateTime], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: Literal["ascending", "descending"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, field=Undefined, op=Undefined, order=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def sort(self, _: None, **kwds) -> 'Column':
- ...
-
- def spacing(self, _: float, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedyear", "binnedyearquarter", "binnedyearquartermonth", "binnedyearmonth", "binnedyearmonthdate", "binnedyearmonthdatehours", "binnedyearmonthdatehoursminutes", "binnedyearmonthdatehoursminutesseconds", "binnedyearweek", "binnedyearweekday", "binnedyearweekdayhours", "binnedyearweekdayhoursminutes", "binnedyearweekdayhoursminutesseconds", "binnedyeardayofyear"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedutcyear", "binnedutcyearquarter", "binnedutcyearquartermonth", "binnedutcyearmonth", "binnedutcyearmonthdate", "binnedutcyearmonthdatehours", "binnedutcyearmonthdatehoursminutes", "binnedutcyearmonthdatehoursminutesseconds", "binnedutcyearweek", "binnedutcyearweekday", "binnedutcyearweekdayhours", "binnedutcyearweekdayhoursminutes", "binnedutcyearweekdayhoursminutesseconds", "binnedutcyeardayofyear"], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, binned=Undefined, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Column':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Column':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Column':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, align=Undefined,
- bandPosition=Undefined, bin=Undefined, center=Undefined, field=Undefined,
- header=Undefined, sort=Undefined, spacing=Undefined, timeUnit=Undefined,
- title=Undefined, type=Undefined, **kwds):
- super(Column, self).__init__(shorthand=shorthand, aggregate=aggregate, align=align,
- bandPosition=bandPosition, bin=bin, center=center, field=field,
- header=header, sort=sort, spacing=spacing, timeUnit=timeUnit,
- title=title, type=type, **kwds)
-
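The ``Column`` channel defined above is normally reached through the public ``alt.Column`` shorthand rather than instantiated directly. Below is a minimal usage sketch, assuming the standard Altair 5 API (``import altair as alt``) and pandas for a toy data frame; ``.title()`` and ``.timeUnit()`` are two of the generated property setters shown above, and this chart construction is illustrative, not the library's only pattern.

import altair as alt
import pandas as pd

# Toy data: three groups observed over 30 days (illustrative values only).
days = pd.date_range("2023-01-01", periods=30, freq="D")
df = pd.DataFrame({
    "day": list(days) * 3,
    "value": list(range(90)),
    "group": ["a"] * 30 + ["b"] * 30 + ["c"] * 30,
})

# Facet the chart into one column per group; ``.title`` chains onto the
# Column channel exactly as the overloads above declare.
chart = alt.Chart(df).mark_line().encode(
    x="day:T",
    y="value:Q",
    column=alt.Column("group:N").title("Group"),
)

A temporal facet would instead chain the ``timeUnit`` setter, e.g. ``alt.Column("day:T").timeUnit("yearmonth")``.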
-
-@with_property_setters
-class Description(FieldChannelMixin, core.StringFieldDefWithCondition):
- """Description schema wrapper
-
- Mapping(required=[shorthand])
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, string, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- condition : anyOf(:class:`ConditionalValueDefstringExprRef`, List(:class:`ConditionalValueDefstringExprRef`))
- One or more value definition(s) with `a parameter or a test predicate
- `__.
-
- **Note:** A field definition's ``condition`` property can only contain `conditional
- value definitions `__
- since Vega-Lite only allows at most one encoded field per encoding channel.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- format : anyOf(string, :class:`Dict`)
- When used with the default ``"number"`` and ``"time"`` format type, the text
- formatting pattern for labels of guides (axes, legends, headers) and text marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- When used with a `custom formatType
- `__, this
- value will be passed as ``format`` alongside ``datum.value`` to the registered
- function.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
- `__ config for time
- format.
- formatType : string
- The format type for labels. One of ``"number"``, ``"time"``, or a `registered custom
- format type
- `__.
-
- **Default value:**
-
-
- * ``"time"`` for temporal fields and ordinal and nominal fields with ``timeUnit``.
- * ``"number"`` for quantitative fields as well as ordinal and nominal fields without
- ``timeUnit``.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes**:
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When used with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When used with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (default, for using a temporal scale) or `"ordinal"
- (for using an ordinal scale)
- `__.
- * When used with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat"}``. The ``"type"`` of the aggregate output is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they must have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "description"
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, _: Literal["average", "count", "distinct", "max", "mean", "median", "min", "missing", "product", "q1", "q3", "ci0", "ci1", "stderr", "stdev", "stdevp", "sum", "valid", "values", "variance", "variancep"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmax=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def aggregate(self, argmin=Undefined, **kwds) -> 'Description':
- ...
-
- def bandPosition(self, _: float, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: bool, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined, extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined, steps=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def bin(self, _: None, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefstringExprRef], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def field(self, repeat=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def format(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def format(self, _: dict, **kwds) -> 'Description':
- ...
-
- def formatType(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["year", "quarter", "month", "week", "day", "dayofyear", "date", "hours", "minutes", "seconds", "milliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyear", "utcquarter", "utcmonth", "utcweek", "utcday", "utcdayofyear", "utcdate", "utchours", "utcminutes", "utcseconds", "utcmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["yearquarter", "yearquartermonth", "yearmonth", "yearmonthdate", "yearmonthdatehours", "yearmonthdatehoursminutes", "yearmonthdatehoursminutesseconds", "yearweek", "yearweekday", "yearweekdayhours", "yearweekdayhoursminutes", "yearweekdayhoursminutesseconds", "yeardayofyear", "quartermonth", "monthdate", "monthdatehours", "monthdatehoursminutes", "monthdatehoursminutesseconds", "weekday", "weeksdayhours", "weekdayhoursminutes", "weekdayhoursminutesseconds", "dayhours", "dayhoursminutes", "dayhoursminutesseconds", "hoursminutes", "hoursminutesseconds", "minutesseconds", "secondsmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["utcyearquarter", "utcyearquartermonth", "utcyearmonth", "utcyearmonthdate", "utcyearmonthdatehours", "utcyearmonthdatehoursminutes", "utcyearmonthdatehoursminutesseconds", "utcyearweek", "utcyearweekday", "utcyearweekdayhours", "utcyearweekdayhoursminutes", "utcyearweekdayhoursminutesseconds", "utcyeardayofyear", "utcquartermonth", "utcmonthdate", "utcmonthdatehours", "utcmonthdatehoursminutes", "utcmonthdatehoursminutesseconds", "utcweekday", "utcweeksdayhours", "utcweekdayhoursminutes", "utcweekdayhoursminutesseconds", "utcdayhours", "utcdayhoursminutes", "utcdayhoursminutesseconds", "utchoursminutes", "utchoursminutesseconds", "utcminutesseconds", "utcsecondsmilliseconds"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedyear", "binnedyearquarter", "binnedyearquartermonth", "binnedyearmonth", "binnedyearmonthdate", "binnedyearmonthdatehours", "binnedyearmonthdatehoursminutes", "binnedyearmonthdatehoursminutesseconds", "binnedyearweek", "binnedyearweekday", "binnedyearweekdayhours", "binnedyearweekdayhoursminutes", "binnedyearweekdayhoursminutesseconds", "binnedyeardayofyear"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, _: Literal["binnedutcyear", "binnedutcyearquarter", "binnedutcyearquartermonth", "binnedutcyearmonth", "binnedutcyearmonthdate", "binnedutcyearmonthdatehours", "binnedutcyearmonthdatehoursminutes", "binnedutcyearmonthdatehoursminutesseconds", "binnedutcyearweek", "binnedutcyearweekday", "binnedutcyearweekdayhours", "binnedutcyearweekdayhoursminutes", "binnedutcyearweekdayhoursminutesseconds", "binnedutcyeardayofyear"], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def timeUnit(self, binned=Undefined, maxbins=Undefined, step=Undefined, unit=Undefined, utc=Undefined, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: str, **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: List[str], **kwds) -> 'Description':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def title(self, _: None, **kwds) -> 'Description':
- ...
-
- def type(self, _: Literal["quantitative", "ordinal", "temporal", "nominal"], **kwds) -> 'Description':
- ...
-
-
- def __init__(self, shorthand=Undefined, aggregate=Undefined, bandPosition=Undefined, bin=Undefined,
- condition=Undefined, field=Undefined, format=Undefined, formatType=Undefined,
- timeUnit=Undefined, title=Undefined, type=Undefined, **kwds):
- super(Description, self).__init__(shorthand=shorthand, aggregate=aggregate,
- bandPosition=bandPosition, bin=bin, condition=condition,
- field=field, format=format, formatType=formatType,
- timeUnit=timeUnit, title=title, type=type, **kwds)
-
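``Description`` maps a data field to a per-mark text description, which Vega-Lite uses for ARIA accessibility text. A hedged sketch, assuming the channel is exposed as ``alt.Description`` (as in released Altair 5) and using the ``.format()`` setter generated above to apply a D3 number format:

import altair as alt
import pandas as pd

df = pd.DataFrame({"city": ["Seattle", "Tokyo", "Oslo"],
                   "temp_c": [11.2, 16.5, 6.3]})

# Each bar carries a description derived from ``temp_c``, formatted to one
# decimal place via the ``format`` property setter defined on Description.
chart = alt.Chart(df).mark_bar().encode(
    x="city:N",
    y="temp_c:Q",
    description=alt.Description("temp_c:Q").format(".1f"),
)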
-
-@with_property_setters
-class DescriptionValue(ValueChannelMixin, core.StringValueDefWithCondition):
- """DescriptionValue schema wrapper
-
- Mapping(required=[])
-
- Parameters
- ----------
-
- condition : anyOf(:class:`ConditionalMarkPropFieldOrDatumDef`, :class:`ConditionalValueDefstringnullExprRef`, List(:class:`ConditionalValueDefstringnullExprRef`))
- A field definition or one or more value definition(s) with a parameter predicate.
- value : anyOf(string, None, :class:`ExprRef`)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` / `gradient
- definition `__ for color,
- values between ``0`` to ``1`` for opacity).
- """
- _class_is_valid_at_instantiation = False
- _encoding_name = "description"
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, test=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, legend=Undefined, scale=Undefined, test=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, aggregate=Undefined, bandPosition=Undefined, bin=Undefined, empty=Undefined, field=Undefined, legend=Undefined, param=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, bandPosition=Undefined, datum=Undefined, empty=Undefined, legend=Undefined, param=Undefined, scale=Undefined, title=Undefined, type=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, test=Undefined, value=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, empty=Undefined, param=Undefined, value=Undefined, **kwds) -> 'DescriptionValue':
- ...
-
- @overload # type: ignore[no-overload-impl]
- def condition(self, _: List[core.ConditionalValueDefstringnullExprRef], **kwds) -> 'DescriptionValue':
- ...
-
-
- def __init__(self, value, condition=Undefined, **kwds):
- super(DescriptionValue, self).__init__(value=value, condition=condition, **kwds)
-
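``DescriptionValue`` is the constant-value counterpart of ``Description``; in practice it is usually produced indirectly, because ``alt.condition`` compiles a parameter predicate plus value branches into the value-with-condition structure this class wraps. A minimal sketch under that assumption, using the Altair 5 parameter API (``selection_point`` / ``add_params``):

import altair as alt
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": [3, 1, 2]})

# A point selection triggered by hovering over a mark.
hover = alt.selection_point(on="mouseover")

# The hovered mark gets a non-empty description; all other marks get "".
chart = alt.Chart(df).mark_point().add_params(hover).encode(
    x="x:Q",
    y="y:Q",
    description=alt.condition(hover, alt.value("hovered point"), alt.value("")),
)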
-
-@with_property_setters
-class Detail(FieldChannelMixin, core.FieldDefWithoutScale):
- """Detail schema wrapper
-
- Mapping(required=[shorthand])
- Definition object for a data field, its type and transformation of an encoding channel.
-
- Parameters
- ----------
-
- shorthand : string
- shorthand for field, aggregate, and type
- aggregate : :class:`Aggregate`
- Aggregation function for the field (e.g., ``"mean"``, ``"sum"``, ``"median"``,
- ``"min"``, ``"max"``, ``"count"`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bandPosition : float
- Relative position on a band of a stacked, binned, time unit, or band scale. For
- example, the marks will be positioned at the beginning of the band if set to ``0``,
- and at the middle of the band if set to ``0.5``.
- bin : anyOf(boolean, :class:`BinParams`, string, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating
- that the data for ``x`` or ``y`` channel are binned before they are imported into
- Vega-Lite ( ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similarly to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:** 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access
- nested objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ). If
- field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ). See more details
- about escaping in the `field documentation
- `__. 2) ``field`` is not required
- if ``aggregate`` is ``count``.
- timeUnit : anyOf(:class:`TimeUnit`, :class:`BinnedTimeUnit`, :class:`TimeUnitParams`)
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(:class:`Text`, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes**:
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- type : :class:`StandardType`
- The type of measurement ( ``"quantitative"``, ``"temporal"``, ``"ordinal"``, or
- ``"nominal"`` ) for the encoded field or constant value ( ``datum`` ). It can also
- be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- Vega-Lite automatically infers data types in many cases as discussed below. However,
- type is required for a field if: (1) the field is not nominal and the field encoding
- has no specified ``aggregate`` (except ``argmin`` and ``argmax`` ), ``bin``, scale
- type, custom ``sort`` order, nor ``timeUnit`` or (2) if you wish to use an ordinal
- scale for a field with ``bin`` or ``timeUnit``.
-
- **Default value:**
-
- 1) For a data ``field``, ``"nominal"`` is the default data type unless the field
- encoding has ``aggregate``, ``channel``, ``bin``, scale type, ``sort``, or
- ``timeUnit`` that satisfies the following criteria:
-
-
- * ``"quantitative"`` is the default type if (1) the encoded field contains ``bin``
- or ``aggregate`` except ``"argmin"`` and ``"argmax"``, (2) the encoding channel is
- ``latitude`` or ``longitude`` channel or (3) if the specified scale type is `a
- quantitative scale `__.
- * ``"temporal"`` is the default type if (1) the encoded field contains ``timeUnit``
- or (2) the specified scale type is a time or utc scale
- * ``"ordinal"`` is the default type if (1) the encoded field contains a `custom sort
- order
- `__,
- (2) the specified scale type is an ordinal/point/band scale, or (3) the encoding
- channel is ``order``.
-
- 2) For a constant value in data domain ( ``datum`` ):
-
-
- * ``"quantitative"`` if the datum is a number
- * ``"nominal"`` if the datum is a string
- * ``"temporal"`` if the datum is `a date time object
- `__
-
- **Note:**
-
-
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types (number, string, etc.). The same primitive data type can have different
- types of measurement. For example, numeric data can represent quantitative,
- ordinal, or nominal data.
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * When used with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
-