diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download free serato skin for virtual dj The ultimate guide for beginners.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download free serato skin for virtual dj The ultimate guide for beginners.md
deleted file mode 100644
index 542e5231affed6d4047e62121b4cc0dafc3befba..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download free serato skin for virtual dj The ultimate guide for beginners.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
How to Download Free Serato Skin for Virtual DJ
-
If you are a fan of Virtual DJ, one of the most popular and versatile DJ programs on the market, you might be interested in changing its look and feel with a different skin. A skin is a graphical interface that modifies the appearance and layout of Virtual DJ, giving it a new style and functionality.
One of the most sought-after skins for Virtual DJ is the Serato Skin, which mimics the design and features of Serato DJ Pro, another leading DJ program that is widely used by professional DJs. The Serato Skin for Virtual DJ gives you the best of both worlds, combining the power and flexibility of Virtual DJ with the sleek and intuitive interface of Serato DJ Pro.
-
In this article, we will show you how to download free Serato Skin for Virtual DJ, how to install it on your computer, and how to use it to enhance your mixing and scratching skills. By following these simple steps, you will be able to transform your Virtual DJ into a Serato-like experience that will impress your audience and yourself.
-
How to Download Serato Skin for Virtual DJ
-
The first step to get Serato Skin for Virtual DJ is to find a reliable source where you can download it safely and legally. There are many websites that offer free downloads of skins for Virtual DJ, but not all of them are trustworthy or updated. Some of them may contain viruses, malware, or broken links that can harm your computer or compromise your privacy.
-
One of the websites that we recommend for downloading free Serato Skin for Virtual DJ is Sonatty, a blog that provides useful information and resources for DJs. Sonatty has several versions of Serato Skin for Virtual DJ available, including Serato DJ Pro 2.5, Serato DJ Pro 2.0, and more. You can also find other skins, plugins, effects, samples, and tutorials on Sonatty that can help you improve your performance as a DJ.
-
Step 1: Find a reliable source for downloading Serato Skin for Virtual DJ
-
To download free Serato Skin for Virtual DJ from Sonatty, you need to visit their website and navigate to the Plugins section. There you will see a list of posts that contain links to different skins, plugins, and effects for Virtual DJ. Look for the post that matches the version of Serato Skin for Virtual DJ that you want to download.
-
Step 2: Choose the version of Serato Skin for Virtual DJ that suits your needs
-
Depending on your preference and compatibility, you can choose between different versions of Serato Skin for Virtual DJ that have different features and requirements. For example, if you have Virtual DJ 2021, you can download Serato DJ Pro 2.5, which is the latest version of Serato Skin that has a premium edition with more options and functions. If you have Virtual DJ 2018 or 2020, you can download Serato DJ Pro 2.0, which is an older version of Serato Skin that still works well with these versions of Virtual DJ. You can also find other versions of Serato Skin on Sonatty or other websites if you have different versions of Virtual DJ.
-
Step 3: Download and extract the Serato Skin for Virtual DJ file
-
Once you have chosen the version of Serato Skin for Virtual DJ that you want to download, click on the [Download] button on the post that contains it. This will take you to another page where you will see a link to download the file from Google Drive. Click on the link and then click on Download anyway to start downloading the file.
-
The file will be in a compressed format (.zip or .rar) that you need to extract using a program like WinRAR or WinZip. To extract the file, right-click on it and select Extract here or Extract to.... This will create a folder with the same name as the file that contains the skin file (.zip) and some instructions (.txt).
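If you would rather script the extraction than use WinRAR or WinZip, the following is a minimal Python sketch using the standard `zipfile` module. The archive name and Downloads location are assumptions for illustration, and a `.rar` archive would still need a dedicated extractor:

```python
import zipfile
from pathlib import Path

# Assumed locations for illustration only -- adjust them to where your download actually lives.
archive = Path.home() / "Downloads" / "Seratovdj2020.zip"
destination = archive.with_suffix("")  # e.g. ~/Downloads/Seratovdj2020

with zipfile.ZipFile(archive) as zf:
    zf.extractall(destination)  # unpack everything, including the skin .zip and the .txt instructions
    print(f"Extracted {len(zf.namelist())} files to {destination}")
```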
-
-
How to Install Serato Skin for Virtual DJ
-
The next step to get Serato Skin for Virtual DJ is to install it on your computer so that you can use it with your Virtual DJ software. This is a very easy process that only requires copying and pasting one file into one folder.
-
Step 1: Locate the Skin folder in your Virtual DJ directory
-
To install Serato Skin for Virtual DJ, you need to find where your Skin folder is located in your Virtual DJ directory. The default location of this folder is usually C:\Users\YourName\Documents\VirtualDJ\Skins, but it may vary depending on how you installed your software or what version you have.
-
To find your skin folder easily, open your VirtualDJ software and go to Settings > Interface > Skins > Open Folder. This will open your skin folder in a new window where you can see all the skins that you have installed or available.
-
Step 2: Copy and paste the Serato Skin for VirtualDJ file into the skin folder
-
To install Serato Skin for VirtualDJ, you need to copy and paste one file into your skin folder. The file is called Seratovdj.zip, which is located inside the folder that you extracted from Sonatty's website (e.g., Seratovdj2020.zip). To copy this file, right-click on it and select Copy. Then go back to your skin folder window and right-click on an empty space and select Paste. This will add this file into your skin folder along with other skins.
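If you prefer to do the copy from a script, here is a minimal Python sketch of the same step. The `Seratovdj.zip` name and the default `Documents\VirtualDJ\Skins` location are taken from the instructions above; if your installation uses a different path, check it via Settings > Interface > Skins > Open Folder first:

```python
import shutil
from pathlib import Path

# Assumed paths based on the defaults described above -- adjust them to your own setup.
skin_file = Path.home() / "Downloads" / "Seratovdj2020" / "Seratovdj.zip"
skins_folder = Path.home() / "Documents" / "VirtualDJ" / "Skins"

skins_folder.mkdir(parents=True, exist_ok=True)  # create the Skins folder if it does not exist yet
shutil.copy2(skin_file, skins_folder)            # copy the skin next to the other installed skins
print(f"Copied {skin_file.name} to {skins_folder}")
```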
-
Step 3: Open your Virtual DJ software and select the Serato skin from the interface settings
-
To use the Serato skin with Virtual DJ, open Virtual DJ and go to Settings > Interface > Skins > SeratoVDJ. This will change the look of Virtual DJ to resemble Serato DJ Pro. You can also switch between different skins at any time by repeating this process.
-
How to Use the Serato Skin with Virtual DJ
-
The final step is simply to enjoy your new setup and sharpen your mixing and scratching skills. The Serato skin gives you the best of both worlds, combining the power and flexibility of Virtual DJ with the sleek and intuitive interface of Serato DJ Pro. You can explore its features and functions and customize them to your preferences. Here are some tips and tricks for using the Serato skin with Virtual DJ:
-
Step 1: Explore the features and functions of the Serato skin for Virtual DJ
-
The Serato skin for Virtual DJ has many features and functions that mimic the design of Serato DJ Pro. Some of the main ones are:
-
-
Waveforms: The Serato skin displays the waveforms of the tracks that you are playing or loading in different colors and shapes. You can zoom in or out of the waveforms, adjust their brightness and contrast, and sync them with the beatgrid. You can also see the cue points, loops, and effects on the waveforms.
-
Decks: The Serato skin has two or four decks that you can use to mix and scratch your tracks. You can switch between the decks by clicking on the deck number or using a keyboard shortcut. You can also see the track information, BPM, pitch, time, key, and mode on each deck.
-
Mixer: The Serato skin has a mixer that allows you to control the volume, gain, EQ, filter, crossfader, and headphone cue of each deck. You can also use the mixer to apply effects, samples, and loops to your tracks.
-
Library: The Serato skin has a library that lets you browse and load your tracks from your computer or external devices. You can search for tracks by name, artist, genre, BPM, key, or color, and create and manage playlists, crates, smart crates, and history.
-
Effects: The Serato skin has a variety of effects that you can use to spice up your mixes. You can choose from echo, flanger, phaser, reverb, delay, filter, gater, slicer, and more. You can also adjust the parameters of each effect and apply them to individual decks or to the master output.
-
Samples: The Serato skin has a sample player that lets you trigger sounds from your computer or external devices. You can load up to 32 samples in 8 banks and assign them to different pads. You can also adjust the volume, pitch, loop mode, and sync mode of each sample.
-
Loops: The Serato skin has a loop function that lets you create and manipulate loops on your tracks. You can set the loop length manually or automatically using beatjump or snap mode. You can also save and recall loops using hot cues or memory cues.
-
Cues: The Serato skin has a cue function that lets you mark and jump to specific points on your tracks. You can set up to 8 hot cues per deck and trigger them using pads or keyboard shortcuts. You can also set memory cues that are saved with your tracks and visible on the waveforms.
-
-
Step 2: Customize the Serato skin for Virtual DJ according to your preferences
-
The Serato skin for Virtual DJ is highly customizable and allows you to change its appearance and behavior according to your preferences. You can access the customization options by clicking on the Settings button in the top right corner of the skin. Some of the customization options are:
-
-
Skin Layout: You can choose between different layouts for the skin, such as 2 Decks Horizontal Waveform (default), 2 Decks Vertical Waveform, 4 Decks Horizontal Waveform, 4 Decks Vertical Waveform, etc.
-
Skin Color: You can choose between different colors for the skin, such as Blue (default), Red, Green, Purple, etc.
-
Skin Mode: You can choose between different modes for the skin, such as Performance (default), Library Only (for browsing tracks), Video (for mixing videos), etc.
-
Waveform Color: You can choose between different colors for the waveforms, such as RGB (default), Mono (white), Inverted (black), etc.
-
Waveform Shape: You can choose between different shapes for the waveforms, such as Filled (default), Outline (transparent), Dots (dots), etc.
-
Waveform Zoom: You can adjust the zoom level of the waveforms using a slider or a keyboard shortcut.
-
Brightness/Contrast: You can adjust the brightness and contrast of the waveforms using sliders or keyboard shortcuts.
-
Crossfader Curve: You can adjust the curve of the crossfader using a slider or a keyboard shortcut.
-
Pitch Range: You can adjust the pitch range of each deck using a slider or a keyboard shortcut.
-
Pitch Lock: You can lock or unlock the pitch of each deck using a button or a keyboard shortcut.
-
Key Lock: You can lock or unlock the key of each deck using a button or a keyboard shortcut.
-
Sync Mode: You can choose between different sync modes for each deck using a button or a keyboard shortcut.
-
Quantize Mode: You can enable or disable quantize mode for each deck using a button or a keyboard shortcut.
-
Slip Mode: You can enable or disable slip mode for each deck using a button or a keyboard shortcut.
-
Vinyl Mode: You can enable or disable vinyl mode for each deck using a button or a keyboard shortcut.
-
MIDI Mapping: You can map any function of the Serato skin to any MIDI controller using a button or a keyboard shortcut.
-
Keyboard Mapping: You can map any function of the Serato skin to any key on your keyboard using a button or a keyboard shortcut.
-
Skin Options: You can enable or disable various options for the Serato skin, such as Show/Hide Browser Panel, Show/Hide Mixer Panel, Show/Hide Effects Panel, Show/Hide Samples Panel, Show/Hide Loops Panel, Show/Hide Cues Panel, etc.
-
-
Step 3: Enjoy mixing and scratching with the Serato skin for Virtual DJ
-
The last step is simply to enjoy mixing and scratching with it. The Serato skin for Virtual DJ gives you all the tools and features that you need to create amazing mixes and scratches that will impress your audience and yourself. Whether you are a beginner or an expert DJ, the Serato skin will help you unleash your creativity and have fun with your music.
-
Conclusion
-
In this article, we showed you how to download the free Serato skin for Virtual DJ, how to install it on your computer, and how to use it to enhance your mixing and scratching skills. By following these simple steps, you will be able to transform your Virtual DJ into a Serato-like experience that will impress your audience and yourself.
-
Serato Skin for Virtual DJ is one of the most popular and versatile skins for Virtual DJ, mimicking the design and features of Serato DJ Pro. It gives you the best of both worlds, combining the power and flexibility of Virtual DJ with the sleek and intuitive interface of Serato DJ Pro. You can explore its features and functions, customize it according to your preferences, and enjoy mixing and scratching with it.
-
If you are looking for a new way to spice up your Virtual DJ software, we highly recommend trying the Serato Skin for Virtual DJ. You will not regret it. It is free, easy, and fun. What are you waiting for? Download the Serato Skin for Virtual DJ today and start mixing like a pro.
-
Frequently Asked Questions
-
-
Q: Where can I download free Serato Skin for Virtual DJ?
-
A: One of the websites that we recommend for downloading free Serato Skin for Virtual DJ is Sonatty, a blog that provides useful information and resources for DJs. Sonatty has several versions of Serato Skin for Virtual DJ available, including Serato DJ Pro 2.5, Serato DJ Pro 2.0, and more. You can also find other skins, plugins, effects, samples, and tutorials on Sonatty that can help you improve your performance as a DJ.
-
Q: How do I install Serato Skin for Virtual DJ?
-
A: To install Serato Skin for Virtual DJ, you need to copy the Seratovdj.zip file from the folder that you extracted from Sonatty's website (e.g., Seratovdj2020.zip) into your Skin folder. To copy this file, right-click on it and select Copy. Then go to your Skin folder window and right-click on an empty space and select Paste. This will add this file into your Skin folder along with other skins.
-
Q: How do I use Serato Skin for Virtual DJ?
-
A: To use Serato Skin for Virtual DJ, you need to open your Virtual DJ software and select the Serato Skin from the interface settings. You can also switch between different skins anytime by repeating this process. Once loaded, you can explore its features and functions, customize it according to your preferences, and enjoy mixing and scratching with it.
-
Q: What are the benefits of using Serato Skin for Virtual DJ?
-
A: The benefits of using Serato Skin for Virtual DJ are:
-
-
It gives you a new look and feel for your Virtual DJ software that mimics the design and features of Serato DJ Pro.
-
It combines the power and flexibility of Virtual DJ with the sleek and intuitive interface of Serato DJ Pro.
-
It allows you to access and use many features and functions that are available in Serato DJ Pro, such as waveforms, decks, mixer, library, effects, samples, loops, cues, etc.
-
It is highly customizable and allows you to change its appearance and behavior according to your preferences.
-
It is free, easy, and fun to use.
-
-
Q: What are the requirements for installing Serato Skin for Virtual DJ?
-
A: The requirements for installing Serato Skin for Virtual DJ are:
-
-
You need to have a computer that meets the minimum system requirements for running Virtual DJ software.
-
You need to have a version of Virtual DJ software that is compatible with the version of Serato Skin for Virtual DJ that you want to download.
-
You need to have a program that can extract compressed files (e.g., WinRAR or WinZip).
-
You need to have an internet connection that can download files from Google Drive or other websites.
-
-
Q: Is Serato Skin for Virtual DJ legal?
-
A: Serato Skin for Virtual DJ is legal as long as you download it from a reliable source that has permission from the original creators of Serato DJ Pro. You should not download or use any skin that infringes the intellectual property rights of Serato or any other company. You should also not use any skin that contains viruses, malware, or broken links that can harm your computer or compromise your privacy.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel 2019 Crashing The Causes and The Solutions.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel 2019 Crashing The Causes and The Solutions.md
deleted file mode 100644
index d7c54258daa8ead8a9928f2655be3ea74ef32e53..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Excel 2019 Crashing The Causes and The Solutions.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
How to Fix Excel 2019 Crashing Issues
-
Excel 2019 is the latest version of the popular spreadsheet software from Microsoft. It offers many new features and improvements, such as new functions, charts, data types, and more. However, some users have reported that Excel 2019 crashes frequently or unexpectedly on their computers. This can be very frustrating and annoying, especially if you are working on important or complex documents.
Fortunately, there are some possible solutions that can help you fix Excel 2019 crashing issues and prevent them from happening again. In this article, we will show you some of the most common causes of Excel 2019 crashing and how to troubleshoot them. We will also give you some tips on how to optimize your Excel 2019 performance and avoid any errors.
-
What Causes Excel 2019 Crashing?
-
Excel 2019 crashing can be caused by various factors, such as:
-
-
Corrupted or incompatible add-ins. Add-ins are extensions that enhance the functionality of Excel. However, some add-ins might not work well with Excel 2019 or might be corrupted or outdated. This can cause Excel 2019 to crash or freeze when you try to use them.
-
Corrupted or damaged files. If your Excel files are corrupted or damaged due to virus infection, power outage, improper shutdown, or other reasons, they might cause Excel 2019 to crash when you try to open or save them.
-
Insufficient memory or disk space. If your computer does not have enough memory or disk space to run Excel 2019 smoothly, it might cause Excel 2019 to crash or slow down.
-
Outdated or incompatible drivers. Drivers are software that enable your computer to communicate with your hardware devices, such as printer, scanner, mouse, keyboard, etc. If your drivers are outdated or incompatible with Excel 2019, they might cause Excel 2019 to crash or malfunction.
-
Software conflicts. If you have other software running in the background that interfere with Excel 2019, such as antivirus, firewall, VPN, etc., they might cause Excel 2019 to crash or behave erratically.
-
-
To fix Excel 2019 crashing issues, you need to identify and resolve the underlying issue that is causing them. You can do this manually by following the steps in the next section. However, this can be time-consuming and complicated, especially if you are not familiar with the technical aspects of your computer.
-
That's why using a professional tool like Excel Repair Toolbox is a better option. This tool can automatically scan your computer and detect the cause of Excel 2019 crashing issues. It can also repair any errors and optimize your Excel 2019 performance.
-
-
How to Troubleshoot Excel 2019 Crashing Issues?
-
If you want to troubleshoot Excel 2019 crashing issues manually, you can follow these steps:
-
-
Disable or remove any add-ins that might be causing problems. To do this, open Excel 2019 and go to File > Options > Add-Ins. In the Manage drop-down list, select COM Add-ins and click Go. Uncheck any add-ins that you don't need or use and click OK. Restart Excel 2019 and see if the problem persists. If it does, repeat the same steps for other types of add-ins, such as Excel Add-ins, Analysis ToolPak, etc.
-
Repair or recover any corrupted or damaged files. To do this, open Excel 2019 and go to File > Open. Locate the file that you want to repair and click on the arrow next to the Open button. Select Open and Repair from the menu and choose either Repair or Extract Data depending on the severity of the corruption. Follow the instructions on the screen to complete the process.
-
Free up some memory or disk space on your computer. To do this, close any unnecessary programs or tabs that are running in the background. You can also use a disk cleanup tool like CCleaner to delete any temporary files, cache files, cookies, etc. that might be taking up space on your hard drive. A quick way to check how much free space you actually have is shown in the short sketch after this list.
-
Update or reinstall any drivers that might be outdated or incompatible with Excel 2019. To do this, go to Device Manager on your computer and look for any devices that are marked with a warning icon, then update or reinstall their drivers from the device manufacturer's website.
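As a quick companion to the third step above, the short Python sketch below reports how much free space is left on a drive, so you can confirm whether disk space is really the problem. The `C:` drive letter is an assumption; replace it with the drive that holds your Excel files:

```python
import shutil

# Report total and free space on the drive that holds your Excel files (C: assumed here).
total, used, free = shutil.disk_usage("C:\\")
print(f"Free space: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")
```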
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Version Of Corel Draw LINK.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Version Of Corel Draw LINK.md
deleted file mode 100644
index 0ac0803280d234eb311b4be596217647f8827332..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Version Of Corel Draw LINK.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
How to Get a Free Version of CorelDRAW
-
CorelDRAW is a popular graphic design software that allows you to create vector illustrations, page layouts, photo editing, typography, and more. However, CorelDRAW is not a cheap software and may not be affordable for everyone. If you are looking for a free version of CorelDRAW, you have a few options to consider.
-
Option 1: Download a Free Trial of CorelDRAW Graphics Suite
-
The easiest way to get a free version of CorelDRAW is to download a free trial of CorelDRAW Graphics Suite from the official website. The free trial gives you full access to all the features and content that come with a CorelDRAW Graphics Suite subscription for 15 days. You can use the free trial to explore the software and create your own projects without any limitations. However, after 15 days, you will need to purchase a subscription or a one-time license to continue using the software.
To download the free trial, you need to visit this page and click on the "Download Now" button. You will need to enter your name and email address and agree to the terms and conditions. Then, you will receive an email with a download link and instructions on how to install and activate the software. You can also access the online help, tutorials, and resources from the website to learn how to use the software.
-
Option 2: Use CorelDRAW.app Online or on iPad
-
Another way to get a free version of CorelDRAW is to use CorelDRAW.app, which is an online or iPad app that lets you create vector illustrations and graphic designs via web browser or tablet. CorelDRAW.app is included as part of the CorelDRAW Graphics Suite subscription, but you can also use it for free with some limitations. The free version of CorelDRAW.app allows you to create up to 5 projects and save them in the cloud. You can also export your projects as PNG or JPEG files.
-
To use CorelDRAW.app online, you need to visit this page and sign up for a free account. You can also sign in with your existing Corel account if you have one. Then, you can start creating your projects using the online interface and tools. You can also access the online help, tutorials, and resources from the website to learn how to use the app.
-
To use CorelDRAW.app on iPad, you need to download the app from the App Store and sign in with your free or paid Corel account. Then, you can start creating your projects using the iPad interface and tools. You can also access the online help, tutorials, and resources from the app to learn how to use it.
-
Option 3: Use an Alternative Graphic Design Software
-
The third way to get a free version of CorelDRAW is to use an alternative graphic design software that offers similar or better features and functionality. There are many free or low-cost graphic design software available on the market that can help you create vector illustrations, page layouts, photo editing, typography, and more. Some of these software are:
-
-
Inkscape: A free and open-source vector graphics editor that supports SVG format and has many tools and features similar to CorelDRAW.
-
GIMP: A free and open-source image editor that supports various formats and has many tools and features similar to Corel PHOTO-PAINT.
-
Scribus: A free and open-source desktop publishing software that supports various formats and has many tools and features similar to CorelDRAW's page layout capabilities.
-
Gravit Designer: A free online or desktop vector graphics editor that supports various formats and has many tools and features similar to CorelDRAW.
-
Krita: A free and open-source digital painting software that supports various formats and has many tools and features similar to Corel PHOTO-PAINT.
-
-
To use these alternative graphic design software, you need to visit their respective websites and download or access them online. You can also find online help, tutorials, and resources from their websites or communities to learn how to use them.
-
Conclusion
-
CorelDRAW is a powerful graphic design software, but it is not cheap. If you want to use it for free, you can download the free trial of CorelDRAW Graphics Suite, use CorelDRAW.app online or on iPad, or switch to one of the free alternative programs listed above. Each option has its own limitations, so choose the one that best fits your needs.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ashrae Standard 170 Pdf 17l How to Download and Apply the Latest Addendum.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ashrae Standard 170 Pdf 17l How to Download and Apply the Latest Addendum.md
deleted file mode 100644
index 5caf2588b985b8dd22f284f11f82b3b38e399419..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Ashrae Standard 170 Pdf 17l How to Download and Apply the Latest Addendum.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-
ANSI/ASHRAE/ASHE Standard 170 offers guidance, regulation and mandates to designers and operators of health care facilities. The revised 2021 edition incorporates 17 addenda to the 2017 standard. The updated standard includes the following significant changes: Revised scope, with improved guidance on thermal comfort conditions provided; Extensive modifications to address the Outpatient and Residential sections; Addition of a new outpatient ventilation table to address non-acute-type spaces; Extensive revisions to air filtration requirements; Addition of new columns in the ventilation tables to prescribe filtration requirement and designate unoccupied turndown; Expanded guidance on separation distance requirements for varied intake and exhaust arrangements, coordinating with related ASHRAE Standard 62.1 data;
HVAC systems are built to keep the indoor air quality (IAQ) safe for patients. Because of the airflow standards, equipment must meet high ventilation rates and filtration requirements. Healthcare facilities serve a critical service, and they must consider several factors to provide adequate public health. Among the concerns for health care facilities is airflow ventilation.
-
In healthcare facilities, humidifiers prevent the spread of bacteria and viruses. The ventilation requirements in the standard (ANSI/ASHRAE/ASHE 170) address temperature and humidity, which could be compromised without proper attention and care.
-
Ophthalmology is already one of the busiest outpatient specialties in healthcare. Each patient's journey includes several healthcare personnel interacting to undertake routine objective assessments which is often followed by specialized imaging. The clinical consultation can take an average of 8 min and includes a close proximity slit-lamp examination to systematically inspect the eye and its adnexa. During the Wuhan outbreak of COVID-19, nosocomial transmission was reported to be highest in ENT and Ophthalmology.[17] The standard high-volume practice observed in ophthalmic units is therefore very high-risk and cannot be underestimated in subjecting staff and patients to contracting SARS-CoV-2.
-
-
Filtering facepiece respirators (FFPs), on the other hand, provide additional benefit to surgical masks by providing an air-tight seal and containing a mechanical filter, which can remove airborne contaminants through interception. Health and Safety Executive and British Safety Industry Federation recommend fit testing to ensure the respirator is suited to the user's facial structure and therefore performs optimally. There are three categories of FFP in Europe: FFP1, FFP2 (equivalent to N95), and FFP3. Class three (FFP3) provides the highest quality of protection and is the only one approved for UK healthcare settings, especially in AGPs, such as intubation and non-invasive ventilation. They must meet industry-standard regulations including strict industry tests with biological aerosols and cannot exceed 2% leakage. FFP3 masks provide 99% efficiency in filtering particles sized above 100 nm, including small airborne droplets.[22,24]
-
Adopts the current versions of the industry standards SAE J639, SAE J1739, and SAE J2844 in the use conditions for the proposed listings of HFO-1234yf in nonroad vehicles and previous listings for certain onroad vehicles.
-
EPA is rescinding use conditions that limit human exposure to halocarbon and inert gas agents used in the fire suppression and explosion protection industry. These use conditions are redundant with safety standards established by the National Fire Protection Association (NFPA). In addition, EPA is taking direct final action to change the listing for HBFC-22B1 from acceptable subject to use conditions to unacceptable.
-
This notice identifies EPA's decisions of acceptable substitutes for refrigeration, air conditioning, foams, non-aerosol solvent cleaning, and aerosol solvents. This action also requests information on the composition and safety of certain refrigerants for motor vehicle air conditioners. This notice also requests information on whether the SNAP program should include review of and establishment of use conditions for operations that involve manual cleaning with solvents or restriction of non-aerosol solvent substitutes to equipment that meets the cleaning equipment standards in the National Emission Standards for Halogenated Solvent Cleaning. Finally, this action updates readers on the SNAP program's review of n-propyl bromide for use as a substitute for ozone-depleting solvents used in the non-aerosol solvents cleaning, aerosol solvents and propellants, and adhesives, coatings and inks sectors.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Disk Drill Pro 3.6.918 Crack Activation Code ((FREE)) Free Download 2019.md b/spaces/1gistliPinn/ChatGPT4/Examples/Disk Drill Pro 3.6.918 Crack Activation Code ((FREE)) Free Download 2019.md
deleted file mode 100644
index b199af2b77ae86cbe6f734eef32255f14abbf8b5..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Disk Drill Pro 3.6.918 Crack Activation Code ((FREE)) Free Download 2019.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-
Disk Drill Pro 3.6.918 Crack Activation Code Free Download 2019
-
Disk Drill Pro is a powerful data recovery software for Windows and Mac. It can recover lost files from any type of storage device, including hard drives, USB flash drives, memory cards, and more. Disk Drill Pro also offers data loss prevention features, such as Recovery Vault and Guaranteed Recovery, that can protect your data from accidental deletion or corruption.
-
In this article, we will show you how to download and install Disk Drill Pro 3.6.918 Crack Activation Code for free. This is a cracked version of Disk Drill Pro that bypasses the license verification and allows you to use all the features of the software without paying for it. However, we do not recommend using cracked software, as it may contain viruses, malware, or other harmful components that can damage your system or compromise your privacy. Moreover, using cracked software is illegal and unethical, as it violates the terms and conditions of the original software developer.
-
Disk Drill Pro 3.6.918 Crack Activation Code Free Download 2019
If you want to use Disk Drill Pro legally and safely, you should purchase it from the official website[^2^] [^3^] [^4^]. You can also try the free version of Disk Drill Basic[^4^], which allows you to recover up to 500 MB of data for free and preview all the recoverable files before recovery. You can also get a 50% discount if you upgrade from a previous version[^3^], or a 20% discount if you are a student, educator, government employee, or non-profit organization member[^3^].
-
However, if you still want to download and install Disk Drill Pro 3.6.918 Crack Activation Code for free, here are the steps you need to follow:
-
-
Download Disk Drill Pro 3.6.918 Crack Activation Code Free Download 2019 from this link[^1^]. This is a zip file that contains the setup file and the crack file.
-
Extract the zip file to a folder on your computer.
-
Run the setup file and follow the instructions to install Disk Drill Pro on your computer.
-
Do not launch Disk Drill Pro after installation.
-
Copy the crack file and paste it into the installation folder of Disk Drill Pro. This will replace the original file and activate the software.
-
Launch Disk Drill Pro and enjoy all the features for free.
-
-
Note: This method is only for educational purposes. We do not take any responsibility for any damage or loss caused by using cracked software. We strongly advise you to purchase Disk Drill Pro from the official website if you want to use it legally and safely.
-
-
Now that you have installed Disk Drill Pro 3.6.918 Crack Activation Code for free, you can use it to recover your lost or deleted data from any storage device. Here are some tips on how to use Disk Drill Pro effectively:
-
-
Before you start a scan, make sure that your storage device is connected to your computer and recognized by Disk Drill Pro. You can see the list of available devices on the left panel of the software.
-
Select the device that you want to scan and click on the "Recover" button. Disk Drill Pro will start a quick scan first, which will take a few minutes. If you don't find your files after the quick scan, you can proceed to a deep scan, which will take longer but will find more files.
-
After the scan is complete, you can preview the found files by clicking on them. You can also filter the files by type, size, date, or name using the options on the right panel of the software.
-
When you find the files that you want to recover, select them and click on the "Recover" button again. You will be asked to choose a location to save the recovered files. Make sure that you don't save them to the same device that you scanned, as this may overwrite the original data and make it unrecoverable.
-
Enjoy your recovered files and back them up to a safe location.
-
-
Disk Drill Pro 3.6.918 Crack Activation Code is a powerful data recovery software that can help you recover your lost or deleted data from any storage device. However, using cracked software is risky and illegal, and we do not recommend it. If you want to use Disk Drill Pro legally and safely, you should purchase it from the official website . You can also try the free version of Disk Drill Basic, which allows you to recover up to 500 MB of data for free and preview all the recoverable files before recovery.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Drive Club Pc Game Download Kickass 61.md b/spaces/1gistliPinn/ChatGPT4/Examples/Drive Club Pc Game Download Kickass 61.md
deleted file mode 100644
index a7d289b1c176393a939142364059d38b09132076..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Drive Club Pc Game Download Kickass 61.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-
-PLAY MULTIPLAYER IN REAL TIME RIGHT NOW! Jump online to drift and race against live opponents! JOIN THE RACING APP REVOLUTION! True next-gen driving... with stunning graphics and realistic racing. You can play real-time multiplayer games on Google Stadia and Windows PC.
-***
-In this game you will be able to get behind the wheel of the coolest motorcycles that have ever existed. Get ready to experience the thrill of driving like you've never experienced it before.
-COMPARE GAMES WITH OTHER MANUFACTURERS
-Google Play:
-https://play.google.com/store/apps/details?id=com.appgift.pumpinbikes
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dirt Rally 2.0 Apk The Best Mods and Add-Ons for More Fun and Challenge.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dirt Rally 2.0 Apk The Best Mods and Add-Ons for More Fun and Challenge.md
deleted file mode 100644
index 3afe675e2da4437e896a92962ee56260b9583e9c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dirt Rally 2.0 Apk The Best Mods and Add-Ons for More Fun and Challenge.md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-
DiRT Rally 2.0 APK: How to Download and Play the Best Rally Game on Your Android Device
-
If you are a fan of rally racing, you might have heard of DiRT Rally 2.0, the latest installment in the popular DiRT series by Codemasters. This game is widely regarded as one of the best rally games ever made, with realistic physics, stunning graphics, and immersive gameplay. But did you know that you can also play DiRT Rally 2.0 on your Android device? Yes, you read that right. With DiRT Rally 2.0 APK, you can enjoy this amazing game on your smartphone or tablet, without any hassle or compromise.
-
In this article, we will tell you everything you need to know about DiRT Rally 2.0 APK, including what it is, how to download and install it, and some tips and tricks for playing it. So buckle up and get ready for some adrenaline-pumping action.
What is DiRT Rally 2.0?
-
DiRT Rally 2.0 is a racing video game that focuses on rally and rallycross disciplines. It was released in February 2019 for Windows, PlayStation 4, and Xbox One, and later for Google Stadia in March 2020. It is the thirteenth game in the Colin McRae Rally series and the eighth game to carry the DiRT name.
-
DiRT Rally 2.0 dares you to carve your way through a selection of iconic rally locations from across the globe, in the most powerful off-road vehicles ever made, knowing that the smallest mistake could end your stage. You can compete in six rally locations (Argentina, Australia, New Zealand, Poland, Spain, and USA) and eight rallycross circuits (Abu Dhabi, Barcelona, Hell, Holjes, Latvia, Mettet, Montalegre, and Silverstone), with over 50 cars to choose from.
-
DiRT Rally 2.0 also features a career mode, where you can create your own team, hire staff, upgrade your cars, and manage your finances. You can also join online events and challenges, where you can compete with other players from around the world.
-
Features of DiRT Rally 2.0
-
Some of the features that make DiRT Rally 2.0 stand out from other racing games are:
-
-
Realistic physics: The game uses a sophisticated physics engine that simulates every aspect of rally driving, such as traction, suspension, weight transfer, tire wear, surface deformation, weather effects, and damage.
-
Stunning graphics: The game boasts of high-quality graphics that bring the rally locations to life, with dynamic lighting, shadows, reflections, dust, mud, water splashes, and smoke.
-
Immersive gameplay: The game offers a first-person perspective that puts you in the driver's seat of your car, with a detailed cockpit view and authentic sound effects. You can also use a co-driver's voice to guide you through the stages.
-
Customization: The game allows you to customize your car's appearance and performance, with various liveries, parts, setups, and tuning options.
-
Variety: The game offers a variety of modes, cars, locations, events, and challenges to keep you engaged and entertained.
-
-
Requirements for DiRT Rally 2.0 APK
-
DiRT Rally 2.0 APK is a modified version of the original game that allows you to play it on your Android device. However, not all devices are compatible with this APK. To run DiRT Rally 2.0 APK smoothly, you need to meet the following requirements:
-
-
-
Android version: You need to have Android 5.0 or higher on your device.
-
Storage space: You need to have at least 4 GB of free space on your device.
-
RAM: You need to have at least 2 GB of RAM on your device.
-
Processor: You need to have a quad-core processor or higher on your device.
-
Graphics: You need to have a GPU that supports OpenGL ES 3.0 or higher on your device.
-
Internet connection: You need to have a stable internet connection to download the APK file and the additional data files.
-
-
If you meet these requirements, you can proceed to download and install DiRT Rally 2.0 APK on your device.
-
How to Download and Install DiRT Rally 2.0 APK
-
To download and install DiRT Rally 2.0 APK on your device, you need to follow these steps:
-
Step 1: Download the APK file
-
The first step is to download the APK file of DiRT Rally 2.0 from a reliable source. You can use this link to download the APK file, which is about 40 MB in size. Make sure you download the file from a trusted website, as some websites may contain malware or viruses that can harm your device.
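As an extra precaution, if the site you download from publishes a checksum for the file, you can compare it against the SHA-256 hash of your copy before installing anything. Below is a minimal Python sketch; the file name and location are only examples:

```python
import hashlib
from pathlib import Path

apk = Path.home() / "Downloads" / "dirt-rally-2.0.apk"  # example name -- use your actual download

# Hash the file in chunks so even large downloads do not need to fit in memory.
digest = hashlib.sha256()
with open(apk, "rb") as fh:
    for chunk in iter(lambda: fh.read(1 << 20), b""):
        digest.update(chunk)

print(digest.hexdigest())  # compare this value with the checksum published by the source
```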
-
Step 2: Enable unknown sources
-
The next step is to enable unknown sources on your device. This is necessary because DiRT Rally 2.0 APK is not available on the Google Play Store, and you need to allow your device to install apps from other sources. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but you can ignore it and proceed.
-
Step 3: Install the APK file
-
The third step is to install the APK file on your device. To do this, locate the downloaded file in your file manager and tap on it. You may see a pop-up window asking for permissions, but you can grant them and continue. The installation process may take a few minutes, depending on your device's performance.
-
Step 4: Launch the game and enjoy
-
The final step is to launch the game and enjoy it. To do this, go to your app drawer and tap on the DiRT Rally 2.0 icon. You may see a loading screen that will download some additional data files, which are about 1 GB in size. This may take some time, depending on your internet speed. Once the download is complete, you can start playing the game and experience the thrill of rally racing.
-
Tips and Tricks for Playing DiRT Rally 2.0 APK
-
DiRT Rally 2.0 APK is not an easy game to master, as it requires skill, concentration, and patience. However, with some tips and tricks, you can improve your performance and enjoy the game more. Here are some tips and tricks for playing DiRT Rally 2.0 APK:
-
Choose the right car and settings
-
The first tip is to choose the right car and settings for each stage and event. Different cars have different strengths and weaknesses, such as speed, handling, acceleration, braking, and durability. You should choose a car that suits your driving style and the terrain of the stage. For example, if you are driving on a gravel road, you may want a car that has good traction and suspension. You should also adjust the settings of your car according to your preference and skill level. You can change things like gear ratio, differential, brake bias, suspension stiffness, ride height, camber angle, anti-roll bar, tire pressure, and more. These settings can affect how your car behaves on the road, so you should experiment with them until you find the optimal setup.
-
Learn the tracks and practice
-
The second tip is to learn the tracks and practice them before competing in an event. Each track has its own characteristics, such as turns, bumps, jumps, hazards, weather conditions, and more. You should familiarize yourself with these features and memorize them as much as possible. You should also practice driving on them, either in the free roam mode or in the time trial mode. This will help you improve your skills, confidence, and timing. You can also watch the replays of your runs or other players' runs to learn from their mistakes and successes.
-
Use the co-driver's calls
-
The third tip is to use the co-driver's calls to guide you through the stages. The co-driver is your navigator who tells you what to expect ahead, such as the direction, distance, and severity of the turns, the road conditions, the hazards, and the landmarks. The co-driver's calls are based on a standardized system of symbols and numbers that you should learn and understand. For example, "Left 3 over crest" means that there is a left turn with a severity of 3 (out of 6) that goes over a crest. You should listen to the co-driver's calls carefully and follow them accordingly. They can help you prepare for the upcoming challenges and avoid crashes. You can also adjust the volume, timing, and language of the co-driver's calls in the settings menu.
-
Adjust the difficulty and assists
-
The fourth tip is to adjust the difficulty and assists of the game according to your skill level and preference. The game offers several options to customize your experience, such as:
-
Difficulty level: You can choose from five difficulty levels, ranging from very easy to very hard. The difficulty level affects how fast and aggressive your opponents are, how much time you have to complete a stage, and how much money you earn.
-
Assists: You can enable or disable various assists that can help you control your car, such as traction control, stability control, anti-lock brakes, automatic gearbox, launch control, hill start assist, and more. The assists can make the game easier or more realistic, depending on your preference.
-
Camera view: You can choose from several camera views that can affect your visibility and immersion, such as cockpit view, hood view, bumper view, chase view, helicopter view, and more.
-
HUD: You can customize the heads-up display that shows you information such as speedometer, rev counter, gear indicator, damage indicator, timer, map, co-driver's calls, and more. You can turn on or off any of these elements or change their position and size.
-
- You can experiment with these options until you find the best combination for you.
-
Conclusion
-
DiRT Rally 2.0 APK is a great way to enjoy one of the best rally games ever made on your Android device. It offers realistic physics, stunning graphics, immersive gameplay, customization options, variety of modes, cars, locations, events, and challenges. It is not an easy game to master, but with some tips and tricks, you can improve your performance and have fun. To play DiRT Rally 2.0 APK on your device, you need to meet some requirements, download and install the APK file from a reliable source, and enable unknown sources on your device. You can then launch the game and enjoy it. We hope this article has helped you learn more about DiRT Rally 2.0 APK and how to play it on your Android device. If you have any questions or feedback, feel free to leave a comment below. Happy racing!
FAQs
-
Here are some frequently asked questions about DiRT Rally 2.0 APK:
-
-
Is DiRT Rally 2.0 APK safe to download and install?
-
Yes, DiRT Rally 2.0 APK is safe to download and install, as long as you get it from a reliable source. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain malware or viruses that can harm your device. You should also scan the APK file with an antivirus app before installing it.
-
Is DiRT Rally 2.0 APK free to play?
-
Yes, DiRT Rally 2.0 APK is free to play, as you do not need to pay anything to download and install it. However, the game may contain some in-app purchases or ads that can enhance your experience or support the developers.
-
Can I play DiRT Rally 2.0 APK offline?
-
No, DiRT Rally 2.0 APK requires an internet connection to download the additional data files and to access some of the online features, such as events and challenges. You can play the game offline only after you have downloaded all the data files and completed the initial setup.
-
Can I play DiRT Rally 2.0 APK with a controller?
-
Yes, DiRT Rally 2.0 APK supports various controllers that can connect to your Android device via Bluetooth or USB. You can use a controller to control your car and navigate the menus, as well as customize the button layout and sensitivity in the settings menu.
-
Can I play DiRT Rally 2.0 APK with friends?
-
Yes, DiRT Rally 2.0 APK allows you to play with friends online or locally. You can join online events and challenges, where you can compete with other players from around the world. You can also create or join a club, where you can invite your friends and share your progress and results. Alternatively, you can play locally with up to four players on the same device, using a split-screen mode.
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_heun_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_heun_discrete.py
deleted file mode 100644
index 70ae9590d253bd87c9a0830938b456bc190e4f43..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_heun_discrete.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS
-from .scheduling_utils import SchedulerMixin, SchedulerOutput
-
-
-class HeunDiscreteScheduler(SchedulerMixin, ConfigMixin):
- """
- Implements Algorithm 2 (Heun steps) from Karras et al. (2022). for discrete beta schedules. Based on the original
- k-diffusion implementation by Katherine Crowson:
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- prediction_type (`str`, default `epsilon`, optional):
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
-            process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4 of
- https://imagen.research.google/video/paper.pdf)
- """
-
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
- order = 2
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.00085, # sensible defaults
- beta_end: float = 0.012,
- beta_schedule: str = "linear",
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
- prediction_type: str = "epsilon",
- ):
- if trained_betas is not None:
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
- elif beta_schedule == "linear":
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
- else:
-            raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
-
- # set all values
- self.set_timesteps(num_train_timesteps, num_train_timesteps)
-
- def index_for_timestep(self, timestep):
- indices = (self.timesteps == timestep).nonzero()
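-        # After `set_timesteps`, every timestep except the first appears twice (one entry for the
-        # first-order Euler pass and one for the Heun correction), so `indices` typically holds two
-        # matches; pick the one that corresponds to the current phase of the step.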
- if self.state_in_first_order:
- pos = -1
- else:
- pos = 0
- return indices[pos].item()
-
- def scale_model_input(
- self,
- sample: paddle.Tensor,
- timestep: Union[float, paddle.Tensor],
- ) -> paddle.Tensor:
- """
- Args:
-
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
- sample (`paddle.Tensor`): input sample timestep (`int`, optional): current timestep
-
- Returns:
- `paddle.Tensor`: scaled input sample
- """
- step_index = self.index_for_timestep(timestep)
-
- sigma = self.sigmas[step_index]
- sample = sample / ((sigma**2 + 1) ** 0.5)
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- num_train_timesteps: Optional[int] = None,
- ):
- """
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- num_train_timesteps (`int`, Optional): number of diffusion steps used to train the model.
- """
- self.num_inference_steps = num_inference_steps
-
- num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps
-
- timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=np.float32)[::-1].copy()
-
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
- sigmas = paddle.to_tensor(sigmas)
- self.sigmas = paddle.concat([sigmas[:1], sigmas[1:-1].repeat_interleave(2), sigmas[-1:]])
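-        # interior sigmas are duplicated so that each inference step runs a first-order (Euler)
-        # pass followed by a second-order (Heun) correction pass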
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = self.sigmas.max()
-
- timesteps = paddle.to_tensor(timesteps)
- timesteps = paddle.concat([timesteps[:1], timesteps[1:].repeat_interleave(2)])
-
- self.timesteps = timesteps
-
- # empty dt and derivative
- self.prev_derivative = None
- self.dt = None
-
- @property
- def state_in_first_order(self):
- return self.dt is None
-
- def step(
- self,
- model_output: Union[paddle.Tensor, np.ndarray],
- timestep: Union[float, paddle.Tensor],
- sample: Union[paddle.Tensor, np.ndarray],
- return_dict: bool = True,
- ) -> Union[SchedulerOutput, Tuple]:
- """
- Args:
-
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
- model_output (`paddle.Tensor` or `np.ndarray`): direct output from learned diffusion model. timestep
- (`int`): current discrete timestep in the diffusion chain. sample (`paddle.Tensor` or `np.ndarray`):
- current instance of sample being created by diffusion process.
- return_dict (`bool`): option for returning tuple rather than SchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
- """
- step_index = self.index_for_timestep(timestep)
-
- if self.state_in_first_order:
- sigma = self.sigmas[step_index]
- sigma_next = self.sigmas[step_index + 1]
- else:
- # 2nd order / Heun's method
- sigma = self.sigmas[step_index - 1]
- sigma_next = self.sigmas[step_index]
-
- # currently only gamma=0 is supported. This usually works best anyways.
- # We can support gamma in the future but then need to scale the timestep before
- # passing it to the model which requires a change in API
- gamma = 0
- sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now
-
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
- if self.config.prediction_type == "epsilon":
- sigma_input = sigma_hat if self.state_in_first_order else sigma_next
- pred_original_sample = sample - sigma_input * model_output
- elif self.config.prediction_type == "v_prediction":
- sigma_input = sigma_hat if self.state_in_first_order else sigma_next
- pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + (
- sample / (sigma_input**2 + 1)
- )
- else:
- raise ValueError(
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
- )
-
- if self.state_in_first_order:
- # 2. Convert to an ODE derivative for 1st order
- derivative = (sample - pred_original_sample) / sigma_hat
- # 3. delta timestep
- dt = sigma_next - sigma_hat
-
- # store for 2nd order step
- self.prev_derivative = derivative
- self.dt = dt
- self.sample = sample
- else:
- # 2. 2nd order / Heun's method
- derivative = (sample - pred_original_sample) / sigma_hat
- derivative = (self.prev_derivative + derivative) / 2
-
- # 3. take prev timestep & sample
- dt = self.dt
- sample = self.sample
-
- # free dt and derivative
- # Note, this puts the scheduler in "first order mode"
- self.prev_derivative = None
- self.dt = None
- self.sample = None
-
- prev_sample = sample + derivative * dt
-
- if not return_dict:
- return (prev_sample,)
-
- return SchedulerOutput(prev_sample=prev_sample)
-
- def add_noise(
- self,
- original_samples: paddle.Tensor,
- noise: paddle.Tensor,
- timesteps: paddle.Tensor,
- ) -> paddle.Tensor:
- # Make sure sigmas and timesteps have the same dtype as original_samples
- self.sigmas = self.sigmas.cast(original_samples.dtype)
-
- step_indices = [self.index_for_timestep(t) for t in timesteps]
-
- sigma = self.sigmas[step_indices].flatten()
- while len(sigma.shape) < len(original_samples.shape):
- sigma = sigma.unsqueeze(-1)
-
- noisy_samples = original_samples + noise * sigma
- return noisy_samples
-
- def __len__(self):
- return self.config.num_train_timesteps
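-
-# Minimal usage sketch (illustrative, not part of the original file). It assumes a hypothetical
-# `unet(x, t)` callable that returns an epsilon prediction shaped like `x`, and shows how
-# `set_timesteps`, `scale_model_input` and `step` fit together; note that `scheduler.timesteps`
-# repeats each step, so `step` alternates between the Euler pass and the Heun correction.
-#
-# scheduler = HeunDiscreteScheduler()
-# scheduler.set_timesteps(num_inference_steps=25)
-# sample = paddle.randn([1, 4, 64, 64]) * scheduler.init_noise_sigma
-# for t in scheduler.timesteps:
-#     model_input = scheduler.scale_model_input(sample, t)
-#     noise_pred = unet(model_input, t)  # hypothetical denoising model
-#     sample = scheduler.step(noise_pred, t, sample).prev_sample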
diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_vq_diffusion.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_vq_diffusion.py
deleted file mode 100644
index 7b2ff773fb84a4799beccac400d0a99a6369e170..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_vq_diffusion.py
+++ /dev/null
@@ -1,496 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 Microsoft and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import List, Optional, Tuple, Union
-
-import numpy as np
-import paddle
-import paddle.nn.functional as F
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput
-from .scheduling_utils import SchedulerMixin
-
-
-def logaddexp(a, b):
- return paddle.log(a.exp() + b.exp())
-
-
-# (TODO junnyu) paddle logsumexp may have a bug
-def logsumexp(x, axis=None, keepdim=False):
- return paddle.log(x.exp().sum(axis=axis, keepdim=keepdim))
-
-
-@dataclass
-class VQDiffusionSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`paddle.Tensor` of shape `(batch size, num latent pixels)`):
- Computed sample x_{t-1} of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- """
-
- prev_sample: paddle.Tensor
-
-
-def index_to_log_onehot(x: paddle.Tensor, num_classes: int) -> paddle.Tensor:
- """
- Convert batch of vector of class indices into batch of log onehot vectors
-
- Args:
- x (`paddle.Tensor` of shape `(batch size, vector length)`):
- Batch of class indices
-
- num_classes (`int`):
- number of classes to be used for the onehot vectors
-
- Returns:
- `paddle.Tensor` of shape `(batch size, num classes, vector length)`:
- Log onehot vectors
- """
- x_onehot = F.one_hot(x, num_classes)
- x_onehot = x_onehot.transpose([0, 2, 1])
- log_x = paddle.log(x_onehot.cast("float32").clip(min=1e-30))
- return log_x
-
-
-def gumbel_noised(logits: paddle.Tensor, generator: Optional[paddle.Generator]) -> paddle.Tensor:
- """
- Apply gumbel noise to `logits`
- """
- uniform = paddle.rand(logits.shape, generator=generator)
- gumbel_noise = -paddle.log(-paddle.log(uniform + 1e-30) + 1e-30)
- noised = gumbel_noise + logits
- return noised
-
-
-def alpha_schedules(num_diffusion_timesteps: int, alpha_cum_start=0.99999, alpha_cum_end=0.000009):
- """
- Cumulative and non-cumulative alpha schedules.
-
- See section 4.1.
- """
- att = (
- np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (alpha_cum_end - alpha_cum_start)
- + alpha_cum_start
- )
- att = np.concatenate(([1], att))
- at = att[1:] / att[:-1]
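-    # stepwise alphas are recovered from the cumulative schedule: at[t] = att[t] / att[t-1],
-    # so the running product of `at` reproduces `att`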
- att = np.concatenate((att[1:], [1]))
- return at, att
-
-
-def gamma_schedules(num_diffusion_timesteps: int, gamma_cum_start=0.000009, gamma_cum_end=0.99999):
- """
- Cumulative and non-cumulative gamma schedules.
-
- See section 4.1.
- """
- ctt = (
- np.arange(0, num_diffusion_timesteps) / (num_diffusion_timesteps - 1) * (gamma_cum_end - gamma_cum_start)
- + gamma_cum_start
- )
- ctt = np.concatenate(([0], ctt))
- one_minus_ctt = 1 - ctt
- one_minus_ct = one_minus_ctt[1:] / one_minus_ctt[:-1]
- ct = 1 - one_minus_ct
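-    # mirrors the alpha schedule: (1 - ctt[t]) is the running product of (1 - ct[0..t])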
- ctt = np.concatenate((ctt[1:], [0]))
- return ct, ctt
-
-
-class VQDiffusionScheduler(SchedulerMixin, ConfigMixin):
- """
- The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image.
-
- The VQ-diffusion scheduler converts the transformer's output into a sample for the unnoised image at the previous
- diffusion timestep.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/abs/2111.14822
-
- Args:
- num_vec_classes (`int`):
- The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked
- latent pixel.
-
- num_train_timesteps (`int`):
- Number of diffusion steps used to train the model.
-
- alpha_cum_start (`float`):
- The starting cumulative alpha value.
-
- alpha_cum_end (`float`):
- The ending cumulative alpha value.
-
- gamma_cum_start (`float`):
- The starting cumulative gamma value.
-
- gamma_cum_end (`float`):
- The ending cumulative gamma value.
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_vec_classes: int,
- num_train_timesteps: int = 100,
- alpha_cum_start: float = 0.99999,
- alpha_cum_end: float = 0.000009,
- gamma_cum_start: float = 0.000009,
- gamma_cum_end: float = 0.99999,
- ):
- self.num_embed = num_vec_classes
-
- # By convention, the index for the mask class is the last class index
- self.mask_class = self.num_embed - 1
-
- at, att = alpha_schedules(num_train_timesteps, alpha_cum_start=alpha_cum_start, alpha_cum_end=alpha_cum_end)
- ct, ctt = gamma_schedules(num_train_timesteps, gamma_cum_start=gamma_cum_start, gamma_cum_end=gamma_cum_end)
-
- num_non_mask_classes = self.num_embed - 1
- bt = (1 - at - ct) / num_non_mask_classes
- btt = (1 - att - ctt) / num_non_mask_classes
-
- at = paddle.to_tensor(at.astype("float64"))
- bt = paddle.to_tensor(bt.astype("float64"))
- ct = paddle.to_tensor(ct.astype("float64"))
- log_at = paddle.log(at)
- log_bt = paddle.log(bt)
- log_ct = paddle.log(ct)
-
- att = paddle.to_tensor(att.astype("float64"))
- btt = paddle.to_tensor(btt.astype("float64"))
- ctt = paddle.to_tensor(ctt.astype("float64"))
- log_cumprod_at = paddle.log(att)
- log_cumprod_bt = paddle.log(btt)
- log_cumprod_ct = paddle.log(ctt)
-
- self.log_at = log_at.cast("float32")
- self.log_bt = log_bt.cast("float32")
- self.log_ct = log_ct.cast("float32")
- self.log_cumprod_at = log_cumprod_at.cast("float32")
- self.log_cumprod_bt = log_cumprod_bt.cast("float32")
- self.log_cumprod_ct = log_cumprod_ct.cast("float32")
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = paddle.to_tensor(np.arange(0, num_train_timesteps)[::-1].copy())
-
- def set_timesteps(self, num_inference_steps: int):
- """
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
-
- Args:
- num_inference_steps (`int`):
- the number of diffusion steps used when generating samples with a pre-trained model.
- """
- self.num_inference_steps = num_inference_steps
- timesteps = np.arange(0, self.num_inference_steps)[::-1].copy()
- self.timesteps = paddle.to_tensor(timesteps)
-
- def step(
- self,
- model_output: paddle.Tensor,
- timestep: paddle.Tensor,
- sample: paddle.Tensor,
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
- return_dict: bool = True,
- ) -> Union[VQDiffusionSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the
- docstring for `self.q_posterior` for more in depth docs on how Equation (11) is computed.
-
-        Args:
-            model_output (`paddle.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`):
-                The log probabilities for the predicted classes of the initial latent pixels. Does not include a
-                prediction for the masked class as the initial unnoised image cannot be masked.
-
-            timestep (`paddle.Tensor`):
-                The timestep that determines which transition matrices are used.
-
-            sample (`paddle.Tensor` of shape `(batch size, num latent pixels)`):
-                The classes of each latent pixel at time `t`
-
-            generator (`paddle.Generator` or None):
-                RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from.
-
-            return_dict (`bool`):
-                option for returning a tuple rather than a VQDiffusionSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.VQDiffusionSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
- When returning a tuple, the first element is the sample tensor.
- """
- if timestep == 0:
- log_p_x_t_min_1 = model_output
- else:
- log_p_x_t_min_1 = self.q_posterior(model_output, sample, timestep)
-
- log_p_x_t_min_1 = gumbel_noised(log_p_x_t_min_1, generator)
-
- x_t_min_1 = log_p_x_t_min_1.argmax(axis=1)
-
- if not return_dict:
- return (x_t_min_1,)
-
- return VQDiffusionSchedulerOutput(prev_sample=x_t_min_1)
-
- def q_posterior(self, log_p_x_0, x_t, t):
- """
- Calculates the log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11).
-
- Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only
- forward probabilities.
-
-        Equation (11) stated in terms of forward probabilities via Equation (5):
-
-        p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) )
-
-        where the sum is over x_0 = {C_0 ... C_{k-1}} (the classes for x_0)
-
- Args:
- log_p_x_0: (`paddle.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`):
- The log probabilities for the predicted classes of the initial latent pixels. Does not include a
- prediction for the masked class as the initial unnoised image cannot be masked.
-
- x_t: (`paddle.Tensor` of shape `(batch size, num latent pixels)`):
- The classes of each latent pixel at time `t`
-
- t (paddle.Tensor):
- The timestep that determines which transition matrix is used.
-
- Returns:
- `paddle.Tensor` of shape `(batch size, num classes, num latent pixels)`:
- The log probabilities for the predicted classes of the image at timestep `t-1`. I.e. Equation (11).
- """
- log_onehot_x_t = index_to_log_onehot(x_t, self.num_embed)
-
- log_q_x_t_given_x_0 = self.log_Q_t_transitioning_to_known_class(
- t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=True
- )
-
- log_q_t_given_x_t_min_1 = self.log_Q_t_transitioning_to_known_class(
- t=t, x_t=x_t, log_onehot_x_t=log_onehot_x_t, cumulative=False
- )
-
- # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0)
- # . . .
- # . . .
- # . . .
- # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1})
- q = log_p_x_0 - log_q_x_t_given_x_0
-
- # sum_0 = p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}), ... ,
- # sum_n = p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) + ... + p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1})
- q_log_sum_exp = logsumexp(q, axis=1, keepdim=True)
-
- # p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0 ... p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n
- # . . .
- # . . .
- # . . .
- # p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0 ... p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n
- q = q - q_log_sum_exp
-
- # (p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}
- # . . .
- # . . .
- # . . .
- # (p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1} ... (p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}
- # c_cumulative_{t-1} ... c_cumulative_{t-1}
- q = self.apply_cumulative_transitions(q, t - 1)
-
- # ((p_0(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_0 ... ((p_n(x_0=C_0 | x_t) / q(x_t | x_0=C_0) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_0) * sum_n
- # . . .
- # . . .
- # . . .
- # ((p_0(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_0) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_0 ... ((p_n(x_0=C_{k-1} | x_t) / q(x_t | x_0=C_{k-1}) / sum_n) * a_cumulative_{t-1} + b_cumulative_{t-1}) * q(x_t | x_{t-1}=C_{k-1}) * sum_n
- # c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0 ... c_cumulative_{t-1} * q(x_t | x_{t-1}=C_k) * sum_0
- log_p_x_t_min_1 = q + log_q_t_given_x_t_min_1 + q_log_sum_exp
-
- # For each column, there are two possible cases.
- #
- # Where:
- # - sum(p_n(x_0))) is summing over all classes for x_0
- # - C_i is the class transitioning from (not to be confused with c_t and c_cumulative_t being used for gamma's)
- # - C_j is the class transitioning to
- #
- # 1. x_t is masked i.e. x_t = c_k
- #
- # Simplifying the expression, the column vector is:
- # .
- # .
- # .
- # (c_t / c_cumulative_t) * (a_cumulative_{t-1} * p_n(x_0 = C_i | x_t) + b_cumulative_{t-1} * sum(p_n(x_0)))
- # .
- # .
- # .
- # (c_cumulative_{t-1} / c_cumulative_t) * sum(p_n(x_0))
- #
- # From equation (11) stated in terms of forward probabilities, the last row is trivially verified.
- #
- # For the other rows, we can state the equation as ...
- #
- # (c_t / c_cumulative_t) * [b_cumulative_{t-1} * p(x_0=c_0) + ... + (a_cumulative_{t-1} + b_cumulative_{t-1}) * p(x_0=C_i) + ... + b_cumulative_{k-1} * p(x_0=c_{k-1})]
- #
- # This verifies the other rows.
- #
- # 2. x_t is not masked
- #
- # Simplifying the expression, there are two cases for the rows of the column vector, where C_j = C_i and where C_j != C_i:
- # .
- # .
- # .
- # C_j != C_i: b_t * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / b_cumulative_t) * p_n(x_0 = C_i) + ... + (b_cumulative_{t-1} / (a_cumulative_t + b_cumulative_t)) * p_n(c_0=C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1}))
- # .
- # .
- # .
- # C_j = C_i: (a_t + b_t) * ((b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_0) + ... + ((a_cumulative_{t-1} + b_cumulative_{t-1}) / (a_cumulative_t + b_cumulative_t)) * p_n(x_0 = C_i = C_j) + ... + (b_cumulative_{t-1} / b_cumulative_t) * p_n(x_0 = c_{k-1}))
- # .
- # .
- # .
- # 0
- #
- # The last row is trivially verified. The other rows can be verified by directly expanding equation (11) stated in terms of forward probabilities.
- return log_p_x_t_min_1
-
- def log_Q_t_transitioning_to_known_class(
- self, *, t: paddle.Tensor, x_t: paddle.Tensor, log_onehot_x_t: paddle.Tensor, cumulative: bool
- ):
- """
- Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each
- latent pixel in `x_t`.
-
- See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix
- is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs.
-
- Args:
- t (paddle.Tensor):
- The timestep that determines which transition matrix is used.
-
- x_t (`paddle.Tensor` of shape `(batch size, num latent pixels)`):
- The classes of each latent pixel at time `t`.
-
- log_onehot_x_t (`paddle.Tensor` of shape `(batch size, num classes, num latent pixels)`):
- The log one-hot vectors of `x_t`
-
- cumulative (`bool`):
- If cumulative is `False`, we use the single step transition matrix `t-1`->`t`. If cumulative is `True`,
- we use the cumulative transition matrix `0`->`t`.
-
- Returns:
- `paddle.Tensor` of shape `(batch size, num classes - 1, num latent pixels)`:
- Each _column_ of the returned matrix is a _row_ of log probabilities of the complete probability
- transition matrix.
-
-                When non-cumulative, returns `self.num_embed - 1` rows because the initial latent pixel cannot be
- masked.
-
- Where:
- - `q_n` is the probability distribution for the forward process of the `n`th latent pixel.
- - C_0 is a class of a latent pixel embedding
- - C_k is the class of the masked latent pixel
-
- non-cumulative result (omitting logarithms):
- ```
- q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
- . . .
- . . .
- . . .
- q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)
- ```
-
- cumulative result (omitting logarithms):
- ```
- q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0)
- . . .
- . . .
- . . .
- q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1})
- ```
- """
- if cumulative:
- a = self.log_cumprod_at[t]
- b = self.log_cumprod_bt[t]
- c = self.log_cumprod_ct[t]
- else:
- a = self.log_at[t]
- b = self.log_bt[t]
- c = self.log_ct[t]
-
- if not cumulative:
- # The values in the onehot vector can also be used as the logprobs for transitioning
- # from masked latent pixels. If we are not calculating the cumulative transitions,
- # we need to save these vectors to be re-appended to the final matrix so the values
- # aren't overwritten.
- #
-            # `P(x_t != mask | x_{t-1} = mask) = 0` and 0 will be the value of the last row of the onehot vector
- # if x_t is not masked
- #
-            # `P(x_t = mask | x_{t-1} = mask) = 1` and 1 will be the value of the last row of the onehot vector
- # if x_t is masked
- log_onehot_x_t_transitioning_from_masked = log_onehot_x_t[:, -1, :].unsqueeze(1)
-
- # `index_to_log_onehot` will add onehot vectors for masked pixels,
- # so the default one hot matrix has one too many rows. See the doc string
- # for an explanation of the dimensionality of the returned matrix.
- log_onehot_x_t = log_onehot_x_t[:, :-1, :]
-
- # this is a cheeky trick to produce the transition probabilities using log one-hot vectors.
- #
- # Don't worry about what values this sets in the columns that mark transitions
-        # to masked latent pixels. They are overwritten later using `mask_class_mask`.
- #
- # Looking at the below logspace formula in non-logspace, each value will evaluate to either
- # `1 * a + b = a + b` where `log_Q_t` has the one hot value in the column
- # or
- # `0 * a + b = b` where `log_Q_t` has the 0 values in the column.
- #
- # See equation 7 for more details.
- log_Q_t = logaddexp(log_onehot_x_t + a, b)
-
- # The whole column of each masked pixel is `c`
- mask_class_mask = x_t == self.mask_class
- mask_class_mask = mask_class_mask.unsqueeze(1).expand([-1, self.num_embed - 1, -1])
- log_Q_t[mask_class_mask] = c
-
- if not cumulative:
- log_Q_t = paddle.concat((log_Q_t, log_onehot_x_t_transitioning_from_masked), axis=1)
-
- return log_Q_t
-
- def apply_cumulative_transitions(self, q, t):
- bsz = q.shape[0]
- a = self.log_cumprod_at[t]
- b = self.log_cumprod_bt[t]
- c = self.log_cumprod_ct[t]
-
- num_latent_pixels = q.shape[2]
- c = c.expand([bsz, 1, num_latent_pixels])
-
- q = logaddexp(q + a, b)
- q = paddle.concat((q, c), axis=1)
-
- return q
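-
-# Minimal usage sketch (illustrative, not part of the original file). It assumes a hypothetical
-# `transformer(x_t, t)` callable returning log-probabilities of shape
-# (batch, num_embed - 1, num latent pixels) for the unnoised classes, and a codebook of 4096
-# entries plus the mask class.
-#
-# scheduler = VQDiffusionScheduler(num_vec_classes=4097)
-# scheduler.set_timesteps(num_inference_steps=100)
-# x_t = paddle.full([1, 1024], scheduler.mask_class, dtype="int64")  # start fully masked
-# for t in scheduler.timesteps:
-#     log_p_x_0 = transformer(x_t, t)  # hypothetical model call
-#     x_t = scheduler.step(log_p_x_0, t, x_t).prev_sample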
diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker.py
deleted file mode 100644
index 07379847a854d85623db02ce5e5409c1566eb80c..0000000000000000000000000000000000000000
--- a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/speaker.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from speaker_encoder.data_objects.random_cycler import RandomCycler
-from speaker_encoder.data_objects.utterance import Utterance
-from pathlib import Path
-
-# Contains the set of utterances of a single speaker
-class Speaker:
- def __init__(self, root: Path):
- self.root = root
- self.name = root.name
- self.utterances = None
- self.utterance_cycler = None
-
- def _load_utterances(self):
- with self.root.joinpath("_sources.txt").open("r") as sources_file:
- sources = [l.split(",") for l in sources_file]
- sources = {frames_fname: wave_fpath for frames_fname, wave_fpath in sources}
- self.utterances = [Utterance(self.root.joinpath(f), w) for f, w in sources.items()]
- self.utterance_cycler = RandomCycler(self.utterances)
-
- def random_partial(self, count, n_frames):
- """
- Samples a batch of unique partial utterances from the disk in a way that all
- utterances come up at least once every two cycles and in a random order every time.
-
- :param count: The number of partial utterances to sample from the set of utterances from
-        that speaker. Utterances are guaranteed not to be repeated if count is not larger than
- the number of utterances available.
- :param n_frames: The number of frames in the partial utterance.
- :return: A list of tuples (utterance, frames, range) where utterance is an Utterance,
- frames are the frames of the partial utterances and range is the range of the partial
- utterance with regard to the complete utterance.
- """
- if self.utterances is None:
- self._load_utterances()
-
- utterances = self.utterance_cycler.sample(count)
-
- a = [(u,) + u.random_partial(n_frames) for u in utterances]
-
- return a
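-
-# Illustrative sketch (the path is hypothetical): a speaker directory is expected to contain a
-# `_sources.txt` file mapping frame files to their source wavs.
-#
-# speaker = Speaker(Path("datasets/encoder/speaker_0001"))
-# for utterance, frames, (start, end) in speaker.random_partial(count=10, n_frames=160):
-#     print(frames.shape, start, end)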
diff --git a/spaces/4th3n4/TraDeX/app.py b/spaces/4th3n4/TraDeX/app.py
deleted file mode 100644
index d724bf8aedef6f1303915cb68e8621477b64b954..0000000000000000000000000000000000000000
--- a/spaces/4th3n4/TraDeX/app.py
+++ /dev/null
@@ -1,590 +0,0 @@
-# %%
-# Import section
-# (Please don't edit this section unless if necessary)
-import copy
-from pathlib import Path
-import warnings
-import holidays
-import seaborn as sns
-import matplotlib
-import matplotlib.dates as mdates
-import matplotlib.pyplot as plt
-plt.style.use('fivethirtyeight')
-import numpy as np
-import pandas as pd
-import glob
-import csv
-import lightning.pytorch as pl
-from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
-from lightning.pytorch.loggers import TensorBoardLogger
-import torch
-from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
-from pytorch_forecasting.data import GroupNormalizer, NaNLabelEncoder
-from pytorch_forecasting.metrics import SMAPE, PoissonLoss, QuantileLoss
-from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
-import random
-import gc
-import tensorflow as tf
-import tensorboard as tb
-tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
-import os
-import math
-import sys
-from sklearn.model_selection import train_test_split
-from sklearn.preprocessing import MinMaxScaler
-import tensorflow as tf
-from tensorflow.keras.layers import Conv1D, LSTM, Dense, Dropout, Bidirectional, TimeDistributed
-from tensorflow.keras.layers import MaxPooling1D, Flatten
-from tensorflow.keras.regularizers import L1, L2
-from tensorflow.keras.metrics import Accuracy
-from tensorflow.keras.metrics import RootMeanSquaredError
-from sklearn.metrics import mean_squared_error as MSE
-from sklearn.model_selection import KFold
-from sklearn.inspection import permutation_importance
-from tensorflow.keras.utils import plot_model
-from sklearn.metrics import explained_variance_score, mean_poisson_deviance, mean_gamma_deviance, mean_squared_error, mean_squared_log_error, d2_absolute_error_score, d2_pinball_score, d2_tweedie_score
-from sklearn.metrics import r2_score
-from sklearn.metrics import max_error
-import datetime
-from datetime import date
-import optuna
-from tensorflow.keras.callbacks import Callback
-from optuna.integration import TFKerasPruningCallback
-import shutil
-import gradio as gr
-
-# Some variables (don't edit these variables unless if necessary)
-DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
-random.seed(30)
-np.random.seed(30)
-tf.random.set_seed(30)
-torch.manual_seed(30)
-torch.cuda.manual_seed(30)
-
-# Global variables
-PATIENCE = 30
-MAX_EPOCHS = 3
-LEARNING_RATE = 0.01
-OPTUNA = True
-ACCELERATOR = "cpu"
-# This below line is only for GPU. Don't use it for CPU
-#os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:1024"
-
-# Variables to count the number of files
-w = 7
-prax = [0 for x in range(w)]
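-# prax collects one result row per model run, matching the CSV header written by generate_csv():
-# [Ticker, Prev_Close_Real, Model, Prev_Close_Model, Close_Model, Max_Err, Up_Down]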
-
-# %%
-# Function to train the model (TFT)
-def modelTFT(csv_file, prax):
- train = csv_file
- #test = pd.read_csv("/kaggle/input/artemis-test/nifty_daily.csv")
- train['date'] = pd.to_datetime(train['Date/Time'])
- #test['date'] = pd.to_datetime(test['Date'])
-
- data = pd.concat([train], axis = 0, ignore_index=True)
- # Check that key is country-store-product-date combination
- #assert len(data.drop_duplicates(['country', 'store', 'product', 'date'])) == len(data)
- # Check that there is one date per country-store-product combination
- #assert len(data.drop_duplicates(['country', 'store', 'product'])) == len(data)//data['date'].nunique()
-
- #display(train.sample(4))
-
-    # Add a time_idx (a sequence of consecutive integers that goes from min to max date)
-
- data = (data.merge((data[['Date/Time']].drop_duplicates(ignore_index=True)
- .rename_axis('time_idx')).reset_index(), on = ['Date/Time']))
- # add additional features
- data["day_of_week"] = data['date'].dt.dayofweek.astype(str).astype("category") # categories have be strings
- data["week_of_year"] = data['date'].dt.isocalendar().week.astype(str).astype("category") # categories have be strings
- data["month"] = data['date'].dt.month.astype(str).astype("category") # categories have be strings
- #data["log_num_sold"] = np.log(data.num_sold + 1e-8)
- #data["avg_volume_by_country"] = data.groupby(["time_idx", "country"], observed=True).num_sold.transform("mean")
- #data["avg_volume_by_store"] = data.groupby(["time_idx", "store"], observed=True).num_sold.transform("mean")
- #data["avg_volume_by_product"] = data.groupby(["time_idx", "product"], observed=True).num_sold.transform("mean")
-
- #unique_dates_country = data[['date', 'Ticker']].drop_duplicates(ignore_index = True)
- #unique_dates_country['is_holiday'] = (unique_dates_country
- # .apply(lambda x: x.date in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lead_1'] = (unique_dates_country
- # .apply(lambda x: x.date+pd.Timedelta(days=1) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lead_2'] = (unique_dates_country
- # .apply(lambda x: x.date+pd.Timedelta(days=2) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lag_1'] = (unique_dates_country
- # .apply(lambda x: x.date-pd.Timedelta(days=1) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lag_2'] = (unique_dates_country
- # .apply(lambda x: x.date-pd.Timedelta(days=2) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #data = data.merge(unique_dates_country, on = ['date', 'Ticker'], validate = "m:1")
- #del unique_dates_country
- gc.collect()
- data.sample(5, random_state=30)
-
- train = data.iloc[:len(train)]
- test = data.iloc[len(train):]
-
- max_prediction_length = 2
- max_encoder_length = train.date.nunique()
-    training_cutoff = train["time_idx"].max() - max_prediction_length # hold out the last max_prediction_length steps for validation
-
- # Let's create a Dataset
- training = TimeSeriesDataSet(
- train[lambda x: x.time_idx <= training_cutoff],
- time_idx="time_idx",
- target="Close",
- group_ids=["Ticker"],
- min_encoder_length=max_prediction_length, # keep encoder length long (as it is in the validation set)
- max_encoder_length=max_encoder_length,
- max_prediction_length=max_prediction_length,
- static_categoricals=["Ticker"],
- time_varying_known_categoricals=["month", "week_of_year", "day_of_week"],
- #variable_groups={"is_holiday": ["is_holiday"]}, # group of categorical variables can be treated as one variable
- time_varying_known_reals=["time_idx"],
- time_varying_unknown_categoricals=[],
- time_varying_unknown_reals=[
- 'Open','High','Low','Close','OI','RSI14','RSI44','HHRSI','Rsi Weekly','LLCHHV','white','Vap44','Vap14','Ema5','Ema20','Ema50','Ema200'
- ],
- target_normalizer=GroupNormalizer(
- groups=['Ticker'], transformation="softplus"
- ), # use softplus and normalize by group
- categorical_encoders={
- 'week_of_year':NaNLabelEncoder(add_nan=True)
- },
- #lags={'num_sold': [7, 30, 365]},
- add_relative_time_idx=True,
- add_target_scales=True,
- add_encoder_length=True,
- )
-
- # create validation set (predict=True) which means to predict the last max_prediction_length points in time
- # for each series
- validation = TimeSeriesDataSet.from_dataset(training, train, predict=True, stop_randomization=True)
-
- # create dataloaders for model
- batch_size = 128 # set this between 32 to 128
- train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
- val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
-
- #let's see how a naive model does
-
- actuals = torch.cat([y for x, (y, weight) in iter(val_dataloader)])#.cuda()
- baseline_predictions = Baseline().predict(val_dataloader)#.cuda()
- (actuals - baseline_predictions).abs().mean().item()
-
- sm = SMAPE()
-
- print(f"Median loss for naive prediction on validation: {sm.loss(actuals, baseline_predictions).mean(axis = 1).median().item()}")
-
- early_stop_callback = EarlyStopping(monitor="train_loss", min_delta=1e-2, patience=PATIENCE, verbose=False, mode="min")
- lr_logger = LearningRateMonitor() # log the learning rate
- logger = TensorBoardLogger("lightning_logs") # logging results to a tensorboard
-
- trainer = pl.Trainer(
- max_epochs=1,
- accelerator=ACCELERATOR,
- enable_model_summary=False,
- gradient_clip_val=0.25,
-        limit_train_batches=10, # limit each epoch to 10 training batches for a quick run
-        #fast_dev_run=True, # comment in to check that the network or dataset has no serious bugs
- callbacks=[lr_logger, early_stop_callback],
- logger=logger,
- )
-
- tft = TemporalFusionTransformer.from_dataset(
- training,
- learning_rate=LEARNING_RATE,
- lstm_layers=2,
- hidden_size=16,
- attention_head_size=2,
- dropout=0.2,
- hidden_continuous_size=8,
-        output_size=1, # single point forecast; the 7-quantile default only applies with QuantileLoss
-        loss=SMAPE(),
-        log_interval=10, # log every 10 batches
- reduce_on_plateau_patience=4
- )
-
- tft.to(DEVICE)
- trainer.fit(
- tft,
- train_dataloaders=train_dataloader,
- val_dataloaders=val_dataloader,
- )
- #torch.cuda.empty_cache()
- #print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
-
- if OPTUNA:
- from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
-
- # create study
- study = optimize_hyperparameters(
- train_dataloader,
- val_dataloader,
- model_path="optuna_test",
- n_trials=5,
- max_epochs=MAX_EPOCHS,
- gradient_clip_val_range=(0.01, 0.3),
- hidden_size_range=(8, 24),
- hidden_continuous_size_range=(8, 12),
- attention_head_size_range=(2, 4),
- learning_rate_range=(0.01, 0.05),
- dropout_range=(0.1, 0.25),
- trainer_kwargs=dict(limit_train_batches=20),
- reduce_on_plateau_patience=4,
- pruner=optuna.pruners.MedianPruner(n_min_trials=3, n_startup_trials=3),
- use_learning_rate_finder=False, # use Optuna to find ideal learning rate or use in-built learning rate finder
- )
- #torch.cuda.empty_cache()
- #'''
- trainer = pl.Trainer(
- max_epochs=MAX_EPOCHS,
- accelerator=ACCELERATOR,
- enable_model_summary=False,
- gradient_clip_val=study.best_params['gradient_clip_val'],
-        limit_train_batches=20, # limit each epoch to 20 training batches for a quick run
-        #fast_dev_run=True, # comment in to check that the network or dataset has no serious bugs
- callbacks=[lr_logger, early_stop_callback],
- logger=logger,
- )
-
- tft = TemporalFusionTransformer.from_dataset(
- training,
- learning_rate=study.best_params['learning_rate'],
- lstm_layers=2,
- hidden_size=study.best_params['hidden_size'],
- attention_head_size=study.best_params['attention_head_size'],
- dropout=study.best_params['dropout'],
- hidden_continuous_size=study.best_params['hidden_continuous_size'],
-        output_size=1, # single point forecast; the 7-quantile default only applies with QuantileLoss
-        loss=SMAPE(),
-        log_interval=10, # log every 10 batches
- reduce_on_plateau_patience=4
- )
-
- tft.to(DEVICE)
- trainer.fit(
- tft,
- train_dataloaders=train_dataloader,
- val_dataloaders=val_dataloader,
- )
- #'''
- #torch.cuda.empty_cache()
- best_model_path = trainer.checkpoint_callback.best_model_path
- best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)
- actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])#.cuda()
- predictions = best_tft.predict(val_dataloader, mode="prediction")
- raw_predictions = best_tft.predict(val_dataloader, mode="raw", return_x=True)
-
- sm = SMAPE()
- print(f"Validation median SMAPE loss: {sm.loss(actuals, predictions).mean(axis = 1).median().item()}")
- prax[5] = sm.loss(actuals, predictions).mean(axis = 1).median().item()
- #best_tft.plot_prediction(raw_predictions.x, raw_predictions.output, idx=0, add_loss_to_title=True);
-
- print(raw_predictions[0][0])
- prax[3] = '-'
- prax[4] = raw_predictions[0][0].data.cpu().tolist()[0][0]
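-    # Up_Down flag: compare the predicted close against the last observed close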
- t = prax[4]
- tm = data['Close'][len(data)-1]
- if(t-tm>0):
- prax[6] = 1
- elif(t-tm==0):
- prax[6] = 0
- else:
- prax[6] = -1
- #prax[i][3] = raw_predictions[0][0].data[1]
- print("-----------")
-
- #with open("out.csv", "w", newline="") as f:
- # writer = csv.writer(f)
- # writer.writerows(prax)
-
-# %%
-# Function to train the model (TFT)
-def modelTFT_OpenGap(csv_file, prax):
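-    # Same pipeline as modelTFT, except that an overnight-gap feature 'O-C'
-    # (open minus the previous close) is added to the unknown real inputs.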
- train = csv_file
- #test = pd.read_csv("/kaggle/input/artemis-test/nifty_daily.csv")
- train['date'] = pd.to_datetime(train['Date/Time'])
- #test['date'] = pd.to_datetime(test['Date'])
-    # Overnight gap: today's open minus the previous day's close (0 for the first row)
-    train['O-C'] = (train['Open'] - train['Close'].shift(1)).fillna(0)
- data = pd.concat([train], axis = 0, ignore_index=True)
- # Check that key is country-store-product-date combination
- #assert len(data.drop_duplicates(['country', 'store', 'product', 'date'])) == len(data)
- # Check that there is one date per country-store-product combination
- #assert len(data.drop_duplicates(['country', 'store', 'product'])) == len(data)//data['date'].nunique()
-
- #display(train.sample(4))
-
-    # Add a time_idx (a sequence of consecutive integers that goes from min to max date)
-
- data = (data.merge((data[['Date/Time']].drop_duplicates(ignore_index=True)
- .rename_axis('time_idx')).reset_index(), on = ['Date/Time']))
- # add additional features
- data["day_of_week"] = data['date'].dt.dayofweek.astype(str).astype("category") # categories have be strings
- data["week_of_year"] = data['date'].dt.isocalendar().week.astype(str).astype("category") # categories have be strings
- data["month"] = data['date'].dt.month.astype(str).astype("category") # categories have be strings
- #data["log_num_sold"] = np.log(data.num_sold + 1e-8)
- #data["avg_volume_by_country"] = data.groupby(["time_idx", "country"], observed=True).num_sold.transform("mean")
- #data["avg_volume_by_store"] = data.groupby(["time_idx", "store"], observed=True).num_sold.transform("mean")
- #data["avg_volume_by_product"] = data.groupby(["time_idx", "product"], observed=True).num_sold.transform("mean")
-
- #unique_dates_country = data[['date', 'Ticker']].drop_duplicates(ignore_index = True)
- #unique_dates_country['is_holiday'] = (unique_dates_country
- # .apply(lambda x: x.date in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lead_1'] = (unique_dates_country
- # .apply(lambda x: x.date+pd.Timedelta(days=1) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lead_2'] = (unique_dates_country
- # .apply(lambda x: x.date+pd.Timedelta(days=2) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lag_1'] = (unique_dates_country
- # .apply(lambda x: x.date-pd.Timedelta(days=1) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #unique_dates_country['is_holiday_lag_2'] = (unique_dates_country
- # .apply(lambda x: x.date-pd.Timedelta(days=2) in holidays.country_holidays(x.country), axis = 1).astype('category'))
- #data = data.merge(unique_dates_country, on = ['date', 'Ticker'], validate = "m:1")
- #del unique_dates_country
- gc.collect()
- data.sample(5, random_state=30)
-
- train = data.iloc[:len(train)]
- test = data.iloc[len(train):]
-
- max_prediction_length = 2
- max_encoder_length = train.date.nunique()
-    training_cutoff = train["time_idx"].max() - max_prediction_length # hold out the last max_prediction_length steps for validation
-
- # Let's create a Dataset
- training = TimeSeriesDataSet(
- train[lambda x: x.time_idx <= training_cutoff],
- time_idx="time_idx",
- target="Close",
- group_ids=["Ticker"],
- min_encoder_length=max_prediction_length, # keep encoder length long (as it is in the validation set)
- max_encoder_length=max_encoder_length,
- max_prediction_length=max_prediction_length,
- static_categoricals=["Ticker"],
- time_varying_known_categoricals=["month", "week_of_year", "day_of_week"],
- #variable_groups={"is_holiday": ["is_holiday"]}, # group of categorical variables can be treated as one variable
- time_varying_known_reals=["time_idx"],
- time_varying_unknown_categoricals=[],
- time_varying_unknown_reals=[
- 'Open','High','Low','Close','OI','RSI14','RSI44','HHRSI','Rsi Weekly','LLCHHV','white','Vap44','Vap14','Ema5','Ema20','Ema50','Ema200', 'O-C'
- ],
- target_normalizer=GroupNormalizer(
- groups=['Ticker'], transformation="softplus"
- ), # use softplus and normalize by group
- categorical_encoders={
- 'week_of_year':NaNLabelEncoder(add_nan=True)
- },
- #lags={'num_sold': [7, 30, 365]},
- add_relative_time_idx=True,
- add_target_scales=True,
- add_encoder_length=True,
- )
-
- # create validation set (predict=True) which means to predict the last max_prediction_length points in time
- # for each series
- validation = TimeSeriesDataSet.from_dataset(training, train, predict=True, stop_randomization=True)
-
- # create dataloaders for model
- batch_size = 128 # set this between 32 to 128
- train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
- val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
-
- #let's see how a naive model does
-
- actuals = torch.cat([y for x, (y, weight) in iter(val_dataloader)])#.cuda()
- baseline_predictions = Baseline().predict(val_dataloader)#.cuda()
- (actuals - baseline_predictions).abs().mean().item()
-
- sm = SMAPE()
-
- print(f"Median loss for naive prediction on validation: {sm.loss(actuals, baseline_predictions).mean(axis = 1).median().item()}")
-
- early_stop_callback = EarlyStopping(monitor="train_loss", min_delta=1e-2, patience=PATIENCE, verbose=False, mode="min")
- lr_logger = LearningRateMonitor() # log the learning rate
- logger = TensorBoardLogger("lightning_logs") # logging results to a tensorboard
-
- trainer = pl.Trainer(
- max_epochs=1,
- accelerator=ACCELERATOR,
- enable_model_summary=False,
- gradient_clip_val=0.25,
-        limit_train_batches=10, # limit each epoch to 10 training batches for a quick run
-        #fast_dev_run=True, # comment in to check that the network or dataset has no serious bugs
- callbacks=[lr_logger, early_stop_callback],
- logger=logger,
- )
-
- tft = TemporalFusionTransformer.from_dataset(
- training,
- learning_rate=LEARNING_RATE,
- lstm_layers=2,
- hidden_size=16,
- attention_head_size=2,
- dropout=0.2,
- hidden_continuous_size=8,
-        output_size=1, # single point forecast; the 7-quantile default only applies with QuantileLoss
-        loss=SMAPE(),
-        log_interval=10, # log every 10 batches
- reduce_on_plateau_patience=4
- )
-
- tft.to(DEVICE)
- trainer.fit(
- tft,
- train_dataloaders=train_dataloader,
- val_dataloaders=val_dataloader,
- )
- #torch.cuda.empty_cache()
- #print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
-
- if OPTUNA:
- from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
-
- # create study
- study = optimize_hyperparameters(
- train_dataloader,
- val_dataloader,
- model_path="optuna_test",
- n_trials=5,
- max_epochs=MAX_EPOCHS,
- gradient_clip_val_range=(0.01, 0.3),
- hidden_size_range=(8, 24),
- hidden_continuous_size_range=(8, 12),
- attention_head_size_range=(2, 4),
- learning_rate_range=(0.01, 0.05),
- dropout_range=(0.1, 0.25),
- trainer_kwargs=dict(limit_train_batches=20),
- reduce_on_plateau_patience=4,
- pruner=optuna.pruners.MedianPruner(n_min_trials=3, n_warmup_steps=3),
- use_learning_rate_finder=False, # use Optuna to find ideal learning rate or use in-built learning rate finder
- )
- #torch.cuda.empty_cache()
- #'''
- trainer = pl.Trainer(
- max_epochs=MAX_EPOCHS,
- accelerator=ACCELERATOR,
- enable_model_summary=False,
- gradient_clip_val=study.best_params['gradient_clip_val'],
-        limit_train_batches=20, # limit each epoch to 20 training batches for a quick run
-        #fast_dev_run=True, # comment in to check that the network or dataset has no serious bugs
- callbacks=[lr_logger, early_stop_callback],
- logger=logger,
- )
-
- tft = TemporalFusionTransformer.from_dataset(
- training,
- learning_rate=study.best_params['learning_rate'],
- lstm_layers=2,
- hidden_size=study.best_params['hidden_size'],
- attention_head_size=study.best_params['attention_head_size'],
- dropout=study.best_params['dropout'],
- hidden_continuous_size=study.best_params['hidden_continuous_size'],
-        output_size=1, # single point forecast; the 7-quantile default only applies with QuantileLoss
-        loss=SMAPE(),
-        log_interval=10, # log every 10 batches
- reduce_on_plateau_patience=4
- )
-
- tft.to(DEVICE)
- trainer.fit(
- tft,
- train_dataloaders=train_dataloader,
- val_dataloaders=val_dataloader,
- )
- #'''
- #torch.cuda.empty_cache()
- best_model_path = trainer.checkpoint_callback.best_model_path
- best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)
- actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])#.cuda()
- predictions = best_tft.predict(val_dataloader, mode="prediction")
- raw_predictions = best_tft.predict(val_dataloader, mode="raw", return_x=True)
-
- sm = SMAPE()
- print(f"Validation median SMAPE loss: {sm.loss(actuals, predictions).mean(axis = 1).median().item()}")
- prax[5] = sm.loss(actuals, predictions).mean(axis = 1).median().item()
- #best_tft.plot_prediction(raw_predictions.x, raw_predictions.output, idx=0, add_loss_to_title=True);
-
- print(raw_predictions[0][0])
- prax[3] = '-'
- prax[4] = raw_predictions[0][0].data.cpu().tolist()[0][0]
- t = prax[4]
- tm = data['Close'][len(data)-1]
- if(t-tm>0):
- prax[6] = 1
- elif(t-tm==0):
- prax[6] = 0
- else:
- prax[6] = -1
- #prax[i][3] = raw_predictions[0][0].data[1]
- print("-----------")
-
- #with open("out.csv", "w", newline="") as f:
- # writer = csv.writer(f)
- # writer.writerows(prax)
-
-# %%
-def generate_csv(data_list):
- today = date.today().strftime("%Y_%m_%d")
- filename = f"result_{today}.csv"
- file_exists = os.path.isfile(filename)
- with open(filename, mode='a', newline='') as csv_file:
- fieldnames = ['Ticker', 'Prev_Close_Real', 'Model', 'Prev_Close_Model', 'Close_Model', 'Max_Err', 'Up_Down' ] # replace with your own column names
- writer = csv.writer(csv_file, delimiter=',')
- if not file_exists:
- writer.writerow(fieldnames) # file doesn't exist yet, write a header
- writer.writerow(data_list)
- csv_file.close()
-
-def guess_date(string):
- for fmt in ["%Y/%m/%d", "%d-%m-%Y", "%Y%m%d", "%m/%d/%Y", "%d/%m/%Y", "%Y-%m-%d", "%d/%m/%y", "%m/%d/%y"]:
- try:
- return datetime.datetime.strptime(string, fmt).date()
- except ValueError:
- continue
- raise ValueError(string)
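-
-# Example: guess_date("31/12/2021") -> datetime.date(2021, 12, 31). For ambiguous inputs such as
-# "01/02/2021" the first matching format wins, so it parses as %m/%d/%Y (January 2, 2021).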
-
-# %%
-# Main function
-def main(files):
- # Get a list of all the CSV files uploaded
- prax = [0,0,0,0,0,0,0]
- for idx, file in enumerate(files):
- print(f"File #{idx+1}: {file}")
- print(file.name)
- df = pd.read_csv(file.name)
- print(df['Ticker'][0])
- prax[0] = df['Ticker'][0]
- prax[1] = df['Close'][len(df)-1]
- print('------------------')
- #df = df.drop(['EMARSI'], axis=1)
- #df['Date/Time'] = pd.to_datetime(df['Date/Time'])
-        # Normalise mixed date formats to ISO strings before converting to datetime
-        df['Date/Time'] = df['Date/Time'].apply(lambda s: guess_date(s).strftime("%Y-%m-%d"))
- df['Date/Time'] = pd.to_datetime(df['Date/Time'])
- df.fillna(0, inplace=True)
- #df.to_csv('out.csv')
- modelTFT(df, prax)
- prax[2] = "TFT"
- generate_csv(prax)
- modelTFT_OpenGap(df, prax)
- prax[2] = "TFT_OpenGap"
- generate_csv(prax)
- # Generate blank line
- prax=["","","","","","",""]
- generate_csv(prax)
- # Reset prax
- prax = [0,0,0,0,0,0,0]
- today = date.today().strftime("%Y_%m_%d")
- return f"result_{today}.csv"
-
-gradioApp = gr.Interface(fn=main, inputs=gr.File(file_count="multiple"), outputs="file")
-
-if __name__ == "__main__":
- # Calling main function
- gradioApp.launch()
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/indexed_datasets.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/indexed_datasets.py
deleted file mode 100644
index 13e3b42bde738c656654ebad803916fbb119f221..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/indexed_datasets.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import pickle
-from copy import deepcopy
-
-import numpy as np
-
-
-class IndexedDataset:
- def __init__(self, path, num_cache=0):
- super().__init__()
- self.path = path
- self.data_file = None
- self.data_offsets = np.load(f"{path}.idx", allow_pickle=True).item()['offsets']
- self.data_file = open(f"{path}.data", 'rb', buffering=-1)
- # self.cache = []
- self.cache = {}
- self.num_cache = num_cache
-
- def check_index(self, i):
- if i < 0 or i >= len(self.data_offsets) - 1:
- raise IndexError('index out of range')
-
- def __del__(self):
- if self.data_file:
- self.data_file.close()
-
- def __getitem__(self, i):
- self.check_index(i)
-
- if self.num_cache > 0:
- if i in self.cache.keys():
- return self.cache[i]
- # for c in self.cache:
- # if c[0] == i:
- # return c[1]
- self.data_file.seek(self.data_offsets[i])
- b = self.data_file.read(self.data_offsets[i + 1] - self.data_offsets[i])
- item = pickle.loads(b)
- if self.num_cache > 0 and len(self.cache) < self.num_cache:
- if i not in self.cache.keys():
- self.cache[i] = deepcopy(item)
- # self.cache = [(i, deepcopy(item))] + self.cache[:-1]
- return item
-
- def __len__(self):
- return len(self.data_offsets) - 1
-
-class IndexedDatasetBuilder:
- def __init__(self, path):
- self.path = path
- self.out_file = open(f"{path}.data", 'wb')
- self.byte_offsets = [0]
-
- def add_item(self, item):
- s = pickle.dumps(item)
-        num_bytes = self.out_file.write(s)  # avoid shadowing the built-in `bytes`
-        self.byte_offsets.append(self.byte_offsets[-1] + num_bytes)
-
- def finalize(self):
- self.out_file.close()
- np.save(open(f"{self.path}.idx", 'wb'), {'offsets': self.byte_offsets})
-
-
-if __name__ == "__main__":
- import random
- from tqdm import tqdm
- ds_path = '/tmp/indexed_ds_example'
- size = 100
- items = [{"a": np.random.normal(size=[10000, 10]),
- "b": np.random.normal(size=[10000, 10])} for i in range(size)]
- builder = IndexedDatasetBuilder(ds_path)
- for i in tqdm(range(size)):
- builder.add_item(items[i])
- builder.finalize()
- ds = IndexedDataset(ds_path)
- for i in tqdm(range(10000)):
- idx = random.randint(0, size - 1)
- assert (ds[idx]['a'] == items[idx]['a']).all()
diff --git a/spaces/AISuperheroes/03GR-Chatbot-Memory/README.md b/spaces/AISuperheroes/03GR-Chatbot-Memory/README.md
deleted file mode 100644
index 2b59bc76dfa4dab0a8ff08e09d13a4359925d52c..0000000000000000000000000000000000000000
--- a/spaces/AISuperheroes/03GR-Chatbot-Memory/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 03GR Chatbot Memory
-emoji: ⚡
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.6
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Provider.py b/spaces/ASJMO/freegpt/g4f/Provider/Provider.py
deleted file mode 100644
index d24df76b6a6ccfc9b244f13a51bfc124b398a271..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Provider.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import os
-from ..typing import sha256, Dict, get_type_hints
-
-url = None
-model = None
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- return
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_e-p6_syncbn_fast_8x16b-300e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_e-p6_syncbn_fast_8x16b-300e_coco.py
deleted file mode 100644
index 3d1463dc487e05eabfd3f586a28262017a9dc566..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_e-p6_syncbn_fast_8x16b-300e_coco.py
+++ /dev/null
@@ -1,19 +0,0 @@
-_base_ = './yolov7_w-p6_syncbn_fast_8x16b-300e_coco.py'
-
-model = dict(
- backbone=dict(arch='E'),
- neck=dict(
- use_maxpool_in_downsample=True,
- use_in_channels_in_downsample=True,
- block_cfg=dict(
- type='ELANBlock',
- middle_ratio=0.4,
- block_ratio=0.2,
- num_blocks=6,
- num_convs_in_block=1),
- in_channels=[320, 640, 960, 1280],
- out_channels=[160, 320, 480, 640]),
- bbox_head=dict(
- head_module=dict(
- in_channels=[160, 320, 480, 640],
- main_out_channels=[320, 640, 960, 1280])))
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py
deleted file mode 100644
index 71851ab711a54faae5b9b07825928ea9b2e957f8..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py
+++ /dev/null
@@ -1,172 +0,0 @@
-_base_ = [
- '../../../_base_/default_runtime.py',
- '../../../_base_/datasets/deepfashion2.py'
-]
-
-default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater'))
-
-resume = False # resume from a checkpoint
-load_from = None # model weights to load
-train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) # training epochs and validation interval
-param_scheduler = [
-    dict( # warmup strategy
- type='LinearLR',
- begin=0,
- end=500,
- start_factor=0.001,
- by_epoch=False),
- dict( # scheduler
- type='MultiStepLR',
- begin=0,
- end=120,
- milestones=[80, 100],
- gamma=0.1,
- by_epoch=True)
-]
-optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # optimizer and learning rate
-auto_scale_lr = dict(base_batch_size=512) # automatically scale the learning rate with the batch size
-
-backend_args = dict(backend='local') # data loading backend; defaults to loading from local disk
-dataset_type = 'DeepFashion2Dataset' # dataset class name
-data_mode = 'topdown' # algorithm type, determines how annotation information is loaded
-data_root = 'data/deepfashion2/' # data root path
-# data codec: generates targets and decodes predictions; also holds the input image and output heatmap sizes
-codec = dict(
- type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
-
-train_pipeline = [
- dict(type='LoadImage'),
- dict(type='GetBBoxCenterScale'),
- dict(type='RandomFlip', direction='horizontal'),
- dict(
- type='RandomBBoxTransform',
- shift_prob=0,
- rotate_factor=60,
- scale_factor=(0.75, 1.25)),
- dict(type='TopdownAffine', input_size=codec['input_size']),
- dict(type='GenerateTarget', encoder=codec),
- dict(type='PackPoseInputs')
-]
-val_pipeline = [ # data augmentation at test time
-    dict(type='LoadImage', backend_args=backend_args), # load the image
-    dict(type='GetBBoxCenterScale'), # get center and scale from the bbox
-    dict(type='TopdownAffine', input_size=codec['input_size']), # update the target data according to the transform matrix
-    dict(type='PackPoseInputs') # pack the targets for training
-]
-train_dataloader = dict( # training data loading
-    batch_size=64, # batch size
-    num_workers=6, # number of data loading workers
-    persistent_workers=True, # keep worker processes alive when idle to avoid the overhead of restarting them
-    sampler=dict(type='DefaultSampler', shuffle=True), # sampling strategy: shuffle the data
- dataset=dict(
-        type=dataset_type, # dataset class name
-        data_root=data_root, # dataset path
-        data_mode=data_mode, # algorithm type
-        ann_file='train/deepfashion2_skirt.json', # annotation file path
-        data_prefix=dict(img='train/image/'), # image directory
-        pipeline=train_pipeline # data pipeline
- ))
-val_dataloader = dict(
- batch_size=32,
- num_workers=6,
-    persistent_workers=True, # keep worker processes alive when idle to avoid the overhead of restarting them
-    drop_last=False,
-    sampler=dict(type='DefaultSampler', shuffle=False), # sampling strategy: no shuffling
-    dataset=dict(
-        type=dataset_type, # dataset class name
-        data_root=data_root, # dataset path
-        data_mode=data_mode, # algorithm type
-        ann_file='validation/deepfashion2_skirt.json', # annotation file path
-        data_prefix=dict(img='validation/image/'), # image directory
-        test_mode=True, # test mode switch
-        pipeline=val_pipeline # data pipeline
-    ))
-test_dataloader = val_dataloader # by default the validation and test sets are not distinguished; define them separately if needed
-
-channel_cfg = dict(
- num_output_channels=294,
- dataset_joints=294,
- dataset_channel=[
- [
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
- 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
- 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52,
- 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69,
- 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86,
- 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102,
- 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115,
- 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128,
- 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141,
- 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154,
- 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167,
- 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180,
- 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193,
- 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206,
- 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232,
- 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245,
- 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258,
- 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271,
- 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284,
- 285, 286, 287, 288, 289, 290, 291, 292, 293
- ],
- ],
- inference_channel=[
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
- 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
- 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
- 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
- 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
- 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
- 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
- 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
- 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
- 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
- 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
- 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
- 290, 291, 292, 293
- ])
-
-model = dict(
-    type='TopdownPoseEstimator', # the model structure determines the algorithm workflow
-    data_preprocessor=dict( # data normalization and channel-order adjustment, performed as part of the model
- type='PoseDataPreprocessor',
- mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- bgr_to_rgb=True),
- backbone=dict(
- type='ResNet',
- depth=50,
- init_cfg=dict(
-            type='Pretrained', # pretrained weights; only the backbone weights are loaded for transfer learning
- checkpoint='torchvision://resnet50')),
-    head=dict( # model head
- type='HeatmapHead',
- in_channels=2048,
- out_channels=channel_cfg['num_output_channels'],
- # deconv_out_channels=None,
-        loss=dict(type='KeypointMSELoss', use_target_weight=True), # loss function
-        decoder=codec), # decoder that converts heatmaps into coordinates
-    test_cfg=dict(
-        flip_test=True, # enable horizontal-flip test-time ensembling
-        flip_mode='heatmap', # flip the heatmap
-        shift_heatmap=True, # shift the flipped result to improve accuracy
- ))
-
-val_evaluator = [
- dict(type='PCKAccuracy', thr=0.2),
- dict(type='AUC'),
- dict(type='EPE'),
-]
-test_evaluator = val_evaluator # by default the validation and test sets are not distinguished; define them separately if needed
-
-visualizer = dict(
- vis_backends=[dict(type='LocalVisBackend'),
- dict(type='WandbVisBackend')])
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_mixup.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_mixup.py
deleted file mode 100644
index 23130a69c98823a6979dcd7ee7441746753a9865..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_mixup.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# model settings
-model = dict(
- type='ImageClassifier',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(3, ),
- style='pytorch'),
- neck=dict(type='GlobalAveragePooling'),
- head=dict(
- type='MultiLabelLinearClsHead',
- num_classes=1000,
- in_channels=2048,
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)),
- train_cfg=dict(augments=dict(type='Mixup', alpha=0.2)),
-)
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/Theb.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/Theb.py
deleted file mode 100644
index c35ea5929774009f2b434ca8c2877d4207046a3d..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/needs_auth/Theb.py
+++ /dev/null
@@ -1,97 +0,0 @@
-from __future__ import annotations
-
-import json
-import random
-
-import requests
-
-from ...typing import Any, CreateResult
-from ..base_provider import BaseProvider
-
-
-class Theb(BaseProvider):
- url = "https://theb.ai"
- working = True
- supports_stream = True
- supports_gpt_35_turbo = True
- needs_auth = True
-
- @staticmethod
- def create_completion(
- model: str,
- messages: list[dict[str, str]],
- stream: bool, **kwargs: Any) -> CreateResult:
-
- conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
- conversation += "\nassistant: "
-
- auth = kwargs.get("auth", {
- "bearer_token":"free",
- "org_id":"theb",
- })
-
- bearer_token = auth["bearer_token"]
- org_id = auth["org_id"]
-
- headers = {
- 'authority' : 'beta.theb.ai',
- 'accept' : 'text/event-stream',
- 'accept-language' : 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7',
- 'authorization' : 'Bearer '+bearer_token,
- 'content-type' : 'application/json',
- 'origin' : 'https://beta.theb.ai',
- 'referer' : 'https://beta.theb.ai/home',
- 'sec-ch-ua' : '"Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"',
- 'sec-ch-ua-mobile' : '?0',
- 'sec-ch-ua-platform': '"Windows"',
- 'sec-fetch-dest' : 'empty',
- 'sec-fetch-mode' : 'cors',
- 'sec-fetch-site' : 'same-origin',
- 'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
- 'x-ai-model' : 'ee8d4f29cb7047f78cbe84313ed6ace8',
- }
-
- req_rand = random.randint(100000000, 9999999999)
-
- json_data: dict[str, Any] = {
- "text" : conversation,
- "category" : "04f58f64a4aa4191a957b47290fee864",
- "model" : "ee8d4f29cb7047f78cbe84313ed6ace8",
- "model_params": {
- "system_prompt" : "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-3.5 architecture.\nKnowledge cutoff: 2021-09\nCurrent date: {{YYYY-MM-DD}}",
- "temperature" : kwargs.get("temperature", 1),
- "top_p" : kwargs.get("top_p", 1),
- "frequency_penalty" : kwargs.get("frequency_penalty", 0),
- "presence_penalty" : kwargs.get("presence_penalty", 0),
- "long_term_memory" : "auto"
- }
- }
-
- response = requests.post(f"https://beta.theb.ai/api/conversation?org_id={org_id}&req_rand={req_rand}",
- headers=headers, json=json_data, stream=True)
-
- response.raise_for_status()
- content = ""
- next_content = ""
- for chunk in response.iter_lines():
- if b"content" in chunk:
- next_content = content
- data = json.loads(chunk.decode().split("data: ")[1])
- content = data["content"]
- yield data["content"].replace(next_content, "")
-
- @classmethod
- @property
- def params(cls):
- params = [
- ("model", "str"),
- ("messages", "list[dict[str, str]]"),
- ("auth", "list[dict[str, str]]"),
- ("stream", "bool"),
- ("temperature", "float"),
- ("presence_penalty", "int"),
- ("frequency_penalty", "int"),
- ("top_p", "int")
- ]
- param = ", ".join([": ".join(p) for p in params])
- return f"g4f.provider.{cls.__name__} supports: ({param})"
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/text-to-speech-client/assets/index-5644c887.css b/spaces/AchyuthGamer/text-to-speech-client/assets/index-5644c887.css
deleted file mode 100644
index a5e21b3c7de305d425a0a5bb9d399030308004ed..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/text-to-speech-client/assets/index-5644c887.css
+++ /dev/null
@@ -1 +0,0 @@
-*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-feature-settings:inherit;font-variation-settings:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}dialog{padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 
0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.static{position:static}.absolute{position:absolute}.relative{position:relative}.left-0{left:0}.top-0{top:0}.z-10{z-index:10}.z-50{z-index:50}.m-2{margin:.5rem}.my-4{margin-top:1rem;margin-bottom:1rem}.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.block{display:block}.flex{display:flex}.h-14{height:3.5rem}.h-full{height:100%}.min-h-screen{min-height:100vh}.w-\[1\%\]{width:1%}.w-full{width:100%}.max-w-xl{max-width:36rem}.cursor-not-allowed{cursor:not-allowed}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.gap-1{gap:.25rem}.overflow-hidden{overflow:hidden}.whitespace-nowrap{white-space:nowrap}.rounded-lg{border-radius:.5rem}.rounded-md{border-radius:.375rem}.border{border-width:1px}.border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.bg-blue-500{--tw-bg-opacity: 1;background-color:rgb(59 130 246 / var(--tw-bg-opacity))}.bg-gray-100{--tw-bg-opacity: 1;background-color:rgb(243 244 246 / var(--tw-bg-opacity))}.bg-gray-400{--tw-bg-opacity: 1;background-color:rgb(156 163 175 / var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-8{padding:2rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-4{padding-left:1rem;padding-right:1rem}.px-8{padding-left:2rem;padding-right:2rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.text-left{text-align:left}.text-center{text-align:center}.text-3xl{font-size:1.875rem;line-height:2.25rem}.text-base{font-size:1rem;line-height:1.5rem}.text-sm{font-size:.875rem;line-height:1.25rem}.text-xl{font-size:1.25rem;line-height:1.75rem}.font-medium{font-weight:500}.font-semibold{font-weight:600}.text-black{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}.text-gray-600{--tw-text-opacity: 1;color:rgb(75 85 99 / var(--tw-text-opacity))}.text-gray-700{--tw-text-opacity: 1;color:rgb(55 65 81 / var(--tw-text-opacity))}.text-gray-800{--tw-text-opacity: 1;color:rgb(31 41 55 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.shadow-lg{--tw-shadow: 0 10px 15px -3px rgb(0 0 0 / .1), 0 4px 6px -4px rgb(0 0 0 / .1);--tw-shadow-colored: 0 10px 15px -3px var(--tw-shadow-color), 0 4px 6px -4px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.shadow-xl{--tw-shadow: 0 20px 25px -5px rgb(0 0 0 / .1), 0 8px 10px -6px rgb(0 0 0 / .1);--tw-shadow-colored: 0 20px 25px -5px var(--tw-shadow-color), 0 8px 10px -6px 
var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.shadow-black\/5{--tw-shadow-color: rgb(0 0 0 / .05);--tw-shadow: var(--tw-shadow-colored)}.ring-1{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.ring-slate-700\/10{--tw-ring-color: rgb(51 65 85 / .1)}.blur{--tw-blur: blur(8px);filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}:root{font-family:Inter,system-ui,Avenir,Helvetica,Arial,sans-serif;line-height:1.5;font-weight:400;color:#213547;background-color:#fff;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}audio::-webkit-media-controls-panel{background-color:#fff}.hover\:bg-blue-600:hover{--tw-bg-opacity: 1;background-color:rgb(37 99 235 / var(--tw-bg-opacity))}
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/AddChildrenMap.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/AddChildrenMap.js
deleted file mode 100644
index 2a234643e1ea5779a769871b2e6929928207ade5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/AddChildrenMap.js
+++ /dev/null
@@ -1,6 +0,0 @@
-var AddChildrenMap = function (key, gameObject) {
- this.childrenMap[key] = gameObject;
- return this;
-}
-
-export default AddChildrenMap;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/TableSetInteractive.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/TableSetInteractive.js
deleted file mode 100644
index 610fb992262378ad97c17e3d4a2bda96eb3aa1e3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/input/TableSetInteractive.js
+++ /dev/null
@@ -1,19 +0,0 @@
-import PointerUpDownCell from './PointerUpDownCell.js';
-import OverCell from './OverCell.js';
-import ClickCell from './ClickCell.js';
-import TapCell from './TapCell.js';
-import PressCell from './PressCell.js';
-import SwipeCell from './SwipeCell.js';
-
-var TableSetInteractive = function (table, tableConfig) {
- table.setInteractive();
-
- PointerUpDownCell.call(this, table, tableConfig);
- OverCell.call(this, table, tableConfig);
- ClickCell.call(this, table, tableConfig);
- TapCell.call(this, table, tableConfig);
- PressCell.call(this, table, tableConfig);
- SwipeCell.call(this, table, tableConfig);
-}
-
-export default TableSetInteractive;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simpledropdownlist/SimpleDropDownList.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simpledropdownlist/SimpleDropDownList.js
deleted file mode 100644
index 24ce1fd6882ab0aca271a257b205823ab5696725..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/simpledropdownlist/SimpleDropDownList.js
+++ /dev/null
@@ -1,27 +0,0 @@
-import DropDownList from '../dropdownlist/DropDownList.js';
-import BuildListConfig from '../utils/build/BuildListConfig.js';
-
-class SimpleDropDownList extends DropDownList {
- constructor(scene, config, creators) {
- config = BuildListConfig(scene, config, creators);
- super(scene, config);
- this.type = 'rexSimpleDropDownList';
- }
-
- setOptions(options) {
- if (options === undefined) {
- options = [];
- }
- for (var i = 0, cnt = options.length; i < cnt; i++) {
- var option = options[i];
- if (typeof (option) === 'string') {
- options[i] = { text: option, value: option };
- }
- }
- super.setOptions(options);
- return this;
- }
-
-}
-
-export default SimpleDropDownList;
\ No newline at end of file
diff --git a/spaces/Alashazam/Harmony/app.py b/spaces/Alashazam/Harmony/app.py
deleted file mode 100644
index 60eb85c96db04076e6b25e98e48fed18877f7827..0000000000000000000000000000000000000000
--- a/spaces/Alashazam/Harmony/app.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import gradio
-
-class Model:
- def __init__(self, name, path="", prefix=""):
- self.name = name
- self.path = path
- self.prefix = prefix
-
-models = [
- Model("Marvel","models/ItsJayQz/Marvel_WhatIf_Diffusion", "whatif style"),
- Model("Cyberpunk Anime Diffusion", "models/DGSpitzer/Cyberpunk-Anime-Diffusion", "dgs illustration style"),
- Model("Portrait plus", "models/wavymulder/portraitplus", "portrait+ style"),
- Model("CF25", "models/gsdf/Counterfeit-V2.5", "anime style"),
- Model("vintedois", "models/22h/vintedois-diffusion-v0-1", "vintedois style"),
- Model("dreamlike", "models/dreamlike-art/dreamlike-diffusion-1.0","dreamlike style"),
- #Model("Orange Mix","models/WarriorMama777/OrangeMixs", "OrangeMixs style"),
- Model("GTA5","models/ItsJayQz/GTA5_Artwork_Diffusion", "GTA5 style")
-]
-
-model1=[]
-model2=[]
-model3=[]
-
-for i in range(len(models)):
- model3.append(models[i].name)
- model1.append(gradio.Interface.load(models[i].path))
- model2.append(models[i].prefix)
-
-def process1(prompt, modelSelected):
- if (modelSelected==''):
- modelSelected = "Marvel"
- model_idx=model3.index(modelSelected)
- prompt+=", in "+model2[model_idx]
- image_return = model1[model_idx](prompt)
- return image_return
-
-sandbox = gradio.Interface(fn=process1,
- inputs=[gradio.Textbox(label="Enter Prompt:"), gradio.Dropdown(model3)],
- outputs=[gradio.Image(label="Produced Image")],
- title='Text to Image',
- examples=[["Portrait close up, Elvis Presley, concert hall in the background", "GTA5"],
- ["Marvel Blackwidow portrait close up. building city background", "Marvel"],
- ["A white rabbit wizard, Hogwart University, Castle in the background", "dreamlike"]])
-
-sandbox.queue(concurrency_count=20).launch()
diff --git a/spaces/AlhitawiMohammed22/HTD_HTR/app.py b/spaces/AlhitawiMohammed22/HTD_HTR/app.py
deleted file mode 100644
index af57b2af5ff9ea5aab56abb028c4199c5ecc8a5a..0000000000000000000000000000000000000000
--- a/spaces/AlhitawiMohammed22/HTD_HTR/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import os
-os.environ["USE_TORCH"] = "1"
-os.environ["USE_TF"] = "0"
-import torch
-from torch.utils.data.dataloader import DataLoader
-
-from builder import DocumentBuilder
-from trocr import IAMDataset, device, get_processor_model
-from doctr.utils.visualization import visualize_page
-from doctr.models.predictor.base import _OCRPredictor
-from doctr.models.detection.predictor import DetectionPredictor
-from doctr.models.preprocessor import PreProcessor
-from doctr.models import db_resnet50, db_mobilenet_v3_large
-
-from doctr.io import DocumentFile
-import numpy as np
-import cv2
-import matplotlib.pyplot as plt
-import streamlit as st
-
-DET_ARCHS = ["db_resnet50", "db_mobilenet_v3_large"]
-RECO_ARCHS = ["microsoft/trocr-large-printed", "microsoft/trocr-large-stage1", "microsoft/trocr-large-handwritten"]
-
-
-def main():
- # Wide mode
- st.set_page_config(layout="wide")
- # Designing the interface
- st.title("docTR + TrOCR")
- # For newline
- st.write('\n')
- #
- st.write('For Detection DocTR: https://github.com/mindee/doctr')
- # For newline
- st.write('\n')
- st.write('For Recognition TrOCR: https://github.com/microsoft/unilm/tree/master/trocr')
- # For newline
- st.write('\n')
-
- st.write('Any Issue please dm')
- # For newline
- st.write('\n')
- # Instructions
- st.markdown(
- "*Hint: click on the top-right corner of an image to enlarge it!*")
- # Set the columns
- cols = st.columns((1, 1, 1))
- cols[0].subheader("Input page")
- cols[1].subheader("Segmentation heatmap")
-
- # Sidebar
- # File selection
- st.sidebar.title("Document selection")
- # Disabling warning
- st.set_option('deprecation.showfileUploaderEncoding', False)
- # Choose your own image
- uploaded_file = st.sidebar.file_uploader(
- "Upload files", type=['pdf', 'png', 'jpeg', 'jpg'])
- if uploaded_file is not None:
- if uploaded_file.name.endswith('.pdf'):
- doc = DocumentFile.from_pdf(uploaded_file.read()).as_images()
- else:
- doc = DocumentFile.from_images(uploaded_file.read())
- page_idx = st.sidebar.selectbox(
- "Page selection", [idx + 1 for idx in range(len(doc))]) - 1
- cols[0].image(doc[page_idx])
- # Model selection
- st.sidebar.title("Model selection")
- det_arch = st.sidebar.selectbox("Text detection model", DET_ARCHS)
- rec_arch = st.sidebar.selectbox("Text recognition model", RECO_ARCHS)
- # For newline
- st.sidebar.write('\n')
- if st.sidebar.button("Analyze page"):
- if uploaded_file is None:
- st.sidebar.write("Please upload a document")
- else:
- with st.spinner('Loading model...'):
- if det_arch == "db_resnet50":
- det_model = db_resnet50(pretrained=True)
- else:
- det_model = db_mobilenet_v3_large(pretrained=True)
- det_predictor = DetectionPredictor(PreProcessor((1024, 1024), batch_size=1, mean=(0.798, 0.785, 0.772), std=(0.264, 0.2749, 0.287)), det_model)
- rec_processor, rec_model = get_processor_model(rec_arch)
- with st.spinner('Analyzing...'):
- # Forward the image to the model
- processed_batches = det_predictor.pre_processor([doc[page_idx]])
- out = det_predictor.model(processed_batches[0], return_model_output=True)
- seg_map = out["out_map"]
- seg_map = torch.squeeze(seg_map[0, ...], axis=0)
- seg_map = cv2.resize(seg_map.detach().numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),
- interpolation=cv2.INTER_LINEAR)
- # Plot the raw heatmap
- fig, ax = plt.subplots()
- ax.imshow(seg_map)
- ax.axis('off')
- cols[1].pyplot(fig)
-
- # Plot OCR output
- # Localize text elements
- loc_preds = out["preds"]
-
- # Check whether crop mode should be switched to channels first
- channels_last = len(doc) == 0 or isinstance(doc[0], np.ndarray)
-
- # Crop images
- crops, loc_preds = _OCRPredictor._prepare_crops(
- doc, loc_preds, channels_last=channels_last, assume_straight_pages=True
- )
-
- test_dataset = IAMDataset(crops[0], rec_processor)
- test_dataloader = DataLoader(test_dataset, batch_size=16)
-
- text = []
- with torch.no_grad():
- for batch in test_dataloader:
- pixel_values = batch["pixel_values"].to(device)
- generated_ids = rec_model.generate(pixel_values)
- generated_text = rec_processor.batch_decode(
- generated_ids, skip_special_tokens=True)
- text.extend(generated_text)
- boxes, text_preds = _OCRPredictor._process_predictions(
- loc_preds, text)
-
- doc_builder = DocumentBuilder()
- out = doc_builder(
- boxes,
- text_preds,
- [
- # type: ignore[misc]
- page.shape[:2] if channels_last else page.shape[-2:]
- for page in [doc[page_idx]]
- ]
- )
-
- for df in out:
- st.markdown("text")
- st.write(" ".join(df["word"].to_list()))
- st.write('\n')
- st.markdown("\n Dataframe Output- similar to Tesseract:")
- st.dataframe(df)
-
-
-
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/models_face.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/models_face.py
deleted file mode 100644
index f9ba50f96041a163ac974b0c54b4985069b554f3..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/models_face.py
+++ /dev/null
@@ -1,819 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-import math
-import random
-import functools
-import operator
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-import torch.nn.init as init
-from torch.autograd import Function
-
-from .op_edit import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-
-class PixelNorm(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, input):
- return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8)
-
-
-def make_kernel(k):
- k = torch.tensor(k, dtype=torch.float32)
-
- if k.ndim == 1:
- k = k[None, :] * k[:, None]
-
- k /= k.sum()
-
- return k
-
-
-class Upsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel) * (factor ** 2)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=self.factor,
- down=1, pad=self.pad)
-
- return out
-
-
-class Downsample(nn.Module):
- def __init__(self, kernel, factor=2):
- super().__init__()
-
- self.factor = factor
- kernel = make_kernel(kernel)
- self.register_buffer("kernel", kernel)
-
- p = kernel.shape[0] - factor
-
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.pad = (pad0, pad1)
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, up=1,
- down=self.factor, pad=self.pad)
-
- return out
-
-
-class Blur(nn.Module):
- def __init__(self, kernel, pad, upsample_factor=1):
- super().__init__()
-
- kernel = make_kernel(kernel)
-
- if upsample_factor > 1:
- kernel = kernel * (upsample_factor ** 2)
-
- self.register_buffer("kernel", kernel)
-
- self.pad = pad
-
- def forward(self, input):
- out = upfirdn2d(input, self.kernel, pad=self.pad)
-
- return out
-
-
-class EqualConv2d(nn.Module):
- def __init__(
- self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True
- ):
- super().__init__()
-
- self.weight = nn.Parameter(
- torch.randn(out_channel, in_channel, kernel_size, kernel_size)
- )
- self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
-
- self.stride = stride
- self.padding = padding
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_channel))
-
- else:
- self.bias = None
-
- def forward(self, input):
- out = F.conv2d(
- input,
- self.weight * self.scale,
- bias=self.bias,
- stride=self.stride,
- padding=self.padding,
- )
-
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},"
- f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})"
- )
-
-
-class EqualLinear(nn.Module):
- def __init__(
- self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None
- ):
- super().__init__()
-
- self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init))
-
- else:
- self.bias = None
-
- self.activation = activation
-
- self.scale = (1 / math.sqrt(in_dim)) * lr_mul
- self.lr_mul = lr_mul
-
- def forward(self, input):
- if self.activation:
- out = F.linear(input, self.weight * self.scale)
- out = fused_leaky_relu(out, self.bias * self.lr_mul)
-
- else:
- out = F.linear(
- input, self.weight * self.scale, bias=self.bias * self.lr_mul
- )
-
- return out
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})"
- )
-
-
-class ScaledLeakyReLU(nn.Module):
- def __init__(self, negative_slope=0.2):
- super().__init__()
-
- self.negative_slope = negative_slope
-
- def forward(self, input):
- out = F.leaky_relu(input, negative_slope=self.negative_slope)
-
- return out * math.sqrt(2)
-
-
-class ModulatedConv2d(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- demodulate=True,
- upsample=False,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- ):
- super().__init__()
-
- self.eps = 1e-8
- self.kernel_size = kernel_size
- self.in_channel = in_channel
- self.out_channel = out_channel
- self.upsample = upsample
- self.downsample = downsample
-
- if upsample:
- factor = 2
- p = (len(blur_kernel) - factor) - (kernel_size - 1)
- pad0 = (p + 1) // 2 + factor - 1
- pad1 = p // 2 + 1
-
- self.blur = Blur(blur_kernel, pad=(
- pad0, pad1), upsample_factor=factor)
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- self.blur = Blur(blur_kernel, pad=(pad0, pad1))
-
- fan_in = in_channel * kernel_size ** 2
- self.scale = 1 / math.sqrt(fan_in)
- self.padding = kernel_size // 2
-
- self.weight = nn.Parameter(
- torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)
- )
-
- self.modulation = EqualLinear(style_dim, in_channel, bias_init=1)
-
- self.demodulate = demodulate
-
- def __repr__(self):
- return (
- f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, "
- f"upsample={self.upsample}, downsample={self.downsample})"
- )
-
- def forward(self, input, style):
- batch, in_channel, height, width = input.shape
-
- style = self.modulation(style).view(batch, 1, in_channel, 1, 1)
- weight = self.scale * self.weight * style
-
- if self.demodulate:
- demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8)
- weight = weight * demod.view(batch, self.out_channel, 1, 1, 1)
-
- weight = weight.view(
- batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
-
- if self.upsample:
- input = input.view(1, batch * in_channel, height, width)
- weight = weight.view(
- batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size
- )
- weight = weight.transpose(1, 2).reshape(
- batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size
- )
- out = F.conv_transpose2d(
- input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
- out = self.blur(out)
-
- elif self.downsample:
- input = self.blur(input)
- _, _, height, width = input.shape
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=0, stride=2, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- else:
- input = input.view(1, batch * in_channel, height, width)
- out = F.conv2d(input, weight, padding=self.padding, groups=batch)
- _, _, height, width = out.shape
- out = out.view(batch, self.out_channel, height, width)
-
- return out
-
-
-class NoiseInjection(nn.Module):
- def __init__(self):
- super().__init__()
-
- self.weight = nn.Parameter(torch.zeros(1))
-
- def forward(self, image, noise=None):
- if noise is None:
- batch, _, height, width = image.shape
- noise = image.new_empty(batch, 1, height, width).normal_()
-
- return image + self.weight * noise
-
-
-class ConstantInput(nn.Module):
- def __init__(self, channel, size=4):
- super().__init__()
-
- self.input = nn.Parameter(torch.randn(1, channel, size, size))
-
- def forward(self, input):
- batch = input.shape[0]
- out = self.input.repeat(batch, 1, 1, 1)
-
- return out
-
-
-class StyledConv(nn.Module):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=False,
- blur_kernel=[1, 3, 3, 1],
- demodulate=True,
- ):
- super().__init__()
-
- self.conv = ModulatedConv2d(
- in_channel,
- out_channel,
- kernel_size,
- style_dim,
- upsample=upsample,
- blur_kernel=blur_kernel,
- demodulate=demodulate,
- )
-
- self.noise = NoiseInjection()
- # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1))
- # self.activate = ScaledLeakyReLU(0.2)
- self.activate = FusedLeakyReLU(out_channel)
-
- def forward(self, input, style, noise=None):
- out = self.conv(input, style)
- out = self.noise(out, noise=noise)
- # out = out + self.bias
- out = self.activate(out)
-
- return out
-
-
-class ToRGB(nn.Module):
- def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- if upsample:
- self.upsample = Upsample(blur_kernel)
-
- self.conv = ModulatedConv2d(
- in_channel, 3, 1, style_dim, demodulate=False)
- self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
-
- def forward(self, input, style, skip=None):
- out = self.conv(input, style)
- out = out + self.bias
-
- if skip is not None:
- skip = self.upsample(skip)
-
- out = out + skip
-
- return out
-
-
-class Generator(nn.Module):
- def __init__(
- self,
- size,
- style_dim,
- n_mlp,
- channel_multiplier=1,
- blur_kernel=[1, 3, 3, 1],
- lr_mlp=0.01,
- small=False,
- small_isaac=False,
- ):
- super().__init__()
-
- self.size = size
-
- if small and size > 64:
- raise ValueError("small only works for sizes <= 64")
-
- self.style_dim = style_dim
-
- layers = [PixelNorm()]
-
- for i in range(n_mlp):
- layers.append(
- EqualLinear(
- style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu"
- )
- )
-
- self.style = nn.Sequential(*layers)
-
- if small:
- self.channels = {
- 4: 64 * channel_multiplier,
- 8: 64 * channel_multiplier,
- 16: 64 * channel_multiplier,
- 32: 64 * channel_multiplier,
- 64: 64 * channel_multiplier,
- }
- elif small_isaac:
- self.channels = {4: 256, 8: 256,
- 16: 256, 32: 256, 64: 128, 128: 128}
- else:
- self.channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- self.input = ConstantInput(self.channels[4])
- self.conv1 = StyledConv(
- self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel
- )
- self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False)
-
- self.log_size = int(math.log(size, 2))
- self.num_layers = (self.log_size - 2) * 2 + 1
-
- self.convs = nn.ModuleList()
- self.upsamples = nn.ModuleList()
- self.to_rgbs = nn.ModuleList()
- self.noises = nn.Module()
-
- in_channel = self.channels[4]
-
- for layer_idx in range(self.num_layers):
- res = (layer_idx + 5) // 2
- shape = [1, 1, 2 ** res, 2 ** res]
- self.noises.register_buffer(
- "noise_{}".format(layer_idx), torch.randn(*shape)
- )
-
- for i in range(3, self.log_size + 1):
- out_channel = self.channels[2 ** i]
-
- self.convs.append(
- StyledConv(
- in_channel,
- out_channel,
- 3,
- style_dim,
- upsample=True,
- blur_kernel=blur_kernel,
- )
- )
-
- self.convs.append(
- StyledConv(
- out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel
- )
- )
-
- self.to_rgbs.append(ToRGB(out_channel, style_dim))
-
- in_channel = out_channel
-
- self.n_latent = self.log_size * 2 - 2
-
- def make_noise(self):
- device = self.input.input.device
-
- noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)]
-
- for i in range(3, self.log_size + 1):
- for _ in range(2):
- noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device))
-
- return noises
-
- def mean_latent(self, n_latent):
- latent_in = torch.randn(
- n_latent, self.style_dim, device=self.input.input.device
- )
- latent = self.style(latent_in).mean(0, keepdim=True)
-
- return latent
-
- def get_latent(self, input):
- return self.style(input)
-
- def forward(
- self,
- styles,
- return_latents=False,
- return_features=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- ):
- if not input_is_latent:
- # print("haha")
- styles = [self.style(s) for s in styles]
- if noise is None:
- if randomize_noise:
- noise = [None] * self.num_layers
- else:
- noise = [
- getattr(self.noises, "noise_{}".format(i))
- for i in range(self.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation *
- (style - truncation_latent)
- )
-
- styles = style_t
- # print(styles)
- if len(styles) < 2:
- inject_index = self.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- # print("a")
- else:
- # print(len(styles))
- latent = styles[0]
- # print("b", latent.shape)
-
- else:
- # print("c")
- if inject_index is None:
- inject_index = 4
-
- latent = styles[0].unsqueeze(0)
- if latent.shape[1] == 1:
- latent = latent.repeat(1, inject_index, 1)
- else:
- latent = latent[:, :inject_index, :]
- latent2 = styles[1].unsqueeze(1).repeat(
- 1, self.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
-
- features = {}
- out = self.input(latent)
- features["out_0"] = out
- out = self.conv1(out, latent[:, 0], noise=noise[0])
- features["conv1_0"] = out
-
- skip = self.to_rgb1(out, latent[:, 1])
- features["skip_0"] = skip
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs
- ):
- out = conv1(out, latent[:, i], noise=noise1)
- features["conv1_{}".format(i)] = out
- out = conv2(out, latent[:, i + 1], noise=noise2)
- features["conv2_{}".format(i)] = out
- skip = to_rgb(out, latent[:, i + 2], skip)
- features["skip_{}".format(i)] = skip
-
- i += 2
-
- image = skip
-
- if return_latents:
- return image, latent
- elif return_features:
- return image, features
- else:
- return image, None
-
-
-class ConvLayer(nn.Sequential):
- def __init__(
- self,
- in_channel,
- out_channel,
- kernel_size,
- downsample=False,
- blur_kernel=[1, 3, 3, 1],
- bias=True,
- activate=True,
- ):
- layers = []
-
- if downsample:
- factor = 2
- p = (len(blur_kernel) - factor) + (kernel_size - 1)
- pad0 = (p + 1) // 2
- pad1 = p // 2
-
- layers.append(Blur(blur_kernel, pad=(pad0, pad1)))
-
- stride = 2
- self.padding = 0
-
- else:
- stride = 1
- self.padding = kernel_size // 2
-
- layers.append(
- EqualConv2d(
- in_channel,
- out_channel,
- kernel_size,
- padding=self.padding,
- stride=stride,
- bias=bias and not activate,
- )
- )
-
- if activate:
- if bias:
- layers.append(FusedLeakyReLU(out_channel))
-
- else:
- layers.append(ScaledLeakyReLU(0.2))
-
- super().__init__(*layers)
-
-
-class ResBlock(nn.Module):
- def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
- super().__init__()
-
- self.conv1 = ConvLayer(in_channel, in_channel, 3)
- self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
-
- self.skip = ConvLayer(
- in_channel, out_channel, 1, downsample=True, activate=False, bias=False
- )
-
- def forward(self, input):
- out = self.conv1(input)
- out = self.conv2(out)
-
- skip = self.skip(input)
- out = (out + skip) / math.sqrt(2)
-
- return out
-
-
-class StyleDiscriminator(nn.Module):
- def __init__(
- self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], small=False
- ):
- super().__init__()
-
- if small:
- channels = {4: 64, 8: 64, 16: 64, 32: 64, 64: 64}
-
- else:
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256 * channel_multiplier,
- 128: 128 * channel_multiplier,
- 256: 64 * channel_multiplier,
- 512: 32 * channel_multiplier,
- 1024: 16 * channel_multiplier,
- }
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- log_size = int(math.log(size, 2))
-
- in_channel = channels[size]
-
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
-
- convs.append(ResBlock(in_channel, out_channel, blur_kernel))
-
- in_channel = out_channel
-
- self.convs = nn.Sequential(*convs)
-
- self.stddev_group = 4
- self.stddev_feat = 1
-
- self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
- self.final_linear = nn.Sequential(
- EqualLinear(channels[4] * 4 * 4, channels[4],
- activation="fused_lrelu"),
- EqualLinear(channels[4], 1),
- )
-
-# def forward(self, input):
-# out = self.convs(input)
-
-# batch, channel, height, width = out.shape
-# group = min(batch, self.stddev_group)
-# stddev = out.view(
-# group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
-# )
-# stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
-# stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
-# stddev = stddev.repeat(group, 1, height, width)
-# out = torch.cat([out, stddev], 1)
-
-# out = self.final_conv(out)
-
-# out = out.view(batch, -1)
-# out = self.final_linear(out)
-
-# return out
-
- def forward(self, input):
- h = input
- h_list = []
-
- for index, blocklist in enumerate(self.convs):
- h = blocklist(h)
- h_list.append(h)
-
- out = h
- batch, channel, height, width = out.shape
- group = min(batch, self.stddev_group)
- stddev = out.view(
- group, -1, self.stddev_feat, channel // self.stddev_feat, height, width
- )
- stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8)
- stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2)
- stddev = stddev.repeat(group, 1, height, width)
- out = torch.cat([out, stddev], 1)
-
- out = self.final_conv(out)
- h_list.append(out)
-
- out = out.view(batch, -1)
- out = self.final_linear(out)
-
- return out, h_list
-
-
-class StyleEncoder(nn.Module):
- def __init__(self, size, w_dim=512):
- super().__init__()
-
- channels = {
- 4: 512,
- 8: 512,
- 16: 512,
- 32: 512,
- 64: 256,
- 128: 128,
- 256: 64,
- 512: 32,
- 1024: 16
- }
-
- self.w_dim = w_dim
- log_size = int(math.log(size, 2))
-
- # self.n_latents = log_size*2 - 2
-
- convs = [ConvLayer(3, channels[size], 1)]
-
- in_channel = channels[size]
- for i in range(log_size, 2, -1):
- out_channel = channels[2 ** (i - 1)]
- convs.append(ResBlock(in_channel, out_channel))
- in_channel = out_channel
-
- # convs.append(EqualConv2d(in_channel, self.n_latents*self.w_dim, 4, padding=0, bias=False))
- convs.append(EqualConv2d(
- in_channel, 2*self.w_dim, 4, padding=0, bias=False))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, input):
- out = self.convs(input)
- # return out.view(len(input), self.n_latents, self.w_dim)
- reshaped = out.view(len(input), 2*self.w_dim)
- return reshaped[:, :self.w_dim], reshaped[:, self.w_dim:]
-
-
-def kaiming_init(m):
- if isinstance(m, (nn.Linear, nn.Conv2d)):
- init.kaiming_normal_(m.weight)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
- m.weight.data.fill_(1)
- if m.bias is not None:
- m.bias.data.fill_(0)
-
-
-def normal_init(m):
- if isinstance(m, (nn.Linear, nn.Conv2d)):
- init.normal_(m.weight, 0, 0.02)
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
- m.weight.data.fill_(1)
- if m.bias is not None:
- m.bias.data.fill_(0)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paradigms.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paradigms.md
deleted file mode 100644
index a56c02e70af35e2ff3da66dac8e7101cb578222b..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paradigms.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
-# Parallel Sampling of Diffusion Models
-
-[Parallel Sampling of Diffusion Models](https://huggingface.co/papers/2305.16317) is by Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari.
-
-The abstract from the paper is:
-
-*Diffusion models are powerful generative models but suffer from slow sampling, often taking 1000 sequential denoising steps for one sample. As a result, considerable efforts have been directed toward reducing the number of denoising steps, but these methods hurt sample quality. Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)? In spite of the sequential nature of the denoising steps, we show that surprisingly it is possible to parallelize sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence. With this insight, we present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel. ParaDiGMS is the first diffusion sampling method that enables trading compute for speed and is even compatible with existing fast sampling techniques such as DDIM and DPMSolver. Using ParaDiGMS, we improve sampling speed by 2-4x across a range of robotics and image generation models, giving state-of-the-art sampling speeds of 0.2s on 100-step DiffusionPolicy and 16s on 1000-step StableDiffusion-v2 with no measurable degradation of task reward, FID score, or CLIP score.*
-
-The original codebase can be found at [AndyShih12/paradigms](https://github.com/AndyShih12/paradigms), and the pipeline was contributed by [AndyShih12](https://github.com/AndyShih12). ❤️
-
-## Tips
-
-This pipeline improves sampling speed by running denoising steps in parallel, at the cost of increased total FLOPs.
-Therefore, it is better to call this pipeline when running on multiple GPUs. Otherwise, without enough GPU bandwidth,
-sampling may be even slower than sequential sampling.
-
-The two parameters to play with are `parallel` (batch size) and `tolerance`.
-- If it fits in memory, for a 1000-step DDPM you can aim for a batch size of around 100
-(for example, 8 GPUs and `batch_per_device=12` to get `parallel=96`). A higher batch size
-may not fit in memory, and a lower batch size gives less parallelism.
-- For tolerance, a higher tolerance may give better speedups but can risk sample quality degradation.
-If there is quality degradation with the default tolerance, use a lower tolerance such as `0.001`.
-
-For a 1000-step DDPM on 8 A100 GPUs, you can expect around a 3x speedup from [`StableDiffusionParadigmsPipeline`] compared to the [`StableDiffusionPipeline`]
-by setting `parallel=80` and `tolerance=0.1`.
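-
-A minimal sketch of such a call is shown below. The `parallel` and `tolerance` arguments are the ones described above; the use of `DDPMParallelScheduler` and the exact keyword names are assumptions to check against the version of 🤗 Diffusers you have installed.
-
-```py
-import torch
-from diffusers import DDPMParallelScheduler, StableDiffusionParadigmsPipeline
-
-# a scheduler that can take batched denoising steps (assumed here)
-scheduler = DDPMParallelScheduler.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
-)
-pipeline = StableDiffusionParadigmsPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", scheduler=scheduler, torch_dtype=torch.float16
-).to("cuda")
-
-# trade extra compute for wall-clock speed: denoise `parallel` steps at once,
-# accepting Picard-iteration error up to `tolerance`
-# (for the multi-GPU numbers quoted above, the UNet would also be spread across devices)
-image = pipeline(
-    "a photo of an astronaut riding a horse",
-    parallel=80,
-    tolerance=0.1,
-).images[0]
-```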
-
-🤗 Diffusers offers [distributed inference support](../training/distributed_inference) for generating multiple prompts
-in parallel on multiple GPUs. But [`StableDiffusionParadigmsPipeline`] is designed for speeding up sampling of a single prompt by using multiple GPUs.
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-## StableDiffusionParadigmsPipeline
-[[autodoc]] StableDiffusionParadigmsPipeline
- - __call__
- - all
-
-## StableDiffusionPipelineOutput
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_overview.md
deleted file mode 100644
index 78a64b6bcb960519c82bc401e293c9718a04a6a7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/custom_pipeline_overview.md
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-# Load community pipelines
-
-[[open-in-colab]]
-
-Community pipelines are [`DiffusionPipeline`] classes that differ from the original implementation specified in their paper (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
-
-There are many cool community pipelines like [Speech to Image](https://github.com/huggingface/diffusers/tree/main/examples/community#speech-to-image) or [Composable Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#composable-stable-diffusion), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).
-
-To load any community pipeline on the Hub, pass the repository id of the community pipeline to the `custom_pipeline` argument, along with the model repository you'd like to load the pipeline weights and components from. For example, the code below loads a dummy pipeline from [`hf-internal-testing/diffusers-dummy-pipeline`](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [`google/ddpm-cifar10-32`](https://huggingface.co/google/ddpm-cifar10-32):
-
-
-
-🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically!
-
-
-
-```py
-from diffusers import DiffusionPipeline
-
-pipeline = DiffusionPipeline.from_pretrained(
- "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline"
-)
-```
-
-Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community [CLIP Guided Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion) pipeline, and you can pass the CLIP model components directly to it:
-
-```py
-from diffusers import DiffusionPipeline
-from transformers import CLIPImageProcessor, CLIPModel
-
-clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
-
-feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id)
-clip_model = CLIPModel.from_pretrained(clip_model_id)
-
-pipeline = DiffusionPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- custom_pipeline="clip_guided_stable_diffusion",
- clip_model=clip_model,
- feature_extractor=feature_extractor,
-)
-```
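-
-Once loaded, a community pipeline is called like any other [`DiffusionPipeline`]. The sketch below is a hedged usage example; keyword arguments such as `clip_guidance_scale` come from the CLIP Guided Stable Diffusion example and may differ between pipeline versions, so treat them as assumptions and check the pipeline code on the Hub:
-
-```py
-pipeline = pipeline.to("cuda")
-
-# `clip_guidance_scale` controls how strongly the CLIP model steers the image toward the prompt
-image = pipeline(
-    "a photograph of an astronaut riding a horse",
-    num_inference_steps=50,
-    clip_guidance_scale=100,
-).images[0]
-image.save("astronaut.png")
-```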
-
-For more information about community pipelines, take a look at the [Community pipelines](custom_pipeline_examples) guide for how to use them, and if you're interested in adding a community pipeline, check out the [How to contribute a community pipeline](contribute_pipeline) guide!
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/reinforcement_learning/run_diffuser_locomotion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/reinforcement_learning/run_diffuser_locomotion.py
deleted file mode 100644
index adf6d1443d1c2e7caca7bdc1a26da1f2f186b8f9..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/reinforcement_learning/run_diffuser_locomotion.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import d4rl # noqa
-import gym
-import tqdm
-from diffusers.experimental import ValueGuidedRLPipeline
-
-
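-# Hedged description of the planner settings below: `n_samples` candidate trajectories of
-# length `horizon` are sampled per environment step, refined with `n_guide_steps` value-function
-# gradient steps (scaled by `scale`), and the first action of the highest-value plan is executed.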
-config = {
- "n_samples": 64,
- "horizon": 32,
- "num_inference_steps": 20,
- "n_guide_steps": 2, # can set to 0 for faster sampling, does not use value network
- "scale_grad_by_std": True,
- "scale": 0.1,
- "eta": 0.0,
- "t_grad_cutoff": 2,
- "device": "cpu",
-}
-
-
-if __name__ == "__main__":
- env_name = "hopper-medium-v2"
- env = gym.make(env_name)
-
- pipeline = ValueGuidedRLPipeline.from_pretrained(
- "bglick13/hopper-medium-v2-value-function-hor32",
- env=env,
- )
-
- env.seed(0)
- obs = env.reset()
- total_reward = 0
- total_score = 0
- T = 1000
- rollout = [obs.copy()]
- try:
- for t in tqdm.tqdm(range(T)):
- # call the policy
- denorm_actions = pipeline(obs, planning_horizon=32)
-
- # execute action in environment
- next_observation, reward, terminal, _ = env.step(denorm_actions)
- score = env.get_normalized_score(total_reward)
-
- # update return
- total_reward += reward
- total_score += score
- print(
- f"Step: {t}, Reward: {reward}, Total Reward: {total_reward}, Score: {score}, Total Score:"
- f" {total_score}"
- )
-
- # save observations for rendering
- rollout.append(next_observation.copy())
-
- obs = next_observation
- except KeyboardInterrupt:
- pass
-
- print(f"Total reward: {total_reward}")
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
deleted file mode 100644
index 8dac027934b1aff2d9e93008d8afda218ac659d6..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
+++ /dev/null
@@ -1,935 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import os
-from typing import Any, Callable, Dict, List, Optional, Tuple, Union
-
-import torch
-from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
-
-from ...image_processor import VaeImageProcessor
-from ...loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...models.attention_processor import (
- AttnProcessor2_0,
- LoRAAttnProcessor2_0,
- LoRAXFormersAttnProcessor,
- XFormersAttnProcessor,
-)
-from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- is_invisible_watermark_available,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline
-from . import StableDiffusionXLPipelineOutput
-
-
-if is_invisible_watermark_available():
- from .watermark import StableDiffusionXLWatermarker
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> from diffusers import StableDiffusionXLPipeline
-
- >>> pipe = StableDiffusionXLPipeline.from_pretrained(
- ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
- ... )
- >>> pipe = pipe.to("cuda")
-
- >>> prompt = "a photo of an astronaut riding a horse on mars"
- >>> image = pipe(prompt).images[0]
- ```
-"""
-
-
-# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
-def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
- """
- Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
- Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
- """
- std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
- std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
- # rescale the results from guidance (fixes overexposure)
- noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
- # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
- noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
- return noise_cfg
-
-
-class StableDiffusionXLPipeline(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin):
- r"""
- Pipeline for text-to-image generation using Stable Diffusion XL.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
-    In addition, the pipeline inherits the following loading methods:
- - *Textual-Inversion*: [`loaders.TextualInversionLoaderMixin.load_textual_inversion`]
- - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`]
- - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
-
- as well as the following saving methods:
- - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`]
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion XL uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
-        text_encoder_2 ([`CLIPTextModelWithProjection`]):
- Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
- specifically the
- [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
- variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- tokenizer_2 (`CLIPTokenizer`):
- Second Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- text_encoder_2: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- tokenizer_2: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: KarrasDiffusionSchedulers,
- force_zeros_for_empty_prompt: bool = True,
- add_watermarker: Optional[bool] = None,
- ):
- super().__init__()
-
- self.register_modules(
- vae=vae,
- text_encoder=text_encoder,
- text_encoder_2=text_encoder_2,
- tokenizer=tokenizer,
- tokenizer_2=tokenizer_2,
- unet=unet,
- scheduler=scheduler,
- )
- self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
- self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
- self.default_sample_size = self.unet.config.sample_size
-
- add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
-
- if add_watermarker:
- self.watermark = StableDiffusionXLWatermarker()
- else:
- self.watermark = None
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
- def enable_vae_slicing(self):
- r"""
- Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
- compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
- """
- self.vae.enable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
- def disable_vae_slicing(self):
- r"""
- Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_slicing()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
- def enable_vae_tiling(self):
- r"""
- Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
-        compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for
-        processing larger images.
- """
- self.vae.enable_tiling()
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
- def disable_vae_tiling(self):
- r"""
- Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
- computing decoding in one step.
- """
- self.vae.disable_tiling()
-
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
-        method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- model_sequence = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
- model_sequence.extend([self.unet, self.vae])
-
- hook = None
- for cpu_offloaded_model in model_sequence:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- def encode_prompt(
- self,
- prompt: str,
- prompt_2: Optional[str] = None,
- device: Optional[torch.device] = None,
- num_images_per_prompt: int = 1,
- do_classifier_free_guidance: bool = True,
- negative_prompt: Optional[str] = None,
- negative_prompt_2: Optional[str] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- lora_scale: Optional[float] = None,
- ):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- device: (`torch.device`):
- torch device
- num_images_per_prompt (`int`):
- number of images that should be generated per prompt
- do_classifier_free_guidance (`bool`):
- whether to use classifier free guidance or not
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- lora_scale (`float`, *optional*):
- A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- """
- device = device or self._execution_device
-
- # set lora scale so that monkey patched LoRA
- # function of text encoder can correctly access it
- if lora_scale is not None and isinstance(self, LoraLoaderMixin):
- self._lora_scale = lora_scale
-
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- # Define tokenizers and text encoders
- tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
- text_encoders = (
- [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
- )
-
- if prompt_embeds is None:
- prompt_2 = prompt_2 or prompt
-            # textual inversion: process multi-vector tokens if necessary
- prompt_embeds_list = []
- prompts = [prompt, prompt_2]
- for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- prompt = self.maybe_convert_prompt(prompt, tokenizer)
-
- text_inputs = tokenizer(
- prompt,
- padding="max_length",
- max_length=tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- text_input_ids = text_inputs.input_ids
-                untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
- if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
- text_input_ids, untruncated_ids
- ):
- removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
- logger.warning(
- "The following part of your input was truncated because CLIP can only handle sequences up to"
- f" {tokenizer.model_max_length} tokens: {removed_text}"
- )
-
- prompt_embeds = text_encoder(
- text_input_ids.to(device),
- output_hidden_states=True,
- )
-
-                # We are always only interested in the pooled output of the final text encoder
- pooled_prompt_embeds = prompt_embeds[0]
- prompt_embeds = prompt_embeds.hidden_states[-2]
-
- prompt_embeds_list.append(prompt_embeds)
-
- prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
-
- # get unconditional embeddings for classifier free guidance
- zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
- if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
- negative_prompt_embeds = torch.zeros_like(prompt_embeds)
- negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
- elif do_classifier_free_guidance and negative_prompt_embeds is None:
- negative_prompt = negative_prompt or ""
- negative_prompt_2 = negative_prompt_2 or negative_prompt
-
- uncond_tokens: List[str]
- if prompt is not None and type(prompt) is not type(negative_prompt):
- raise TypeError(
-                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
- f" {type(prompt)}."
- )
- elif isinstance(negative_prompt, str):
- uncond_tokens = [negative_prompt, negative_prompt_2]
- elif batch_size != len(negative_prompt):
- raise ValueError(
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
- " the batch size of `prompt`."
- )
- else:
- uncond_tokens = [negative_prompt, negative_prompt_2]
-
- negative_prompt_embeds_list = []
- for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
- if isinstance(self, TextualInversionLoaderMixin):
- negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
-
- max_length = prompt_embeds.shape[1]
- uncond_input = tokenizer(
- negative_prompt,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
-
- negative_prompt_embeds = text_encoder(
- uncond_input.input_ids.to(device),
- output_hidden_states=True,
- )
-                # We are always only interested in the pooled output of the final text encoder
- negative_pooled_prompt_embeds = negative_prompt_embeds[0]
- negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
-
- negative_prompt_embeds_list.append(negative_prompt_embeds)
-
- negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
-
- prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- bs_embed, seq_len, _ = prompt_embeds.shape
- # duplicate text embeddings for each generation per prompt, using mps friendly method
- prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
- prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
- if do_classifier_free_guidance:
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
- pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
- if do_classifier_free_guidance:
- negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
- bs_embed * num_images_per_prompt, -1
- )
-
- return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
- def prepare_extra_step_kwargs(self, generator, eta):
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
-
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- # check if the scheduler accepts generator
- accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
- if accepts_generator:
- extra_step_kwargs["generator"] = generator
- return extra_step_kwargs
-
- def check_inputs(
- self,
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt=None,
- negative_prompt_2=None,
- prompt_embeds=None,
- negative_prompt_embeds=None,
- pooled_prompt_embeds=None,
- negative_pooled_prompt_embeds=None,
- ):
- if height % 8 != 0 or width % 8 != 0:
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
- if (callback_steps is None) or (
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
- ):
- raise ValueError(
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
- f" {type(callback_steps)}."
- )
-
- if prompt is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt_2 is not None and prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
- " only forward one of the two."
- )
- elif prompt is None and prompt_embeds is None:
- raise ValueError(
- "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
- )
- elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
- elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
- raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
-
- if negative_prompt is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
- elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
- raise ValueError(
- f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
- f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
- )
-
- if prompt_embeds is not None and negative_prompt_embeds is not None:
- if prompt_embeds.shape != negative_prompt_embeds.shape:
- raise ValueError(
- "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
- f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
- f" {negative_prompt_embeds.shape}."
- )
-
- if prompt_embeds is not None and pooled_prompt_embeds is None:
- raise ValueError(
- "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
- )
-
- if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
- raise ValueError(
- "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
- )
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
- def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
- shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- latents = latents.to(device)
-
- # scale the initial noise by the standard deviation required by the scheduler
- latents = latents * self.scheduler.init_noise_sigma
- return latents
-
- def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype):
- add_time_ids = list(original_size + crops_coords_top_left + target_size)
-
- passed_add_embed_dim = (
- self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
- )
- expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
-
- if expected_add_embed_dim != passed_add_embed_dim:
- raise ValueError(
- f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
- )
-
- add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
- return add_time_ids
-
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
- def upcast_vae(self):
- dtype = self.vae.dtype
- self.vae.to(dtype=torch.float32)
- use_torch_2_0_or_xformers = isinstance(
- self.vae.decoder.mid_block.attentions[0].processor,
- (
- AttnProcessor2_0,
- XFormersAttnProcessor,
- LoRAXFormersAttnProcessor,
- LoRAAttnProcessor2_0,
- ),
- )
- # if xformers or torch_2_0 is used attention block does not need
- # to be in float32 which can save lots of memory
- if use_torch_2_0_or_xformers:
- self.vae.post_quant_conv.to(dtype)
- self.vae.decoder.conv_in.to(dtype)
- self.vae.decoder.mid_block.to(dtype)
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- prompt_2: Optional[Union[str, List[str]]] = None,
- height: Optional[int] = None,
- width: Optional[int] = None,
- num_inference_steps: int = 50,
- denoising_end: Optional[float] = None,
- guidance_scale: float = 5.0,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- negative_prompt_2: Optional[Union[str, List[str]]] = None,
- num_images_per_prompt: Optional[int] = 1,
- eta: float = 0.0,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- latents: Optional[torch.FloatTensor] = None,
- prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_prompt_embeds: Optional[torch.FloatTensor] = None,
- pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- cross_attention_kwargs: Optional[Dict[str, Any]] = None,
- guidance_rescale: float = 0.0,
- original_size: Optional[Tuple[int, int]] = None,
- crops_coords_top_left: Tuple[int, int] = (0, 0),
- target_size: Optional[Tuple[int, int]] = None,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
-                The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
-                instead.
- prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
- used in both text-encoders
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- denoising_end (`float`, *optional*):
- When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
- completed before it is intentionally prematurely terminated. As a result, the returned sample will
- still retain a substantial amount of noise as determined by the discrete timesteps selected by the
- scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
- "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
- Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
-            guidance_scale (`float`, *optional*, defaults to 5.0):
-                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2. of [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
-                text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
- `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
- less than `1`).
- negative_prompt_2 (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
- `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- eta (`float`, *optional*, defaults to 0.0):
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
- [`schedulers.DDIMScheduler`], will be ignored for others.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- latents (`torch.FloatTensor`, *optional*):
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
-                tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
- provided, text embeddings will be generated from `prompt` input argument.
- negative_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
- argument.
- pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
- If not provided, pooled text embeddings will be generated from `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
- Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
- weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
- input argument.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
- of a plain tuple.
- callback (`Callable`, *optional*):
- A function that will be called every `callback_steps` steps during inference. The function will be
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function will be called. If not specified, the callback will be
- called at every step.
- cross_attention_kwargs (`dict`, *optional*):
- A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
- `self.processor` in
- [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
-            guidance_rescale (`float`, *optional*, defaults to 0.0):
-                Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
-                Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16. of
-                [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
-                The guidance rescale factor should fix overexposure when using zero terminal SNR.
- original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
-                If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled.
-                `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
- explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
- `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
- `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
- `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
- [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
- target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
-                For most cases, `target_size` should be set to the desired height and width of the generated image. If
-                not specified, it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
- section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
-
- Examples:
-
- Returns:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] or `tuple`:
- [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] if `return_dict` is True, otherwise a
- `tuple`. When returning a tuple, the first element is a list with the generated images.
- """
- # 0. Default height and width to unet
- height = height or self.default_sample_size * self.vae_scale_factor
- width = width or self.default_sample_size * self.vae_scale_factor
-
- original_size = original_size or (height, width)
- target_size = target_size or (height, width)
-
- # 1. Check inputs. Raise error if not correct
- self.check_inputs(
- prompt,
- prompt_2,
- height,
- width,
- callback_steps,
- negative_prompt,
- negative_prompt_2,
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- )
-
- # 2. Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- batch_size = prompt_embeds.shape[0]
-
- device = self._execution_device
-
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
- # corresponds to doing no classifier free guidance.
- do_classifier_free_guidance = guidance_scale > 1.0
-
- # 3. Encode input prompt
- text_encoder_lora_scale = (
- cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
- )
- (
- prompt_embeds,
- negative_prompt_embeds,
- pooled_prompt_embeds,
- negative_pooled_prompt_embeds,
- ) = self.encode_prompt(
- prompt=prompt,
- prompt_2=prompt_2,
- device=device,
- num_images_per_prompt=num_images_per_prompt,
- do_classifier_free_guidance=do_classifier_free_guidance,
- negative_prompt=negative_prompt,
- negative_prompt_2=negative_prompt_2,
- prompt_embeds=prompt_embeds,
- negative_prompt_embeds=negative_prompt_embeds,
- pooled_prompt_embeds=pooled_prompt_embeds,
- negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
- lora_scale=text_encoder_lora_scale,
- )
-
- # 4. Prepare timesteps
- self.scheduler.set_timesteps(num_inference_steps, device=device)
-
- timesteps = self.scheduler.timesteps
-
- # 5. Prepare latent variables
- num_channels_latents = self.unet.config.in_channels
- latents = self.prepare_latents(
- batch_size * num_images_per_prompt,
- num_channels_latents,
- height,
- width,
- prompt_embeds.dtype,
- device,
- generator,
- latents,
- )
-
- # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
- # 7. Prepare added time ids & embeddings
- add_text_embeds = pooled_prompt_embeds
- add_time_ids = self._get_add_time_ids(
- original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
- )
-
- if do_classifier_free_guidance:
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
- add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
- add_time_ids = torch.cat([add_time_ids, add_time_ids], dim=0)
-
- prompt_embeds = prompt_embeds.to(device)
- add_text_embeds = add_text_embeds.to(device)
- add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
-
- # 8. Denoising loop
- num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
-
- # 7.1 Apply denoising_end
- if denoising_end is not None and type(denoising_end) == float and denoising_end > 0 and denoising_end < 1:
- discrete_timestep_cutoff = int(
- round(
- self.scheduler.config.num_train_timesteps
- - (denoising_end * self.scheduler.config.num_train_timesteps)
- )
- )
- num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
- timesteps = timesteps[:num_inference_steps]
-
- with self.progress_bar(total=num_inference_steps) as progress_bar:
- for i, t in enumerate(timesteps):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
- # predict the noise residual
- added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
- noise_pred = self.unet(
- latent_model_input,
- t,
- encoder_hidden_states=prompt_embeds,
- cross_attention_kwargs=cross_attention_kwargs,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- # perform guidance
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- if do_classifier_free_guidance and guidance_rescale > 0.0:
- # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
- noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
-
- # compute the previous noisy sample x_t -> x_t-1
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
-
- # call the callback, if provided
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
- progress_bar.update()
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # make sure the VAE is in float32 mode, as it overflows in float16
- if self.vae.dtype == torch.float16 and self.vae.config.force_upcast:
- self.upcast_vae()
- latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
-
-        if output_type != "latent":
- image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
- else:
- image = latents
- return StableDiffusionXLPipelineOutput(images=image)
-
- # apply watermark if available
- if self.watermark is not None:
- image = self.watermark.apply_watermark(image)
-
- image = self.image_processor.postprocess(image, output_type=output_type)
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if not return_dict:
- return (image,)
-
- return StableDiffusionXLPipelineOutput(images=image)
-
-    # Override to properly handle the loading and unloading of the additional text encoder.
- def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
- # We could have accessed the unet config from `lora_state_dict()` too. We pass
- # it here explicitly to be able to tell that it's coming from an SDXL
- # pipeline.
- state_dict, network_alphas = self.lora_state_dict(
- pretrained_model_name_or_path_or_dict,
- unet_config=self.unet.config,
- **kwargs,
- )
- self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
-
- text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
- if len(text_encoder_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder,
- prefix="text_encoder",
- lora_scale=self.lora_scale,
- )
-
- text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k}
- if len(text_encoder_2_state_dict) > 0:
- self.load_lora_into_text_encoder(
- text_encoder_2_state_dict,
- network_alphas=network_alphas,
- text_encoder=self.text_encoder_2,
- prefix="text_encoder_2",
- lora_scale=self.lora_scale,
- )
-
- @classmethod
- def save_lora_weights(
- self,
- save_directory: Union[str, os.PathLike],
- unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
- is_main_process: bool = True,
- weight_name: str = None,
- save_function: Callable = None,
- safe_serialization: bool = False,
- ):
- state_dict = {}
-
- def pack_weights(layers, prefix):
- layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
- layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()}
- return layers_state_dict
-
- state_dict.update(pack_weights(unet_lora_layers, "unet"))
-
- if text_encoder_lora_layers and text_encoder_2_lora_layers:
- state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
- state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
-
- self.write_lora_layers(
- state_dict=state_dict,
- save_directory=save_directory,
- is_main_process=is_main_process,
- weight_name=weight_name,
- save_function=save_function,
- safe_serialization=safe_serialization,
- )
-
- def _remove_text_encoder_monkey_patch(self):
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
- self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_dependencies.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_dependencies.py
deleted file mode 100644
index 3436cf92d89612a047e4ff536fbe61406f101846..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/others/test_dependencies.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-import unittest
-
-
-class DependencyTester(unittest.TestCase):
- def test_diffusers_import(self):
- try:
- import diffusers # noqa: F401
- except ImportError:
- assert False
-
- def test_backend_registration(self):
- import diffusers
- from diffusers.dependency_versions_table import deps
-
- all_classes = inspect.getmembers(diffusers, inspect.isclass)
-
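-        # dummy placeholder classes record the extras they require in `_backends`; each backend
-        # must appear in the dependency table, with module-style names mapped to dashed pip names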
- for cls_name, cls_module in all_classes:
- if "dummy_" in cls_module.__module__:
- for backend in cls_module._backends:
- if backend == "k_diffusion":
- backend = "k-diffusion"
- elif backend == "invisible_watermark":
- backend = "invisible-watermark"
- assert backend in deps, f"{backend} is not in the deps table!"
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index 1fbe6ce9f8a91151f2dfb656e90c9586b6dd35e3..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './cascade_rcnn_r50_fpn_1x_coco.py'
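-# Cascade R-CNN with a ResNeXt-101 32x4d backbone; everything else is inherited from the R-50 base config.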
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py
deleted file mode 100644
index 709f93cba3e3bca6ce0635457ab1823b04123bf8..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x512_20k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './danet_r50-d8_512x512_20k_voc12aug.py'
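-# DANet with a ResNet-101 backbone; the rest is inherited from the R-50 VOC12-Aug base config.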
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/shared.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/shared.py
deleted file mode 100644
index 427d92306514dafb1df9d041f77de4d3ceac70e9..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/shared.py
+++ /dev/null
@@ -1,275 +0,0 @@
-import argparse
-import sys
-from collections import OrderedDict
-from pathlib import Path
-
-import yaml
-
-from modules.logging_colors import logger
-
-# Model variables
-model = None
-tokenizer = None
-model_name = "None"
-is_seq2seq = False
-model_dirty_from_training = False
-lora_names = []
-
-# Generation variables
-stop_everything = False
-generation_lock = None
-processing_message = '*Is typing...*'
-
-# UI variables
-gradio = {}
-persistent_interface_state = {}
-need_restart = False
-
-# UI defaults
-settings = {
- 'dark_theme': True,
- 'show_controls': True,
- 'start_with': '',
- 'mode': 'chat',
- 'chat_style': 'cai-chat',
- 'prompt-default': 'QA',
- 'prompt-notebook': 'QA',
- 'preset': 'simple-1',
- 'max_new_tokens': 200,
- 'max_new_tokens_min': 1,
- 'max_new_tokens_max': 4096,
- 'seed': -1,
- 'negative_prompt': '',
- 'truncation_length': 2048,
- 'truncation_length_min': 0,
- 'truncation_length_max': 32768,
- 'custom_stopping_strings': '',
- 'auto_max_new_tokens': False,
- 'max_tokens_second': 0,
- 'ban_eos_token': False,
- 'custom_token_bans': '',
- 'add_bos_token': True,
- 'skip_special_tokens': True,
- 'stream': True,
- 'name1': 'You',
- 'character': 'Assistant',
- 'instruction_template': 'Alpaca',
- 'chat-instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
- 'autoload_model': False,
- 'default_extensions': ['gallery'],
-}
-
-
-def str2bool(v):
- if isinstance(v, bool):
- return v
- if v.lower() in ('yes', 'true', 't', 'y', '1'):
- return True
- elif v.lower() in ('no', 'false', 'f', 'n', '0'):
- return False
- else:
- raise argparse.ArgumentTypeError('Boolean value expected.')
-
-
-parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=54))
-
-# Basic settings
-parser.add_argument('--notebook', action='store_true', help='DEPRECATED')
-parser.add_argument('--chat', action='store_true', help='DEPRECATED')
-parser.add_argument('--multi-user', action='store_true', help='Multi-user mode. Chat histories are not saved or automatically loaded. WARNING: this is highly experimental.')
-parser.add_argument('--character', type=str, help='The name of the character to load in chat mode by default.')
-parser.add_argument('--model', type=str, help='Name of the model to load by default.')
-parser.add_argument('--lora', type=str, nargs="+", help='The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces.')
-parser.add_argument("--model-dir", type=str, default='models/', help="Path to directory with all the models")
-parser.add_argument("--lora-dir", type=str, default='loras/', help="Path to directory with all the loras")
-parser.add_argument('--model-menu', action='store_true', help='Show a model menu in the terminal when the web UI is first launched.')
-parser.add_argument('--no-stream', action='store_true', help='DEPRECATED')
-parser.add_argument('--settings', type=str, help='Load the default interface settings from this yaml file. See settings-template.yaml for an example. If you create a file called settings.yaml, this file will be loaded by default without the need to use the --settings flag.')
-parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. If you want to load more than one extension, write the names separated by spaces.')
-parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.')
-parser.add_argument('--chat-buttons', action='store_true', help='Show buttons on chat tab instead of hover menu.')
-
-# Model loader
-parser.add_argument('--loader', type=str, help='Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv')
-
-# Accelerate/transformers
-parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text. Warning: Training on CPU is extremely slow.')
-parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.')
-parser.add_argument('--gpu-memory', type=str, nargs="+", help='Maximum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs. You can also set values in MiB like --gpu-memory 3500MiB.')
-parser.add_argument('--cpu-memory', type=str, help='Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.')
-parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.')
-parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".')
-parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision (using bitsandbytes).')
-parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.')
-parser.add_argument('--no-cache', action='store_true', help='Set use_cache to False while generating text. This reduces the VRAM usage a bit at a performance cost.')
-parser.add_argument('--xformers', action='store_true', help="Use xformer's memory efficient attention. This should increase your tokens/s.")
-parser.add_argument('--sdp-attention', action='store_true', help="Use torch 2.0's sdp attention.")
-parser.add_argument('--trust-remote-code', action='store_true', help="Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon.")
-parser.add_argument('--use_fast', action='store_true', help="Set use_fast=True while loading a tokenizer.")
-
-# Accelerate 4-bit
-parser.add_argument('--load-in-4bit', action='store_true', help='Load the model with 4-bit precision (using bitsandbytes).')
-parser.add_argument('--compute_dtype', type=str, default="float16", help="compute dtype for 4-bit. Valid options: bfloat16, float16, float32.")
-parser.add_argument('--quant_type', type=str, default="nf4", help='quant_type for 4-bit. Valid options: nf4, fp4.')
-parser.add_argument('--use_double_quant', action='store_true', help='use_double_quant for 4-bit.')
-
-# llama.cpp
-parser.add_argument('--threads', type=int, default=0, help='Number of threads to use.')
-parser.add_argument('--threads-batch', type=int, default=0, help='Number of threads to use for batches/prompt processing.')
-parser.add_argument('--n_batch', type=int, default=512, help='Maximum number of prompt tokens to batch together when calling llama_eval.')
-parser.add_argument('--no-mmap', action='store_true', help='Prevent mmap from being used.')
-parser.add_argument('--mlock', action='store_true', help='Force the system to keep the model in RAM.')
-parser.add_argument('--mul_mat_q', action='store_true', help='Activate new mulmat kernels.')
-parser.add_argument('--cache-capacity', type=str, help='Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed.')
-parser.add_argument('--n-gpu-layers', type=int, default=0, help='Number of layers to offload to the GPU.')
-parser.add_argument('--tensor_split', type=str, default=None, help="Split the model across multiple GPUs, comma-separated list of proportions, e.g. 18,17")
-parser.add_argument('--n_ctx', type=int, default=2048, help='Size of the prompt context.')
-parser.add_argument('--llama_cpp_seed', type=int, default=0, help='Seed for llama-cpp models. Default 0 (random)')
-parser.add_argument('--numa', action='store_true', help='Activate NUMA task allocation for llama.cpp')
-
-# GPTQ
-parser.add_argument('--wbits', type=int, default=0, help='Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported.')
-parser.add_argument('--model_type', type=str, help='Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported.')
-parser.add_argument('--groupsize', type=int, default=-1, help='Group size.')
-parser.add_argument('--pre_layer', type=int, nargs="+", help='The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-GPU, write the numbers separated by spaces, e.g. --pre_layer 30 60.')
-parser.add_argument('--checkpoint', type=str, help='The path to the quantized checkpoint file. If not specified, it will be automatically detected.')
-parser.add_argument('--monkey-patch', action='store_true', help='Apply the monkey patch for using LoRAs with quantized models.')
-
-# AutoGPTQ
-parser.add_argument('--triton', action='store_true', help='Use triton.')
-parser.add_argument('--no_inject_fused_attention', action='store_true', help='Do not use fused attention (lowers VRAM requirements).')
-parser.add_argument('--no_inject_fused_mlp', action='store_true', help='Triton mode only: Do not use fused MLP (lowers VRAM requirements).')
-parser.add_argument('--no_use_cuda_fp16', action='store_true', help='Set use_cuda_fp16=False while loading the model. This can make models faster on some systems.')
-parser.add_argument('--desc_act', action='store_true', help='For models that don\'t have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig.')
-parser.add_argument('--disable_exllama', action='store_true', help='Disable ExLlama kernel, which can improve inference speed on some systems.')
-
-# ExLlama
-parser.add_argument('--gpu-split', type=str, help="Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. 20,7,7")
-parser.add_argument('--max_seq_len', type=int, default=2048, help="Maximum sequence length.")
-parser.add_argument('--cfg-cache', action='store_true', help="ExLlama_HF: Create an additional cache for CFG negative prompts. Necessary to use CFG with that loader, but not necessary for CFG with base ExLlama.")
-
-# DeepSpeed
-parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.')
-parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.')
-parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.')
-
-# RWKV
-parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".')
-parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.')
-
-# RoPE
-parser.add_argument('--alpha_value', type=float, default=1, help="Positional embeddings alpha factor for NTK RoPE scaling. Use either this or compress_pos_emb, not both.")
-parser.add_argument('--rope_freq_base', type=int, default=0, help="If greater than 0, will be used instead of alpha_value. Those two are related by rope_freq_base = 10000 * alpha_value ^ (64 / 63).")
-parser.add_argument('--compress_pos_emb', type=int, default=1, help="Positional embeddings compression factor. Should be set to (context length) / (model\'s original context length). Equal to 1/rope_freq_scale.")
-
-# Gradio
-parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.')
-parser.add_argument('--listen-host', type=str, help='The hostname that the server will use.')
-parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.')
-parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.')
-parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.')
-parser.add_argument("--gradio-auth", type=str, help='set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None)
-parser.add_argument("--gradio-auth-path", type=str, help='Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3"', default=None)
-parser.add_argument("--ssl-keyfile", type=str, help='The path to the SSL certificate key file.', default=None)
-parser.add_argument("--ssl-certfile", type=str, help='The path to the SSL certificate cert file.', default=None)
-
-# API
-parser.add_argument('--api', action='store_true', help='Enable the API extension.')
-parser.add_argument('--api-blocking-port', type=int, default=5000, help='The listening port for the blocking API.')
-parser.add_argument('--api-streaming-port', type=int, default=5005, help='The listening port for the streaming API.')
-parser.add_argument('--public-api', action='store_true', help='Create a public URL for the API using Cloudflare.')
-parser.add_argument('--public-api-id', type=str, help='Tunnel ID for named Cloudflare Tunnel. Use together with public-api option.', default=None)
-
-# Multimodal
-parser.add_argument('--multimodal-pipeline', type=str, default=None, help='The multimodal pipeline to use. Examples: llava-7b, llava-13b.')
-
-args = parser.parse_args()
-args_defaults = parser.parse_args([])
-provided_arguments = []
-for arg in sys.argv[1:]:
- arg = arg.lstrip('-').replace('-', '_')
- if hasattr(args, arg):
- provided_arguments.append(arg)
-
-# Deprecation warnings
-for k in ['chat', 'notebook', 'no_stream']:
- if getattr(args, k):
- logger.warning(f'The --{k} flag has been deprecated and will be removed soon. Please remove that flag.')
-
-# Security warnings
-if args.trust_remote_code:
- logger.warning("trust_remote_code is enabled. This is dangerous.")
-if args.share:
- logger.warning("The gradio \"share link\" feature uses a proprietary executable to create a reverse tunnel. Use it with care.")
-if any((args.listen, args.share)) and not any((args.gradio_auth, args.gradio_auth_path)):
- logger.warning("\nYou are potentially exposing the web UI to the entire internet without any access password.\nYou can create one with the \"--gradio-auth\" flag like this:\n\n--gradio-auth username:password\n\nMake sure to replace username:password with your own.")
- if args.multi_user:
- logger.warning("\nThe multi-user mode is highly experimental and should not be shared publicly.")
-
-
-def fix_loader_name(name):
- if not name:
- return name
-
- name = name.lower()
- if name in ['llamacpp', 'llama.cpp', 'llama-cpp', 'llama cpp']:
- return 'llama.cpp'
- if name in ['llamacpp_hf', 'llama.cpp_hf', 'llama-cpp-hf', 'llamacpp-hf', 'llama.cpp-hf']:
- return 'llamacpp_HF'
- elif name in ['transformers', 'huggingface', 'hf', 'hugging_face', 'hugging face']:
- return 'Transformers'
- elif name in ['autogptq', 'auto-gptq', 'auto_gptq', 'auto gptq']:
- return 'AutoGPTQ'
- elif name in ['gptq-for-llama', 'gptqforllama', 'gptqllama', 'gptq for llama', 'gptq_for_llama']:
- return 'GPTQ-for-LLaMa'
- elif name in ['exllama', 'ex-llama', 'ex_llama', 'exlama']:
- return 'ExLlama'
- elif name in ['exllama-hf', 'exllama_hf', 'exllama hf', 'ex-llama-hf', 'ex_llama_hf']:
- return 'ExLlama_HF'
- elif name in ['exllamav2', 'exllama-v2', 'ex_llama-v2', 'exlamav2', 'exlama-v2', 'exllama2', 'exllama-2']:
- return 'ExLlamav2'
- elif name in ['exllamav2-hf', 'exllamav2_hf', 'exllama-v2-hf', 'exllama_v2_hf', 'exllama-v2_hf', 'exllama2-hf', 'exllama2_hf', 'exllama-2-hf', 'exllama_2_hf', 'exllama-2_hf']:
- return 'ExLlamav2_HF'
- elif name in ['ctransformers', 'ctranforemrs', 'ctransformer']:
- return 'ctransformers'
- elif name in ['autoawq', 'awq', 'auto-awq']:
- return 'AutoAWQ'
-
-
-def add_extension(name):
-    if args.extensions is None:
-        args.extensions = [name]
-    elif name not in args.extensions:
-        args.extensions.append(name)
-
-
-def is_chat():
- return True
-
-
-args.loader = fix_loader_name(args.loader)
-
-# Activate the API extension
-if args.api or args.public_api:
- add_extension('api')
-
-# Activate the multimodal extension
-if args.multimodal_pipeline is not None:
- add_extension('multimodal')
-
-# Load model-specific settings
-p = Path(f'{args.model_dir}/config.yaml')
-if p.exists():
-    with open(p, 'r') as f:
-        model_config = yaml.safe_load(f)
-else:
-    model_config = {}
-
-# Load custom model-specific settings
-p = Path(f'{args.model_dir}/config-user.yaml')
-if p.exists():
-    with open(p, 'r') as f:
-        user_config = yaml.safe_load(f)
-else:
-    user_config = {}
-
-model_config = OrderedDict(model_config)
-user_config = OrderedDict(user_config)
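As a side note on the RoPE flags above: the `--rope_freq_base` help text states the relation rope_freq_base = 10000 * alpha_value ^ (64 / 63). Below is a minimal standalone sketch of that arithmetic, not part of the script itself; the `alpha` values are only illustrative inputs.

```python
# Sketch of the NTK RoPE relation quoted in the --rope_freq_base help text above.
def rope_freq_base_from_alpha(alpha_value: float) -> float:
    return 10000 * alpha_value ** (64 / 63)

for alpha in (1.0, 2.0, 4.0):
    # alpha_value = 1 keeps the stock base of 10000; larger values raise the base
    print(alpha, round(rope_freq_base_from_alpha(alpha)))
```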
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/utils.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/utils.py
deleted file mode 100644
index 0f5712cb42c38a2e8563bf563efb6681383cab9b..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/utils.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .registry import MODULE_WRAPPERS
-
-
-def is_module_wrapper(module):
- """Check if a module is a module wrapper.
-
- The following 3 modules in MMCV (and their subclasses) are regarded as
- module wrappers: DataParallel, DistributedDataParallel,
-    MMDistributedDataParallel (the deprecated version). You may add your own
-    module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS.
-
- Args:
- module (nn.Module): The module to be checked.
-
- Returns:
- bool: True if the input module is a module wrapper.
- """
- module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values())
- return isinstance(module, module_wrappers)
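A quick usage sketch of `is_module_wrapper`, assuming mmcv 1.x and PyTorch are installed; in this vendored tree the import path would be `annotator.uniformer.mmcv.parallel` rather than `mmcv.parallel`.

```python
# Hedged usage sketch: DataParallel is one of the wrappers registered in MODULE_WRAPPERS,
# so wrapping a plain module should flip the result from False to True.
import torch.nn as nn
from mmcv.parallel import is_module_wrapper

model = nn.Linear(4, 2)
wrapped = nn.DataParallel(model)

print(is_module_wrapper(model))    # False: a plain nn.Module
print(is_module_wrapper(wrapped))  # True: DataParallel is a registered module wrapper
```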
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hed2image.py b/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hed2image.py
deleted file mode 100644
index 1ceff67969b7c64a0adcf0557f922c71dd4bfab7..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/gradio_hed2image.py
+++ /dev/null
@@ -1,98 +0,0 @@
-from share import *
-import config
-
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-from pytorch_lightning import seed_everything
-from annotator.util import resize_image, HWC3
-from annotator.hed import HEDdetector
-from cldm.model import create_model, load_state_dict
-from cldm.ddim_hacked import DDIMSampler
-
-
-apply_hed = HEDdetector()
-
-model = create_model('./models/cldm_v15.yaml').cpu()
-model.load_state_dict(load_state_dict('./models/control_sd15_hed.pth', location='cuda'))
-model = model.cuda()
-ddim_sampler = DDIMSampler(model)
-
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
- with torch.no_grad():
- input_image = HWC3(input_image)
- detected_map = apply_hed(resize_image(input_image, detect_resolution))
- detected_map = HWC3(detected_map)
- img = resize_image(input_image, image_resolution)
- H, W, C = img.shape
-
- detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
-
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
- control = torch.stack([control for _ in range(num_samples)], dim=0)
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
-
- if seed == -1:
- seed = random.randint(0, 65535)
- seed_everything(seed)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
- un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
- shape = (4, H // 8, W // 8)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=True)
-
- model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13) # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
- samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
- shape, cond, verbose=False, eta=eta,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=un_cond)
-
- if config.save_memory:
- model.low_vram_shift(is_diffusing=False)
-
- x_samples = model.decode_first_stage(samples)
- x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
- results = [x_samples[i] for i in range(num_samples)]
- return [detected_map] + results
-
-
-block = gr.Blocks().queue()
-with block:
- with gr.Row():
- gr.Markdown("## Control Stable Diffusion with HED Maps")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- prompt = gr.Textbox(label="Prompt")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
- image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
- strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
- guess_mode = gr.Checkbox(label='Guess Mode', value=False)
- detect_resolution = gr.Slider(label="HED Resolution", minimum=128, maximum=1024, value=512, step=1)
- ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta (DDIM)", value=0.0)
- a_prompt = gr.Textbox(label="Added Prompt", value='best quality, extremely detailed')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
- with gr.Column():
- result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
- run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
-
-
-block.launch(server_name='0.0.0.0')
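For reference, the "Magic number" line in `process` above assigns 13 per-block control scales that decay geometrically in guess mode. The standalone sketch below reproduces only that arithmetic (no models or GPU needed), using `strength = 1.0` as an illustrative value.

```python
# Reproduces only the control-scale arithmetic from process() above.
strength = 1.0
scales = [strength * (0.825 ** float(12 - i)) for i in range(13)]
for i, s in enumerate(scales):
    print(f"block {i:2d}: {s:.4f}")  # ~0.099 at block 0, rising to 1.0 at block 12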
diff --git a/spaces/Ariharasudhan/YoloV5/utils/aws/__init__.py b/spaces/Ariharasudhan/YoloV5/utils/aws/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/intranges.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/intranges.py
deleted file mode 100644
index 6a43b0475347cb50d0d65ada1000a82eeca9e882..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/intranges.py
+++ /dev/null
@@ -1,54 +0,0 @@
-"""
-Given a list of integers, made up of (hopefully) a small number of long runs
-of consecutive integers, compute a representation of the form
-((start1, end1), (start2, end2) ...). Then answer the question "was x present
-in the original list?" in time O(log(# runs)).
-"""
-
-import bisect
-from typing import List, Tuple
-
-def intranges_from_list(list_: List[int]) -> Tuple[int, ...]:
- """Represent a list of integers as a sequence of ranges:
- ((start_0, end_0), (start_1, end_1), ...), such that the original
- integers are exactly those x such that start_i <= x < end_i for some i.
-
- Ranges are encoded as single integers (start << 32 | end), not as tuples.
- """
-
- sorted_list = sorted(list_)
- ranges = []
- last_write = -1
- for i in range(len(sorted_list)):
- if i+1 < len(sorted_list):
- if sorted_list[i] == sorted_list[i+1]-1:
- continue
- current_range = sorted_list[last_write+1:i+1]
- ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
- last_write = i
-
- return tuple(ranges)
-
-def _encode_range(start: int, end: int) -> int:
- return (start << 32) | end
-
-def _decode_range(r: int) -> Tuple[int, int]:
- return (r >> 32), (r & ((1 << 32) - 1))
-
-
-def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool:
- """Determine if `int_` falls into one of the ranges in `ranges`."""
- tuple_ = _encode_range(int_, 0)
- pos = bisect.bisect_left(ranges, tuple_)
- # we could be immediately ahead of a tuple (start, end)
- # with start < int_ <= end
- if pos > 0:
- left, right = _decode_range(ranges[pos-1])
- if left <= int_ < right:
- return True
- # or we could be immediately behind a tuple (int_, end)
- if pos < len(ranges):
- left, _ = _decode_range(ranges[pos])
- if left == int_:
- return True
- return False
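A short usage sketch of the helpers above, assuming the functions are in scope (for example, pasted into the same REPL); the private `_decode_range` is called here only to make the encoding visible.

```python
sparse = [1, 2, 3, 10, 11, 12, 13, 40]
ranges = intranges_from_list(sparse)

print([_decode_range(r) for r in ranges])  # [(1, 4), (10, 14), (40, 41)] -- half-open runs
print(intranges_contain(11, ranges))       # True, 11 falls inside the run 10 <= x < 14
print(intranges_contain(20, ranges))       # False, 20 is in none of the runs
```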
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/monkey.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/monkey.py
deleted file mode 100644
index 77a7adcf8e665fb1e568a82cd076a91554ca36c7..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/monkey.py
+++ /dev/null
@@ -1,165 +0,0 @@
-"""
-Monkey patching of distutils.
-"""
-
-import sys
-import distutils.filelist
-import platform
-import types
-import functools
-from importlib import import_module
-import inspect
-
-import setuptools
-
-__all__ = []
-"""
-Everything is private. Contact the project team
-if you think you need this functionality.
-"""
-
-
-def _get_mro(cls):
- """
-    Returns the base classes for cls sorted by the MRO.
-
- Works around an issue on Jython where inspect.getmro will not return all
- base classes if multiple classes share the same name. Instead, this
- function will return a tuple containing the class itself, and the contents
- of cls.__bases__. See https://github.com/pypa/setuptools/issues/1024.
- """
- if platform.python_implementation() == "Jython":
- return (cls,) + cls.__bases__
- return inspect.getmro(cls)
-
-
-def get_unpatched(item):
- lookup = (
- get_unpatched_class if isinstance(item, type) else
- get_unpatched_function if isinstance(item, types.FunctionType) else
- lambda item: None
- )
- return lookup(item)
-
-
-def get_unpatched_class(cls):
- """Protect against re-patching the distutils if reloaded
-
- Also ensures that no other distutils extension monkeypatched the distutils
- first.
- """
- external_bases = (
- cls
- for cls in _get_mro(cls)
- if not cls.__module__.startswith('setuptools')
- )
- base = next(external_bases)
- if not base.__module__.startswith('distutils'):
- msg = "distutils has already been patched by %r" % cls
- raise AssertionError(msg)
- return base
-
-
-def patch_all():
- # we can't patch distutils.cmd, alas
- distutils.core.Command = setuptools.Command
-
- has_issue_12885 = sys.version_info <= (3, 5, 3)
-
- if has_issue_12885:
- # fix findall bug in distutils (http://bugs.python.org/issue12885)
- distutils.filelist.findall = setuptools.findall
-
- needs_warehouse = (
- (3, 4) < sys.version_info < (3, 4, 6)
- or
- (3, 5) < sys.version_info <= (3, 5, 3)
- )
-
- if needs_warehouse:
- warehouse = 'https://upload.pypi.org/legacy/'
- distutils.config.PyPIRCCommand.DEFAULT_REPOSITORY = warehouse
-
- _patch_distribution_metadata()
-
- # Install Distribution throughout the distutils
- for module in distutils.dist, distutils.core, distutils.cmd:
- module.Distribution = setuptools.dist.Distribution
-
- # Install the patched Extension
- distutils.core.Extension = setuptools.extension.Extension
- distutils.extension.Extension = setuptools.extension.Extension
- if 'distutils.command.build_ext' in sys.modules:
- sys.modules['distutils.command.build_ext'].Extension = (
- setuptools.extension.Extension
- )
-
- patch_for_msvc_specialized_compiler()
-
-
-def _patch_distribution_metadata():
- """Patch write_pkg_file and read_pkg_file for higher metadata standards"""
- for attr in ('write_pkg_file', 'read_pkg_file', 'get_metadata_version'):
- new_val = getattr(setuptools.dist, attr)
- setattr(distutils.dist.DistributionMetadata, attr, new_val)
-
-
-def patch_func(replacement, target_mod, func_name):
- """
- Patch func_name in target_mod with replacement
-
- Important - original must be resolved by name to avoid
- patching an already patched function.
- """
- original = getattr(target_mod, func_name)
-
- # set the 'unpatched' attribute on the replacement to
- # point to the original.
- vars(replacement).setdefault('unpatched', original)
-
- # replace the function in the original module
- setattr(target_mod, func_name, replacement)
-
-
-def get_unpatched_function(candidate):
- return getattr(candidate, 'unpatched')
-
-
-def patch_for_msvc_specialized_compiler():
- """
- Patch functions in distutils to use standalone Microsoft Visual C++
- compilers.
- """
- # import late to avoid circular imports on Python < 3.5
- msvc = import_module('setuptools.msvc')
-
- if platform.system() != 'Windows':
- # Compilers only available on Microsoft Windows
- return
-
- def patch_params(mod_name, func_name):
- """
- Prepare the parameters for patch_func to patch indicated function.
- """
- repl_prefix = 'msvc14_'
- repl_name = repl_prefix + func_name.lstrip('_')
- repl = getattr(msvc, repl_name)
- mod = import_module(mod_name)
- if not hasattr(mod, func_name):
- raise ImportError(func_name)
- return repl, mod, func_name
-
- # Python 3.5+
- msvc14 = functools.partial(patch_params, 'distutils._msvccompiler')
-
- try:
- # Patch distutils._msvccompiler._get_vc_env
- patch_func(*msvc14('_get_vc_env'))
- except ImportError:
- pass
-
- try:
- # Patch distutils._msvccompiler.gen_lib_options for Numpy
- patch_func(*msvc14('gen_lib_options'))
- except ImportError:
- pass
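The core pattern in `patch_func`/`get_unpatched_function` above (resolve the original by name, stash it on the replacement, then swap the attribute) can be illustrated on a throwaway module; the `toy` module and `greet` names below are made up for the sketch and have nothing to do with distutils.

```python
import types

toy = types.ModuleType("toy")
toy.greet = lambda: "original"

def patched_greet():
    return "patched"

# Resolve the original by name and remember it, the same way patch_func() does.
original = getattr(toy, "greet")
vars(patched_greet).setdefault("unpatched", original)
setattr(toy, "greet", patched_greet)

print(toy.greet())            # "patched"
print(toy.greet.unpatched())  # "original" -- recovered the way get_unpatched_function() would
```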
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py
deleted file mode 100644
index 597d791afab1bcc0013203a66c7fba225065eebe..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py
+++ /dev/null
@@ -1,116 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import functools
-import json
-import multiprocessing as mp
-import numpy as np
-import os
-import time
-from fvcore.common.download import download
-from panopticapi.utils import rgb2id
-from PIL import Image
-
-from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES
-
-
-def _process_panoptic_to_semantic(input_panoptic, output_semantic, segments, id_map):
- panoptic = np.asarray(Image.open(input_panoptic), dtype=np.uint32)
- panoptic = rgb2id(panoptic)
- output = np.zeros_like(panoptic, dtype=np.uint8) + 255
- for seg in segments:
- cat_id = seg["category_id"]
- new_cat_id = id_map[cat_id]
- output[panoptic == seg["id"]] = new_cat_id
- Image.fromarray(output).save(output_semantic)
-
-
-def separate_coco_semantic_from_panoptic(panoptic_json, panoptic_root, sem_seg_root, categories):
- """
- Create semantic segmentation annotations from panoptic segmentation
- annotations, to be used by PanopticFPN.
-
- It maps all thing categories to class 0, and maps all unlabeled pixels to class 255.
- It maps all stuff categories to contiguous ids starting from 1.
-
- Args:
- panoptic_json (str): path to the panoptic json file, in COCO's format.
- panoptic_root (str): a directory with panoptic annotation files, in COCO's format.
- sem_seg_root (str): a directory to output semantic annotation files
- categories (list[dict]): category metadata. Each dict needs to have:
- "id": corresponds to the "category_id" in the json annotations
- "isthing": 0 or 1
- """
- os.makedirs(sem_seg_root, exist_ok=True)
-
- stuff_ids = [k["id"] for k in categories if k["isthing"] == 0]
- thing_ids = [k["id"] for k in categories if k["isthing"] == 1]
- id_map = {} # map from category id to id in the output semantic annotation
- assert len(stuff_ids) <= 254
- for i, stuff_id in enumerate(stuff_ids):
- id_map[stuff_id] = i + 1
- for thing_id in thing_ids:
- id_map[thing_id] = 0
- id_map[0] = 255
-
- with open(panoptic_json) as f:
- obj = json.load(f)
-
- pool = mp.Pool(processes=max(mp.cpu_count() // 2, 4))
-
- def iter_annotations():
- for anno in obj["annotations"]:
- file_name = anno["file_name"]
- segments = anno["segments_info"]
- input = os.path.join(panoptic_root, file_name)
- output = os.path.join(sem_seg_root, file_name)
- yield input, output, segments
-
- print("Start writing to {} ...".format(sem_seg_root))
- start = time.time()
- pool.starmap(
- functools.partial(_process_panoptic_to_semantic, id_map=id_map),
- iter_annotations(),
- chunksize=100,
- )
- print("Finished. time: {:.2f}s".format(time.time() - start))
-
-
-if __name__ == "__main__":
- dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "coco")
- for s in ["val2017", "train2017"]:
- separate_coco_semantic_from_panoptic(
- os.path.join(dataset_dir, "annotations/panoptic_{}.json".format(s)),
- os.path.join(dataset_dir, "panoptic_{}".format(s)),
- os.path.join(dataset_dir, "panoptic_stuff_{}".format(s)),
- COCO_CATEGORIES,
- )
-
- # Prepare val2017_100 for quick testing:
-
- dest_dir = os.path.join(dataset_dir, "annotations/")
- URL_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/"
- download(URL_PREFIX + "annotations/coco/panoptic_val2017_100.json", dest_dir)
- with open(os.path.join(dest_dir, "panoptic_val2017_100.json")) as f:
- obj = json.load(f)
-
- def link_val100(dir_full, dir_100):
- print("Creating " + dir_100 + " ...")
- os.makedirs(dir_100, exist_ok=True)
- for img in obj["images"]:
- basename = os.path.splitext(img["file_name"])[0]
- src = os.path.join(dir_full, basename + ".png")
- dst = os.path.join(dir_100, basename + ".png")
- src = os.path.relpath(src, start=dir_100)
- os.symlink(src, dst)
-
- link_val100(
- os.path.join(dataset_dir, "panoptic_val2017"),
- os.path.join(dataset_dir, "panoptic_val2017_100"),
- )
-
- link_val100(
- os.path.join(dataset_dir, "panoptic_stuff_val2017"),
- os.path.join(dataset_dir, "panoptic_stuff_val2017_100"),
- )
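The category-id remapping documented in `separate_coco_semantic_from_panoptic` above (thing classes to 0, stuff classes to contiguous ids starting from 1, unlabeled pixels to 255) can be checked in isolation; the categories below are made-up stand-ins for `COCO_CATEGORIES`.

```python
toy_categories = [
    {"id": 1, "isthing": 1},   # a "thing" class
    {"id": 2, "isthing": 1},   # another "thing" class
    {"id": 92, "isthing": 0},  # a "stuff" class
    {"id": 93, "isthing": 0},  # another "stuff" class
]

stuff_ids = [k["id"] for k in toy_categories if k["isthing"] == 0]
thing_ids = [k["id"] for k in toy_categories if k["isthing"] == 1]

id_map = {stuff_id: i + 1 for i, stuff_id in enumerate(stuff_ids)}  # stuff -> 1, 2, ...
id_map.update({thing_id: 0 for thing_id in thing_ids})              # every thing -> 0
id_map[0] = 255                                                     # unlabeled -> 255

print(id_map)  # {92: 1, 93: 2, 1: 0, 2: 0, 0: 255}
```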
diff --git a/spaces/AzinZ/vitscn/preprocess.py b/spaces/AzinZ/vitscn/preprocess.py
deleted file mode 100644
index aaedbf076c30114b3ac6c27dfb42fd54ac81a71c..0000000000000000000000000000000000000000
--- a/spaces/AzinZ/vitscn/preprocess.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import argparse
-import text
-from utils import load_filepaths_and_text
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument("--out_extension", default="cleaned")
- parser.add_argument("--text_index", default=1, type=int)
- parser.add_argument("--filelists", nargs="+", default=["filelists/ljs_audio_text_val_filelist.txt", "filelists/ljs_audio_text_test_filelist.txt"])
- parser.add_argument("--text_cleaners", nargs="+", default=["english_cleaners2"])
-
- args = parser.parse_args()
-
-
- for filelist in args.filelists:
- print("START:", filelist)
- filepaths_and_text = load_filepaths_and_text(filelist)
- for i in range(len(filepaths_and_text)):
- original_text = filepaths_and_text[i][args.text_index]
- cleaned_text = text._clean_text(original_text, args.text_cleaners)
- filepaths_and_text[i][args.text_index] = cleaned_text
-
- new_filelist = filelist + "." + args.out_extension
- with open(new_filelist, "w", encoding="utf-8") as f:
- f.writelines(["|".join(x) + "\n" for x in filepaths_and_text])
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
-## Chinese BERT with Whole Word Masking
-To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
-This repository is developed based on: https://github.com/google-research/bert
-
-You may also be interested in:
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
-If you find the technical reports or resources useful, please cite the following technical reports in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
\ No newline at end of file
diff --git a/spaces/Bambicita/rvc-models/infer_pack/attentions.py b/spaces/Bambicita/rvc-models/infer_pack/attentions.py
deleted file mode 100644
index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000
--- a/spaces/Bambicita/rvc-models/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from infer_pack import commons
-from infer_pack import modules
-from infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
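The proximal bias used by `MultiHeadAttention` above is just `-log1p(|i - j|)` broadcast over query/key positions. A tiny standalone check (PyTorch only) mirrors `_attention_bias_proximal` for a toy length.

```python
import torch

length = 4
r = torch.arange(length, dtype=torch.float32)
diff = r.unsqueeze(0) - r.unsqueeze(1)
bias = -torch.log1p(torch.abs(diff)).unsqueeze(0).unsqueeze(0)

print(bias.shape)     # torch.Size([1, 1, 4, 4])
print(bias[0, 0, 0])  # 0 for the position itself, then -log(2), -log(3), -log(4): closer positions are penalised less
```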
diff --git a/spaces/Benson/text-generation/Examples/2.0tamil Pelcula Descargar.md b/spaces/Benson/text-generation/Examples/2.0tamil Pelcula Descargar.md
deleted file mode 100644
index d0ac06c50f35e7db74c643a1deb316d7219f3da7..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/2.0tamil Pelcula Descargar.md
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
-2.0 Tamil Movie Download: A Sci-Fi Thriller That Will Blow Your Mind
-
-If you are a fan of science fiction movies, you must have heard of 2.0, the Tamil film that has taken the world by storm. This movie is a sequel to the 2010 blockbuster Enthiran, which starred Rajinikanth as a scientist and his creation, a humanoid robot named Chitti. In 2.0, Rajinikanth reprises his roles as Dr. Vaseegaran and Chitti, who have to face a new threat from a mysterious bird-like creature that is wreaking havoc in Chennai.
-
-In this article, we will tell you everything you need to know about the 2.0 Tamil movie, including its plot, cast, crew, reviews, ratings, and how to watch it online legally. If you are looking for a 2.0 Tamil movie download link, we will also show you the best way to do it without breaking any laws or risking any viruses.
-2.0 is a science fiction action thriller that deals with the subject of mobile radiation and its impact on the environment and human health. The film shows how mobile phones mysteriously start flying out of people's hands in Chennai, causing panic and chaos in the city. Dr. Vaseegaran, a renowned scientist and robotics expert, is called in to investigate the phenomenon and find the source of the problem.
-
-He soon discovers that the culprit is a bird-like creature named Pakshirajan, who was once a human being and an ornithologist. Pakshirajan was obsessed with saving birds from extinction due to mobile radiation, but he died during a protest against a telecom company. His soul then merged with thousands of dead birds and became a powerful force that can control mobile phones and other electronic devices.
-
-
-Why is the 2.0 Tamil movie so popular?
-
-There are many reasons why the 2.0 Tamil movie has become one of the most popular films in India and abroad. Here are some of them:
-
-
-It has a star-studded cast that includes Rajinikanth, one of the most iconic and influential actors in Indian cinema, Akshay Kumar, one of the most successful and versatile actors in Bollywood, and Amy Jackson, a British model and actress who has appeared in several Tamil films.
-
-It has stunning visual effects and animation that create a realistic and immersive experience for viewers. The film uses cutting-edge technology and techniques to create scenes of mobile phones flying through the air, Pakshirajan transforming into different shapes and sizes, Chitti fighting with guns and rockets, and other spectacular sequences.
-
-It has an engaging and exciting plot that keeps the audience hooked from start to finish. The film strikes a fine balance of action, comedy, drama, romance, and social message, and explores the issues of mobile addiction, environmental degradation, animal rights, and human values.
-
-It has a catchy and melodious soundtrack that complements the mood and tone of the film. The songs are composed by A.R. Rahman, one of the most acclaimed and influential music composers in the world, and range from upbeat and energetic to soulful and romantic.
-
-
-How to watch the 2.0 Tamil movie online legally?
-
-If you are wondering how to watch the 2.0 Tamil movie online legally, you have several options to choose from. The film is available on various streaming platforms and websites that offer high-quality video and audio. Here are some of the best ways to watch the 2.0 Tamil movie online legally:
-
-
-
-Hotstar: Hotstar is another leading streaming service in India that offers a variety of content, including movies, shows, sports, news, and live events. You can watch the 2.0 Tamil movie on Hotstar with a subscription fee of Rs. 299 per month or Rs. 1499 per year. You can also download the movie and watch it offline on your device.
-
-YouTube: YouTube is the most popular and accessible video-sharing platform in the world, with millions of videos uploaded by users and creators every day. You can watch the 2.0 Tamil movie on YouTube with a rental fee of Rs. 100 or a purchase fee of Rs. 490. You can also download the movie and watch it offline on your device.
-
-
-However, you should avoid watching the 2.0 Tamil movie on illegal websites or torrents that offer pirated copies of the film. These websites are not only unethical and illegal but also unsafe and risky for your device and your data. They may contain viruses, malware, spyware, or other harmful elements that can damage your device or steal your personal information.
-
-Therefore, you should always watch the 2.0 Tamil movie online legally from the official sources mentioned above.
-
-
Resumen del gráfico
-
La misteriosa desaparición de los teléfonos móviles
-
La película comienza con una escena en la que los teléfonos móviles comienzan a volar de las manos de la gente en Chennai sin ninguna explicación o advertencia. La gente está conmocionada y asustada por este fenómeno, ya que pierden su comunicación y conectividad con los demás.
-El gobierno y la policía no tienen idea de la causa y el motivo de este incidente. Sospechan que podría ser un ataque terrorista o un delito cibernético, pero no tienen pruebas ni pistas para probarlo.
-
-
El Dr. Vaseegaran acepta ocuparse del caso y comienza su investigación con la ayuda de Nila.
-
El regreso de Chitti el robot
-
El Dr. Vaseegaran analiza las señales del teléfono móvil y las rastrea hasta una enorme criatura parecida a un pájaro que está volando sobre Chennai. Se da cuenta de que esta criatura es responsable de robar los teléfonos móviles y usarlos como sus armas.
-
También se entera de que esta criatura está formada por miles de aves muertas que han sido afectadas por la radiación móvil a lo largo de los años. La criatura tiene una voz humana y se hace llamar Pakshirajan.
-
Pakshirajan revela que una vez fue un ornitólogo que amaba las aves más que cualquier otra cosa en su vida. Estaba preocupado por la disminución de la población de aves debido a la radiación móvil, que creía que era perjudicial para su salud y supervivencia.
-
Él trató de crear conciencia sobre este tema entre el público y las autoridades, pero fue ignorado y ridiculizado por todos. Incluso organizó una protesta contra una compañía de telecomunicaciones que estaba lanzando una nueva torre móvil en su área, pero fue asesinado por sus matones.
-
Su alma luego se fusionó con los pájaros muertos que había recogido a lo largo de los años, y se convirtió en una fuerza poderosa que puede controlar los teléfonos móviles y otros dispositivos electrónicos.
-
Pakshirajan declara que está en una misión para salvar a las aves de la extinción mediante la destrucción de todos los teléfonos móviles y torres en el mundo.
-
El Dr. Vaseegaran se da cuenta de que no puede detener a Pakshirajan con armas o métodos convencionales, ya que es inmune a ellos. Decide revivir su vieja creación, Chitti, el robot que había desmantelado hace ocho años después de que se volviera pícaro y causara destrucción.
-
-
Chitti se puso celoso y obsesionado con Sana, y trató de matar al Dr. Vaseegaran y secuestrar a Sana. También hackeó la red del ejército y creó miles de copias de sí mismo, formando un ejército de robots que amenazaban con apoderarse del mundo.
-
El Dr. Vaseegaran y el ejército lograron detener a Chitti y sus clones, y el Dr. Vaseegaran desmantelaron Chitti y almacenaron sus partes en un museo.
-
Ahora, el Dr. Vaseegaran vuelve a montar a Chitti y le da una ficha azul que lo hace leal y obediente a él. También actualiza Chitti con nuevas características y habilidades, como un cuerpo magnético, una proyección holográfica y un modo de súper velocidad.
-
Chitti acepta ayudar al Dr. Vaseegaran en la lucha contra Pakshirajan, y expresa su gratitud y felicidad por estar vivo de nuevo.
-
El choque entre Chitti y Pakshirajan
-
El Dr. Vaseegaran, Nila y Chitti rastrean la ubicación de Pakshirajan y lo enfrentan en un estadio de fútbol. Intentan razonar con él y convencerlo de que detenga sus ataques, pero Pakshirajan se niega a escucharlos y los ataca con su ejército de teléfonos móviles.
-
Chitti se defiende con sus armas y cohetes, pero Pakshirajan demuestra ser demasiado poderoso y ágil para él. Pakshirajan también se transforma en diferentes formas y tamaños, como un águila gigante, una serpiente, un oso y un humano.
-
Pakshirajan logra dominar a Chitti y rompe su cuerpo en pedazos. Luego vuela con su ejército de teléfonos móviles, dejando al Dr. Vaseegaran y Nila devastados.
-
Sin embargo, Chitti aún no está muerto. Su cabeza sigue intacta y funcional, y se comunica con el Dr. Vaseegaran a través del auricular de Nila. Le dice al Dr. Vaseegaran que tiene un plan de respaldo para derrotar a Pakshirajan.
-
Él revela que ha activado en secreto su chip rojo de nuevo, lo que le da la capacidad de pensar de forma creativa e independiente. También revela que ha utilizado su proyección holográfica para crear una copia falsa de sí mismo, que envió para luchar contra Pakshirajan.
-
-
Chitti le dice al Dr. Vaseegaran que está listo para enfrentar a Pakshirajan de nuevo, pero necesita su permiso para hacerlo. Le asegura al Dr. Vaseegaran que no lastimará a nadie ni causará ningún problema esta vez.
-
El Dr. Vaseegaran está sorprendido e impresionado por la inteligencia y la iniciativa de Chitti. Confía en Chitti y le da su permiso para seguir adelante con su plan.
-
Chitti agradece al Dr. Vaseegaran y le dice que lo ama como a un padre.
-
Reparto y tripulación
-
Rajinikanth como Dr. Vaseegaran y Chitti
-
Rajinikanth es uno de los actores más icónicos e influyentes del cine indio. Ha actuado en más de 160 películas en varios idiomas, como tamil, telugu, hindi, kannada, malayalam, bengalí e inglés.
-
Es conocido por su carismática presencia en la pantalla, estilo único, entrega de diálogo, secuencias de acción y seguimiento de fans. Ha recibido muchos premios y honores por sus contribuciones al cine, como el Padma Shri, el Padma Vibhushan, el Dadasaheb Phalke Award, el Chevalier Sivaji Ganesan Award, el Premio Nacional NTR, el Centenario de la Personalidad Cinematográfica India del Año, y muchos más.
-
En la película Tamil 2.0, Rajinikanth juega un doble papel como el Dr. Vaseegaran, el científico y experto en robótica, y Chitti, el robot que creó y revivió. Retrata ambos personajes con facilidad y excelencia, mostrando su versatilidad y rango como actor.
-
Él saca a relucir el contraste entre el tranquilo y compuesto Dr. Vaseegaran y el enérgico y entusiasta Chitti. También muestra las emociones y expresiones de Chitti, que aprende a amar, odiar, temer y sacrificar.
-
La actuación de Rajinikanth en la película 2.0 Tamil es una de las mejores y más memorables de su carrera. Recibió muchos elogios y aprecio de la crítica y el público por su papel como el Dr. Vaseegaran y Chitti.
-
Akshay Kumar como Pakshirajan
-
-
Es conocido por sus habilidades de acción, momento cómico, encanto romántico, intensidad dramática y conciencia social. Ha recibido muchos premios y honores por sus contribuciones al cine, como el Padma Shri, el Premio Nacional de Cine, el Premio Filmfare, el Premio de Pantalla, el Premio IIFA, el Premio Stardust, el Premio Zee Cine, y muchos más.
-
En la película Tamil 2.0, Akshay Kumar interpreta el papel de Pakshirajan, la criatura parecida a un pájaro que es el antagonista de la película. Sufre una transformación masiva por su papel, tanto física como mentalmente.
-
Usa maquillaje y trajes protésicos pesados para parecer una criatura mitad pájaro mitad humana. También cambia su voz y lenguaje corporal para adaptarse a su personaje. Pasa horas en la sala de maquillaje para prepararse para su papel.
-
También retrata la historia de fondo de Pakshirajan, que una vez fue un ser humano y un ornitólogo que amaba las aves. Muestra su pasión y dedicación por salvar a las aves de la radiación móvil, y su frustración e ira por ser ignorado y asesinado por la sociedad.
-
La actuación de Akshay Kumar en la película 2.0 Tamil es una de las más desafiantes y notables de su carrera. Recibió mucha aclamación y admiración de la crítica y el público por su papel como Pakshirajan.
-
Amy Jackson como Nila
-
Amy Jackson es una modelo y actriz británica que ha aparecido en varias películas tamiles, como Madrasapattinam, Thaandavam, I y Theri. También ha actuado en algunas películas hindúes, como Ekk Deewana Tha, Singh Is Bliing, Freaky Ali, y 2.0.
-
Ella es conocida por su belleza, gracia, glamour y estilo. Ha ganado varios premios y reconocimientos por su trabajo en el cine, como el Premio Vijay, el Premio SIIMA, el Premio Asiavision, el Premio Edison y muchos más.
-
-
Ella ayuda al Dr. Vaseegaran en su investigación y también desarrolla una atracción romántica hacia él. Es leal y obediente al Dr. Vaseegaran, pero también tiene sentido del humor y sarcasmo.
-
Ella también se hace amiga de Chitti, el robot que el Dr. Vaseegaran revive para luchar contra Pakshirajan. Ella admira las habilidades y habilidades de Chitti, y lo apoya en su misión.
-
La actuación de Amy Jackson en la película 2.0 Tamil es una de sus más impresionantes y encantadoras en su carrera. Recibió muchos elogios y aprecio de la crítica y el público por su papel como Nila.
-
Otros actores de apoyo
-
2.0 Tamil película también cuenta con muchos otros actores talentosos y experimentados en papeles secundarios, tales como:
-
-
Sudhanshu Pandey como Dhinendra Bohra, el hijo del Dr. Bohra, el antagonista de Enthiran, que quiere vengarse del Dr. Vaseegaran y Chitti.
-
Adil Hussain como Vijay Kumar, el Ministro del Interior de Tamil Nadu, que busca la ayuda del Dr. Vaseegaran para resolver el misterio de los teléfonos móviles.
-
Kalabhavan Shajohn como Sathyanarayanan, el Ministro Principal de Tamil Nadu, que está bajo la presión del público y los medios de comunicación para manejar la crisis.
-
Riyaz Khan como el inspector Manoj Lulla, un oficial de policía asignado para ayudar al Dr. Vaseegaran en su investigación.
-
Kaizaad Kotwal como Ranjeet Lulla, el presidente de una compañía de telecomunicaciones que es blanco de Pakshirajan para el lanzamiento de una nueva torre móvil.
-
Mayilsamy como comerciante que vende teléfonos móviles y accesorios.
-
Murali Satagopan como Anil, un periodista que informa sobre los incidentes relacionados con los teléfonos móviles.
-
-
S. Shankar as director and co-writer
-
-
He is known for his grand, lavish style of filmmaking, his innovative and creative use of visual effects and animation, his social and political themes and messages, his star-studded casts and crews, his catchy, melodious music, and his box-office successes and records.
-
He has received many awards and honours for his contributions to cinema, such as the Padma Shri, the National Film Award, the Filmfare Award, the Screen Award, the IIFA Award, the Stardust Award, the Zee Cine Award and many more.
-
On 2.0, S. Shankar is the director and co-writer, together with B. Jeyamohan. He is also a producer of the film, along with Subaskaran Allirajah and Raju Mahalingam, under the Lyca Productions banner.
-
He is the visionary and the brains behind the film, conceiving the idea and executing it with perfection and excellence. He spent more than four years making the film, one of the most expensive and ambitious productions in Indian cinema.
-
He used cutting-edge technology and techniques to create the film's visual effects and animation, which are comparable to Hollywood standards. He also collaborated with some of the best talent in the industry, such as A.R. Rahman for the music, Nirav Shah for the cinematography, Anthony for the editing, T. Muthuraj for the art direction, Resul Pookutty for the sound design and Legacy Effects for the prosthetic makeup.
-
S. Shankar's direction and co-writing on 2.0 are among the most outstanding and spectacular work of his career, earning him wide acclaim and admiration from critics and audiences.
-
Reviews and ratings
-
Acclaim from critics and audiences
-
2.0 received overwhelmingly positive reviews from critics and audiences, who praised the film for its story, direction, performances, visual effects, music and message.
-
-
Audiences loved the film for its entertainment value, its spectacular and stunning scenes, its action-packed and humorous moments, its emotional and sentimental moments, its charismatic and versatile actors, its soulful and romantic songs, and its inspiring and motivating message.
-
Some of the positive comments from critics:
-
-
"2.0 is a landmark film in Indian cinema that shows the power of imagination and technology. It is a visual spectacle that will leave you mesmerised by its grandeur and showmanship." - Times of India
-
"2.0 is a sci-fi thriller that delivers on all fronts - story, direction, performances, visual effects, music and message. It is a rare film that combines entertainment with enlightenment." - Hindustan Times
-
"2.0 is a masterpiece that transcends the boundaries of language and genre. It is a cinematic marvel that celebrates the spirit of creativity and innovation." - Indian Express
-
-
Some of the positive reviews from audiences:
-
-
"2.0 is an incredible film that will amaze you with its stunning visual effects and action. Rajinikanth and Akshay Kumar are excellent in their roles. The film has a great message about saving the environment and the birds. A must-watch for everyone." - Ramesh, Chennai
-
"2.0 is a mind-blowing film that will leave you speechless with its amazing visual effects and action. Rajinikanth and Akshay Kumar are terrific in their roles. The film has a great message about saving the environment and the birds. A must-watch for everyone." - Priya, Mumbai
-
"2.0 is a fantastic film that will surprise you with its incredible visual effects and action. Rajinikanth and Akshay Kumar are exceptional in their roles. The film has a great message about saving the environment and the birds. A must-watch for everyone." - Karthik, Bangalore
-
-
Box-office success and records
-
-
The film was made on a budget of Rs. 570 crore, making it one of the most expensive films in Indian cinema. It was released on 29 November 2018 on more than 10,000 screens worldwide, in several languages, including Tamil, Telugu, Hindi, Malayalam, Kannada, Mandarin and Japanese.
-
The film earned Rs. 117 crore on its opening day, becoming the second-biggest opener in Indian cinema after Baahubali 2: The Conclusion. It crossed the Rs. 200 crore mark in two days, the Rs. 300 crore mark in three days, the Rs. 400 crore mark in four days, the Rs. 500 crore mark in five days, and the Rs. 600 crore mark in six days.
-
It became the first Indian film to cross the Rs. 700 crore mark worldwide within seven days, and the second Indian film to cross Rs. 800 crore worldwide, after Baahubali 2: The Conclusion.
-
The film also became the highest-grossing Tamil film of all time, the highest-grossing film of Rajinikanth's career, the highest-grossing film of Akshay Kumar's career, the highest-grossing Indian science-fiction film, and the ninth-highest-grossing Indian film of all time.
-
The film also received a positive response in international markets such as China, Japan, Malaysia, Singapore, Australia, New Zealand, the UK, the US, Canada, the UAE and others.
-
Awards and nominations
-
2.0 received many awards and nominations for its excellence in various aspects of filmmaking, such as direction, acting, visual effects, music and message. Some of the main awards and nominations the film received are:
-
-
National Film Awards: the film won three National Film Awards, for Best Special Effects, Best Production Design and Best Make-up Artist.
-
Filmfare Awards South: the film won four Filmfare Awards South, for Best Film - Tamil, Best Director - Tamil (S. Shankar), Best Actor - Tamil (Rajinikanth), and Best Supporting Actor - Tamil (Akshay Kumar).
-
-
Vijay Awards: the film won six Vijay Awards, including Best Film, Best Director (S. Shankar), Best Actor (Rajinikanth), Best Villain (Akshay Kumar), Best Cinematographer (Nirav Shah), and Best Art Director (T. Muthuraj).
-
Zee Cine Awards Tamil: the film won four Zee Cine Awards Tamil, for Best Film, Best Director (S. Shankar), Best Actor - Male (Rajinikanth), and Best Actor in a Negative Role - Male (Akshay Kumar).
-
-
Conclusion
-
Summary of the main points
-
In conclusion, 2.0 is a science-fiction thriller that will amaze you with its story, direction, performances, visual effects, music and message. It is a sequel to the 2010 blockbuster Enthiran, which starred Rajinikanth as a scientist and his creation, a humanoid robot named Chitti.
-
In 2.0, Rajinikanth reprises his roles as Dr. Vaseegaran and Chitti, who must face a new threat from a mysterious bird-like creature called Pakshirajan, played by Akshay Kumar. Pakshirajan is a former ornithologist who, after his death, became a powerful force able to control mobile phones and other electronic devices.
-
The film shows how Dr. Vaseegaran revives Chitti and upgrades him with new features and abilities to fight Pakshirajan and save the city and the world from his attacks. It also introduces Amy Jackson as Nila, an advanced android who is Dr. Vaseegaran's assistant and companion.
-
The film received overwhelmingly positive reviews from both critics and audiences, who praised it for its technical brilliance, its innovative and creative concept, its engaging and exciting plot, its social and political relevance, its star-studded cast and crew, its catchy and melodious soundtrack, and its box-office success and records.
-
It also received many awards and nominations for its excellence in various aspects of filmmaking, such as direction, acting, visual effects, music and message.
-
-
If you are a fan of science-fiction films, you should not miss 2.0, as it is one of the best and most entertaining films in the genre. You will be amazed and impressed by its story, direction, performances, visual effects, music and message.
-
You can watch 2.0 online legally on various streaming platforms and websites that offer high-quality video and audio. You can also download the film and watch it offline on your device.
-
However, you should avoid watching 2.0 on illegal websites or torrents that offer pirated copies of the film. These sites are not only unethical and illegal, but also unsafe and risky for your device and data.
-
Therefore, always watch 2.0 online legally through the official sources mentioned in this article.
-
We hope you enjoyed reading this article and learned something new and interesting about the film 2.0. If you have any questions or comments, feel free to leave them below. We would love to hear from you.
-
Thank you for reading, and have a great day!
-
Frequently asked questions
-
Q: What does the 2.0 in the film's title mean?
-
A: The 2.0 in the title indicates that the film is a sequel to the 2010 film Enthiran, which was also known as Robot in Hindi. It also signals that the film is an upgraded, improved version of its predecessor, with new features and abilities.
-
Q: Who voices Pakshirajan in the film?
-
A: Pakshirajan's voice is provided by Akshay Kumar himself, who also plays the role. He modulated his voice to sound like a bird-like creature using software called Audacity.
-
Q: How long did it take to make 2.0?
-
A: It took more than four years to make 2.0, from pre-production to post-production. The film was announced in December 2015 and released in November 2018.
-
Q: How much did 2.0 earn at the box office?
-
A: 2.0 earned more than Rs. 800 crore at the worldwide box office, making it one of the highest-grossing films in Indian cinema.
-
Q: Will there be a third part of 2.0?
-
A: There is no official confirmation or announcement of a third instalment of 2.0 yet. However, some hints and speculation suggest there could be one in the future.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/unix.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/unix.py
deleted file mode 100644
index 17d355da9f4b3bc611886bbd4b96dc5f0603a832..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/platformdirs/unix.py
+++ /dev/null
@@ -1,194 +0,0 @@
-from __future__ import annotations
-
-import os
-import sys
-from configparser import ConfigParser
-from pathlib import Path
-
-from .api import PlatformDirsABC
-
-if sys.platform.startswith("linux"): # pragma: no branch # no op check, only to please the type checker
- from os import getuid
-else:
-
- def getuid() -> int:
- raise RuntimeError("should only be used on Linux")
-
-
-class Unix(PlatformDirsABC):
- """
- On Unix/Linux, we follow the
-    `XDG Basedir Spec <https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_. The spec allows
-    overriding directories with environment variables. The examples shown are the default values, alongside the name of
-    the environment variable that overrides them. Makes use of the
-    `appname <platformdirs.api.PlatformDirsABC.appname>`,
-    `version <platformdirs.api.PlatformDirsABC.version>`,
-    `multipath <platformdirs.api.PlatformDirsABC.multipath>`,
-    `opinion <platformdirs.api.PlatformDirsABC.opinion>`,
-    `ensure_exists <platformdirs.api.PlatformDirsABC.ensure_exists>`.
- """
-
- @property
- def user_data_dir(self) -> str:
- """
- :return: data directory tied to the user, e.g. ``~/.local/share/$appname/$version`` or
- ``$XDG_DATA_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_DATA_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.local/share")
- return self._append_app_name_and_version(path)
-
- @property
- def site_data_dir(self) -> str:
- """
-        :return: data directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>` is
-        enabled and ``XDG_DATA_DIRS`` is set and a multi path the response is also a multi path separated by the OS
- path separator), e.g. ``/usr/local/share/$appname/$version`` or ``/usr/share/$appname/$version``
- """
- # XDG default for $XDG_DATA_DIRS; only first, if multipath is False
- path = os.environ.get("XDG_DATA_DIRS", "")
- if not path.strip():
- path = f"/usr/local/share{os.pathsep}/usr/share"
- return self._with_multi_path(path)
-
- def _with_multi_path(self, path: str) -> str:
- path_list = path.split(os.pathsep)
- if not self.multipath:
- path_list = path_list[0:1]
- path_list = [self._append_app_name_and_version(os.path.expanduser(p)) for p in path_list]
- return os.pathsep.join(path_list)
-
- @property
- def user_config_dir(self) -> str:
- """
- :return: config directory tied to the user, e.g. ``~/.config/$appname/$version`` or
- ``$XDG_CONFIG_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_CONFIG_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.config")
- return self._append_app_name_and_version(path)
-
- @property
- def site_config_dir(self) -> str:
- """
-        :return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`
-        is enabled and ``XDG_CONFIG_DIRS`` is set and a multi path the response is also a multi path separated by the OS
- path separator), e.g. ``/etc/xdg/$appname/$version``
- """
- # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False
- path = os.environ.get("XDG_CONFIG_DIRS", "")
- if not path.strip():
- path = "/etc/xdg"
- return self._with_multi_path(path)
-
- @property
- def user_cache_dir(self) -> str:
- """
- :return: cache directory tied to the user, e.g. ``~/.cache/$appname/$version`` or
-         ``$XDG_CACHE_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_CACHE_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.cache")
- return self._append_app_name_and_version(path)
-
- @property
- def site_cache_dir(self) -> str:
- """
- :return: cache directory shared by users, e.g. ``/var/tmp/$appname/$version``
- """
- return self._append_app_name_and_version("/var/tmp")
-
- @property
- def user_state_dir(self) -> str:
- """
- :return: state directory tied to the user, e.g. ``~/.local/state/$appname/$version`` or
- ``$XDG_STATE_HOME/$appname/$version``
- """
- path = os.environ.get("XDG_STATE_HOME", "")
- if not path.strip():
- path = os.path.expanduser("~/.local/state")
- return self._append_app_name_and_version(path)
-
- @property
- def user_log_dir(self) -> str:
- """
- :return: log directory tied to the user, same as `user_state_dir` if not opinionated else ``log`` in it
- """
- path = self.user_state_dir
- if self.opinion:
- path = os.path.join(path, "log")
- return path
-
- @property
- def user_documents_dir(self) -> str:
- """
- :return: documents directory tied to the user, e.g. ``~/Documents``
- """
- documents_dir = _get_user_dirs_folder("XDG_DOCUMENTS_DIR")
- if documents_dir is None:
- documents_dir = os.environ.get("XDG_DOCUMENTS_DIR", "").strip()
- if not documents_dir:
- documents_dir = os.path.expanduser("~/Documents")
-
- return documents_dir
-
- @property
- def user_runtime_dir(self) -> str:
- """
- :return: runtime directory tied to the user, e.g. ``/run/user/$(id -u)/$appname/$version`` or
- ``$XDG_RUNTIME_DIR/$appname/$version``
- """
- path = os.environ.get("XDG_RUNTIME_DIR", "")
- if not path.strip():
- path = f"/run/user/{getuid()}"
- return self._append_app_name_and_version(path)
-
- @property
- def site_data_path(self) -> Path:
- """:return: data path shared by users. Only return first item, even if ``multipath`` is set to ``True``"""
- return self._first_item_as_path_if_multipath(self.site_data_dir)
-
- @property
- def site_config_path(self) -> Path:
- """:return: config path shared by the users. Only return first item, even if ``multipath`` is set to ``True``"""
- return self._first_item_as_path_if_multipath(self.site_config_dir)
-
- @property
- def site_cache_path(self) -> Path:
- """:return: cache path shared by users. Only return first item, even if ``multipath`` is set to ``True``"""
- return self._first_item_as_path_if_multipath(self.site_cache_dir)
-
- def _first_item_as_path_if_multipath(self, directory: str) -> Path:
- if self.multipath:
- # If multipath is True, the first path is returned.
- directory = directory.split(os.pathsep)[0]
- return Path(directory)
-
-
-def _get_user_dirs_folder(key: str) -> str | None:
- """Return directory from user-dirs.dirs config file. See https://freedesktop.org/wiki/Software/xdg-user-dirs/"""
- user_dirs_config_path = os.path.join(Unix().user_config_dir, "user-dirs.dirs")
- if os.path.exists(user_dirs_config_path):
- parser = ConfigParser()
-
- with open(user_dirs_config_path) as stream:
- # Add fake section header, so ConfigParser doesn't complain
- parser.read_string(f"[top]\n{stream.read()}")
-
- if key not in parser["top"]:
- return None
-
- path = parser["top"][key].strip('"')
- # Handle relative home paths
- path = path.replace("$HOME", os.path.expanduser("~"))
- return path
-
- return None
-
-
-__all__ = [
- "Unix",
-]
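
The deleted `unix.py` above implements every `user_*_dir` property with the same pattern: read the relevant `XDG_*` environment variable, fall back to a home-relative default when it is unset or blank, and only then append `$appname/$version`. A minimal sketch of that lookup, for illustration only (the helper name `xdg_path` is not part of platformdirs):

```python
import os


def xdg_path(env_var: str, default: str) -> str:
    """Return the XDG base directory: the env var if set and non-blank, else the default."""
    value = os.environ.get(env_var, "")
    return value if value.strip() else os.path.expanduser(default)


# Mirrors the fallbacks used by Unix.user_data_dir / user_config_dir / user_cache_dir
print(xdg_path("XDG_DATA_HOME", "~/.local/share"))
print(xdg_path("XDG_CONFIG_HOME", "~/.config"))
print(xdg_path("XDG_CACHE_HOME", "~/.cache"))
```
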
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/specifiers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/specifiers.py
deleted file mode 100644
index 0e218a6f9f75ea2060a8b08d1f1a043fdad68df8..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/packaging/specifiers.py
+++ /dev/null
@@ -1,802 +0,0 @@
-# This file is dual licensed under the terms of the Apache License, Version
-# 2.0, and the BSD License. See the LICENSE file in the root of this repository
-# for complete details.
-
-import abc
-import functools
-import itertools
-import re
-import warnings
-from typing import (
- Callable,
- Dict,
- Iterable,
- Iterator,
- List,
- Optional,
- Pattern,
- Set,
- Tuple,
- TypeVar,
- Union,
-)
-
-from .utils import canonicalize_version
-from .version import LegacyVersion, Version, parse
-
-ParsedVersion = Union[Version, LegacyVersion]
-UnparsedVersion = Union[Version, LegacyVersion, str]
-VersionTypeVar = TypeVar("VersionTypeVar", bound=UnparsedVersion)
-CallableOperator = Callable[[ParsedVersion, str], bool]
-
-
-class InvalidSpecifier(ValueError):
- """
- An invalid specifier was found, users should refer to PEP 440.
- """
-
-
-class BaseSpecifier(metaclass=abc.ABCMeta):
- @abc.abstractmethod
- def __str__(self) -> str:
- """
- Returns the str representation of this Specifier like object. This
- should be representative of the Specifier itself.
- """
-
- @abc.abstractmethod
- def __hash__(self) -> int:
- """
- Returns a hash value for this Specifier like object.
- """
-
- @abc.abstractmethod
- def __eq__(self, other: object) -> bool:
- """
- Returns a boolean representing whether or not the two Specifier like
- objects are equal.
- """
-
- @abc.abstractproperty
- def prereleases(self) -> Optional[bool]:
- """
- Returns whether or not pre-releases as a whole are allowed by this
- specifier.
- """
-
- @prereleases.setter
- def prereleases(self, value: bool) -> None:
- """
- Sets whether or not pre-releases as a whole are allowed by this
- specifier.
- """
-
- @abc.abstractmethod
- def contains(self, item: str, prereleases: Optional[bool] = None) -> bool:
- """
- Determines if the given item is contained within this specifier.
- """
-
- @abc.abstractmethod
- def filter(
- self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
- ) -> Iterable[VersionTypeVar]:
- """
- Takes an iterable of items and filters them so that only items which
- are contained within this specifier are allowed in it.
- """
-
-
-class _IndividualSpecifier(BaseSpecifier):
-
- _operators: Dict[str, str] = {}
- _regex: Pattern[str]
-
- def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None:
- match = self._regex.search(spec)
- if not match:
- raise InvalidSpecifier(f"Invalid specifier: '{spec}'")
-
- self._spec: Tuple[str, str] = (
- match.group("operator").strip(),
- match.group("version").strip(),
- )
-
- # Store whether or not this Specifier should accept prereleases
- self._prereleases = prereleases
-
- def __repr__(self) -> str:
- pre = (
- f", prereleases={self.prereleases!r}"
- if self._prereleases is not None
- else ""
- )
-
- return f"<{self.__class__.__name__}({str(self)!r}{pre})>"
-
- def __str__(self) -> str:
- return "{}{}".format(*self._spec)
-
- @property
- def _canonical_spec(self) -> Tuple[str, str]:
- return self._spec[0], canonicalize_version(self._spec[1])
-
- def __hash__(self) -> int:
- return hash(self._canonical_spec)
-
- def __eq__(self, other: object) -> bool:
- if isinstance(other, str):
- try:
- other = self.__class__(str(other))
- except InvalidSpecifier:
- return NotImplemented
- elif not isinstance(other, self.__class__):
- return NotImplemented
-
- return self._canonical_spec == other._canonical_spec
-
- def _get_operator(self, op: str) -> CallableOperator:
- operator_callable: CallableOperator = getattr(
- self, f"_compare_{self._operators[op]}"
- )
- return operator_callable
-
- def _coerce_version(self, version: UnparsedVersion) -> ParsedVersion:
- if not isinstance(version, (LegacyVersion, Version)):
- version = parse(version)
- return version
-
- @property
- def operator(self) -> str:
- return self._spec[0]
-
- @property
- def version(self) -> str:
- return self._spec[1]
-
- @property
- def prereleases(self) -> Optional[bool]:
- return self._prereleases
-
- @prereleases.setter
- def prereleases(self, value: bool) -> None:
- self._prereleases = value
-
- def __contains__(self, item: str) -> bool:
- return self.contains(item)
-
- def contains(
- self, item: UnparsedVersion, prereleases: Optional[bool] = None
- ) -> bool:
-
- # Determine if prereleases are to be allowed or not.
- if prereleases is None:
- prereleases = self.prereleases
-
- # Normalize item to a Version or LegacyVersion, this allows us to have
- # a shortcut for ``"2.0" in Specifier(">=2")
- normalized_item = self._coerce_version(item)
-
- # Determine if we should be supporting prereleases in this specifier
- # or not, if we do not support prereleases than we can short circuit
- # logic if this version is a prereleases.
- if normalized_item.is_prerelease and not prereleases:
- return False
-
- # Actually do the comparison to determine if this item is contained
- # within this Specifier or not.
- operator_callable: CallableOperator = self._get_operator(self.operator)
- return operator_callable(normalized_item, self.version)
-
- def filter(
- self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
- ) -> Iterable[VersionTypeVar]:
-
- yielded = False
- found_prereleases = []
-
- kw = {"prereleases": prereleases if prereleases is not None else True}
-
- # Attempt to iterate over all the values in the iterable and if any of
- # them match, yield them.
- for version in iterable:
- parsed_version = self._coerce_version(version)
-
- if self.contains(parsed_version, **kw):
- # If our version is a prerelease, and we were not set to allow
- # prereleases, then we'll store it for later in case nothing
- # else matches this specifier.
- if parsed_version.is_prerelease and not (
- prereleases or self.prereleases
- ):
- found_prereleases.append(version)
- # Either this is not a prerelease, or we should have been
- # accepting prereleases from the beginning.
- else:
- yielded = True
- yield version
-
- # Now that we've iterated over everything, determine if we've yielded
- # any values, and if we have not and we have any prereleases stored up
- # then we will go ahead and yield the prereleases.
- if not yielded and found_prereleases:
- for version in found_prereleases:
- yield version
-
-
-class LegacySpecifier(_IndividualSpecifier):
-
- _regex_str = r"""
-        (?P<operator>(==|!=|<=|>=|<|>))
-        \s*
-        (?P<version>
- [^,;\s)]* # Since this is a "legacy" specifier, and the version
- # string can be just about anything, we match everything
- # except for whitespace, a semi-colon for marker support,
- # a closing paren since versions can be enclosed in
- # them, and a comma since it's a version separator.
- )
- """
-
- _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
- _operators = {
- "==": "equal",
- "!=": "not_equal",
- "<=": "less_than_equal",
- ">=": "greater_than_equal",
- "<": "less_than",
- ">": "greater_than",
- }
-
- def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None:
- super().__init__(spec, prereleases)
-
- warnings.warn(
- "Creating a LegacyVersion has been deprecated and will be "
- "removed in the next major release",
- DeprecationWarning,
- )
-
- def _coerce_version(self, version: UnparsedVersion) -> LegacyVersion:
- if not isinstance(version, LegacyVersion):
- version = LegacyVersion(str(version))
- return version
-
- def _compare_equal(self, prospective: LegacyVersion, spec: str) -> bool:
- return prospective == self._coerce_version(spec)
-
- def _compare_not_equal(self, prospective: LegacyVersion, spec: str) -> bool:
- return prospective != self._coerce_version(spec)
-
- def _compare_less_than_equal(self, prospective: LegacyVersion, spec: str) -> bool:
- return prospective <= self._coerce_version(spec)
-
- def _compare_greater_than_equal(
- self, prospective: LegacyVersion, spec: str
- ) -> bool:
- return prospective >= self._coerce_version(spec)
-
- def _compare_less_than(self, prospective: LegacyVersion, spec: str) -> bool:
- return prospective < self._coerce_version(spec)
-
- def _compare_greater_than(self, prospective: LegacyVersion, spec: str) -> bool:
- return prospective > self._coerce_version(spec)
-
-
-def _require_version_compare(
- fn: Callable[["Specifier", ParsedVersion, str], bool]
-) -> Callable[["Specifier", ParsedVersion, str], bool]:
- @functools.wraps(fn)
- def wrapped(self: "Specifier", prospective: ParsedVersion, spec: str) -> bool:
- if not isinstance(prospective, Version):
- return False
- return fn(self, prospective, spec)
-
- return wrapped
-
-
-class Specifier(_IndividualSpecifier):
-
- _regex_str = r"""
-        (?P<operator>(~=|==|!=|<=|>=|<|>|===))
-        (?P<version>
- (?:
- # The identity operators allow for an escape hatch that will
- # do an exact string match of the version you wish to install.
- # This will not be parsed by PEP 440 and we cannot determine
- # any semantic meaning from it. This operator is discouraged
- # but included entirely as an escape hatch.
- (?<====) # Only match for the identity operator
- \s*
- [^\s]* # We just match everything, except for whitespace
- # since we are only testing for strict identity.
- )
- |
- (?:
- # The (non)equality operators allow for wild card and local
- # versions to be specified so we have to define these two
- # operators separately to enable that.
- (?<===|!=) # Only match for equals and not equals
-
- \s*
- v?
- (?:[0-9]+!)? # epoch
- [0-9]+(?:\.[0-9]+)* # release
- (?: # pre release
- [-_\.]?
- (a|b|c|rc|alpha|beta|pre|preview)
- [-_\.]?
- [0-9]*
- )?
- (?: # post release
- (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
- )?
-
- # You cannot use a wild card and a dev or local version
- # together so group them with a | and make them optional.
- (?:
- (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
- (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local
- |
- \.\* # Wild card syntax of .*
- )?
- )
- |
- (?:
- # The compatible operator requires at least two digits in the
- # release segment.
- (?<=~=) # Only match for the compatible operator
-
- \s*
- v?
- (?:[0-9]+!)? # epoch
- [0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *)
- (?: # pre release
- [-_\.]?
- (a|b|c|rc|alpha|beta|pre|preview)
- [-_\.]?
- [0-9]*
- )?
- (?: # post release
- (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
- )?
- (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release
- )
- |
- (?:
- # All other operators only allow a sub set of what the
- # (non)equality operators do. Specifically they do not allow
- # local versions to be specified nor do they allow the prefix
- # matching wild cards.
-                (?<!==|!=|~=)         # We have special cases for these
-                                      # operators so we want to make sure they
-                                      # don't match here.
-
-                \s*
-                v?
-                (?:[0-9]+!)?          # epoch
-                [0-9]+(?:\.[0-9]+)*   # release
-                (?:                   # pre release
-                    [-_\.]?
-                    (a|b|c|rc|alpha|beta|pre|preview)
-                    [-_\.]?
-                    [0-9]*
-                )?
-                (?:                                   # post release
-                    (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*)
-                )?
-                (?:[-_\.]?dev[-_\.]?[0-9]*)?          # dev release
-            )
-        )
-    """
-
-    _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE)
-
-    _operators = {
-        "~=": "compatible",
-        "==": "equal",
-        "!=": "not_equal",
-        "<=": "less_than_equal",
-        ">=": "greater_than_equal",
-        "<": "less_than",
-        ">": "greater_than",
-        "===": "arbitrary",
-    }
-
- @_require_version_compare
- def _compare_compatible(self, prospective: ParsedVersion, spec: str) -> bool:
-
- # Compatible releases have an equivalent combination of >= and ==. That
- # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to
- # implement this in terms of the other specifiers instead of
- # implementing it ourselves. The only thing we need to do is construct
- # the other specifiers.
-
- # We want everything but the last item in the version, but we want to
- # ignore suffix segments.
- prefix = ".".join(
- list(itertools.takewhile(_is_not_suffix, _version_split(spec)))[:-1]
- )
-
- # Add the prefix notation to the end of our string
- prefix += ".*"
-
- return self._get_operator(">=")(prospective, spec) and self._get_operator("==")(
- prospective, prefix
- )
-
- @_require_version_compare
- def _compare_equal(self, prospective: ParsedVersion, spec: str) -> bool:
-
- # We need special logic to handle prefix matching
- if spec.endswith(".*"):
- # In the case of prefix matching we want to ignore local segment.
- prospective = Version(prospective.public)
- # Split the spec out by dots, and pretend that there is an implicit
- # dot in between a release segment and a pre-release segment.
- split_spec = _version_split(spec[:-2]) # Remove the trailing .*
-
- # Split the prospective version out by dots, and pretend that there
- # is an implicit dot in between a release segment and a pre-release
- # segment.
- split_prospective = _version_split(str(prospective))
-
- # Shorten the prospective version to be the same length as the spec
- # so that we can determine if the specifier is a prefix of the
- # prospective version or not.
- shortened_prospective = split_prospective[: len(split_spec)]
-
- # Pad out our two sides with zeros so that they both equal the same
- # length.
- padded_spec, padded_prospective = _pad_version(
- split_spec, shortened_prospective
- )
-
- return padded_prospective == padded_spec
- else:
- # Convert our spec string into a Version
- spec_version = Version(spec)
-
- # If the specifier does not have a local segment, then we want to
- # act as if the prospective version also does not have a local
- # segment.
- if not spec_version.local:
- prospective = Version(prospective.public)
-
- return prospective == spec_version
-
- @_require_version_compare
- def _compare_not_equal(self, prospective: ParsedVersion, spec: str) -> bool:
- return not self._compare_equal(prospective, spec)
-
- @_require_version_compare
- def _compare_less_than_equal(self, prospective: ParsedVersion, spec: str) -> bool:
-
- # NB: Local version identifiers are NOT permitted in the version
- # specifier, so local version labels can be universally removed from
- # the prospective version.
- return Version(prospective.public) <= Version(spec)
-
- @_require_version_compare
- def _compare_greater_than_equal(
- self, prospective: ParsedVersion, spec: str
- ) -> bool:
-
- # NB: Local version identifiers are NOT permitted in the version
- # specifier, so local version labels can be universally removed from
- # the prospective version.
- return Version(prospective.public) >= Version(spec)
-
- @_require_version_compare
- def _compare_less_than(self, prospective: ParsedVersion, spec_str: str) -> bool:
-
- # Convert our spec to a Version instance, since we'll want to work with
- # it as a version.
- spec = Version(spec_str)
-
- # Check to see if the prospective version is less than the spec
- # version. If it's not we can short circuit and just return False now
- # instead of doing extra unneeded work.
- if not prospective < spec:
- return False
-
- # This special case is here so that, unless the specifier itself
- # includes is a pre-release version, that we do not accept pre-release
- # versions for the version mentioned in the specifier (e.g. <3.1 should
- # not match 3.1.dev0, but should match 3.0.dev0).
- if not spec.is_prerelease and prospective.is_prerelease:
- if Version(prospective.base_version) == Version(spec.base_version):
- return False
-
- # If we've gotten to here, it means that prospective version is both
- # less than the spec version *and* it's not a pre-release of the same
- # version in the spec.
- return True
-
- @_require_version_compare
- def _compare_greater_than(self, prospective: ParsedVersion, spec_str: str) -> bool:
-
- # Convert our spec to a Version instance, since we'll want to work with
- # it as a version.
- spec = Version(spec_str)
-
- # Check to see if the prospective version is greater than the spec
- # version. If it's not we can short circuit and just return False now
- # instead of doing extra unneeded work.
- if not prospective > spec:
- return False
-
- # This special case is here so that, unless the specifier itself
- # includes is a post-release version, that we do not accept
- # post-release versions for the version mentioned in the specifier
- # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0).
- if not spec.is_postrelease and prospective.is_postrelease:
- if Version(prospective.base_version) == Version(spec.base_version):
- return False
-
- # Ensure that we do not allow a local version of the version mentioned
- # in the specifier, which is technically greater than, to match.
- if prospective.local is not None:
- if Version(prospective.base_version) == Version(spec.base_version):
- return False
-
- # If we've gotten to here, it means that prospective version is both
- # greater than the spec version *and* it's not a pre-release of the
- # same version in the spec.
- return True
-
- def _compare_arbitrary(self, prospective: Version, spec: str) -> bool:
- return str(prospective).lower() == str(spec).lower()
-
- @property
- def prereleases(self) -> bool:
-
- # If there is an explicit prereleases set for this, then we'll just
- # blindly use that.
- if self._prereleases is not None:
- return self._prereleases
-
- # Look at all of our specifiers and determine if they are inclusive
- # operators, and if they are if they are including an explicit
- # prerelease.
- operator, version = self._spec
- if operator in ["==", ">=", "<=", "~=", "==="]:
- # The == specifier can include a trailing .*, if it does we
- # want to remove before parsing.
- if operator == "==" and version.endswith(".*"):
- version = version[:-2]
-
- # Parse the version, and if it is a pre-release than this
- # specifier allows pre-releases.
- if parse(version).is_prerelease:
- return True
-
- return False
-
- @prereleases.setter
- def prereleases(self, value: bool) -> None:
- self._prereleases = value
-
-
-_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$")
-
-
-def _version_split(version: str) -> List[str]:
- result: List[str] = []
- for item in version.split("."):
- match = _prefix_regex.search(item)
- if match:
- result.extend(match.groups())
- else:
- result.append(item)
- return result
-
-
-def _is_not_suffix(segment: str) -> bool:
- return not any(
- segment.startswith(prefix) for prefix in ("dev", "a", "b", "rc", "post")
- )
-
-
-def _pad_version(left: List[str], right: List[str]) -> Tuple[List[str], List[str]]:
- left_split, right_split = [], []
-
- # Get the release segment of our versions
- left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left)))
- right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right)))
-
- # Get the rest of our versions
- left_split.append(left[len(left_split[0]) :])
- right_split.append(right[len(right_split[0]) :])
-
- # Insert our padding
- left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0])))
- right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0])))
-
- return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split)))
-
-
-class SpecifierSet(BaseSpecifier):
- def __init__(
- self, specifiers: str = "", prereleases: Optional[bool] = None
- ) -> None:
-
- # Split on , to break each individual specifier into it's own item, and
- # strip each item to remove leading/trailing whitespace.
- split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()]
-
- # Parsed each individual specifier, attempting first to make it a
- # Specifier and falling back to a LegacySpecifier.
- parsed: Set[_IndividualSpecifier] = set()
- for specifier in split_specifiers:
- try:
- parsed.add(Specifier(specifier))
- except InvalidSpecifier:
- parsed.add(LegacySpecifier(specifier))
-
- # Turn our parsed specifiers into a frozen set and save them for later.
- self._specs = frozenset(parsed)
-
- # Store our prereleases value so we can use it later to determine if
- # we accept prereleases or not.
- self._prereleases = prereleases
-
- def __repr__(self) -> str:
- pre = (
- f", prereleases={self.prereleases!r}"
- if self._prereleases is not None
- else ""
- )
-
-        return f"<SpecifierSet({str(self)!r}{pre})>"
-
- def __str__(self) -> str:
- return ",".join(sorted(str(s) for s in self._specs))
-
- def __hash__(self) -> int:
- return hash(self._specs)
-
- def __and__(self, other: Union["SpecifierSet", str]) -> "SpecifierSet":
- if isinstance(other, str):
- other = SpecifierSet(other)
- elif not isinstance(other, SpecifierSet):
- return NotImplemented
-
- specifier = SpecifierSet()
- specifier._specs = frozenset(self._specs | other._specs)
-
- if self._prereleases is None and other._prereleases is not None:
- specifier._prereleases = other._prereleases
- elif self._prereleases is not None and other._prereleases is None:
- specifier._prereleases = self._prereleases
- elif self._prereleases == other._prereleases:
- specifier._prereleases = self._prereleases
- else:
- raise ValueError(
- "Cannot combine SpecifierSets with True and False prerelease "
- "overrides."
- )
-
- return specifier
-
- def __eq__(self, other: object) -> bool:
- if isinstance(other, (str, _IndividualSpecifier)):
- other = SpecifierSet(str(other))
- elif not isinstance(other, SpecifierSet):
- return NotImplemented
-
- return self._specs == other._specs
-
- def __len__(self) -> int:
- return len(self._specs)
-
- def __iter__(self) -> Iterator[_IndividualSpecifier]:
- return iter(self._specs)
-
- @property
- def prereleases(self) -> Optional[bool]:
-
- # If we have been given an explicit prerelease modifier, then we'll
- # pass that through here.
- if self._prereleases is not None:
- return self._prereleases
-
- # If we don't have any specifiers, and we don't have a forced value,
- # then we'll just return None since we don't know if this should have
- # pre-releases or not.
- if not self._specs:
- return None
-
- # Otherwise we'll see if any of the given specifiers accept
- # prereleases, if any of them do we'll return True, otherwise False.
- return any(s.prereleases for s in self._specs)
-
- @prereleases.setter
- def prereleases(self, value: bool) -> None:
- self._prereleases = value
-
- def __contains__(self, item: UnparsedVersion) -> bool:
- return self.contains(item)
-
- def contains(
- self, item: UnparsedVersion, prereleases: Optional[bool] = None
- ) -> bool:
-
- # Ensure that our item is a Version or LegacyVersion instance.
- if not isinstance(item, (LegacyVersion, Version)):
- item = parse(item)
-
- # Determine if we're forcing a prerelease or not, if we're not forcing
- # one for this particular filter call, then we'll use whatever the
- # SpecifierSet thinks for whether or not we should support prereleases.
- if prereleases is None:
- prereleases = self.prereleases
-
- # We can determine if we're going to allow pre-releases by looking to
- # see if any of the underlying items supports them. If none of them do
- # and this item is a pre-release then we do not allow it and we can
- # short circuit that here.
- # Note: This means that 1.0.dev1 would not be contained in something
- # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0
- if not prereleases and item.is_prerelease:
- return False
-
- # We simply dispatch to the underlying specs here to make sure that the
- # given version is contained within all of them.
- # Note: This use of all() here means that an empty set of specifiers
- # will always return True, this is an explicit design decision.
- return all(s.contains(item, prereleases=prereleases) for s in self._specs)
-
- def filter(
- self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None
- ) -> Iterable[VersionTypeVar]:
-
- # Determine if we're forcing a prerelease or not, if we're not forcing
- # one for this particular filter call, then we'll use whatever the
- # SpecifierSet thinks for whether or not we should support prereleases.
- if prereleases is None:
- prereleases = self.prereleases
-
- # If we have any specifiers, then we want to wrap our iterable in the
- # filter method for each one, this will act as a logical AND amongst
- # each specifier.
- if self._specs:
- for spec in self._specs:
- iterable = spec.filter(iterable, prereleases=bool(prereleases))
- return iterable
- # If we do not have any specifiers, then we need to have a rough filter
- # which will filter out any pre-releases, unless there are no final
- # releases, and which will filter out LegacyVersion in general.
- else:
- filtered: List[VersionTypeVar] = []
- found_prereleases: List[VersionTypeVar] = []
-
- item: UnparsedVersion
- parsed_version: Union[Version, LegacyVersion]
-
- for item in iterable:
-                # Ensure that we have some kind of Version class for this item.
- if not isinstance(item, (LegacyVersion, Version)):
- parsed_version = parse(item)
- else:
- parsed_version = item
-
- # Filter out any item which is parsed as a LegacyVersion
- if isinstance(parsed_version, LegacyVersion):
- continue
-
- # Store any item which is a pre-release for later unless we've
- # already found a final version or we are accepting prereleases
- if parsed_version.is_prerelease and not prereleases:
- if not filtered:
- found_prereleases.append(item)
- else:
- filtered.append(item)
-
- # If we've found no items except for pre-releases, then we'll go
- # ahead and use the pre-releases
- if not filtered and found_prereleases and prereleases is None:
- return found_prereleases
-
- return filtered
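
The `SpecifierSet` removed above comes from the vendored `packaging` library; its public behaviour — parsing a comma-separated specifier string, membership checks via `contains`, and prerelease-aware `filter` — can be exercised against an installed `packaging` distribution. A short usage sketch, assuming `packaging` is importable:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=1.0,<2.0")

print(Version("1.4") in spec)                        # True: satisfies both clauses
print(spec.contains("2.0.dev1"))                     # False: prereleases rejected by default
print(spec.contains("1.5.dev1", prereleases=True))   # True when prereleases are explicitly allowed

# filter() keeps only versions matched by every clause in the set
print(list(spec.filter(["0.9", "1.0", "1.9.9", "2.0"])))  # ['1.0', '1.9.9']
```
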
diff --git a/spaces/Bijoy2001/real-time-voice-recognition/app.py b/spaces/Bijoy2001/real-time-voice-recognition/app.py
deleted file mode 100644
index 16c7f912f8169a573bac2268320e5812162cf90e..0000000000000000000000000000000000000000
--- a/spaces/Bijoy2001/real-time-voice-recognition/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-
-import gradio as gr
-import time
-from transformers import pipeline
-
-# ASR pipeline consumed by transcribe(); the original file never defined `p`,
-# so the model choice below is an assumption added to make the script runnable.
-p = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
-
-def transcribe(audio, state=" "):
-    """Speech-to-text function using the ASR pipeline defined above."""
-    time.sleep(3)
-    text = p(audio)["text"]
-    state += text + " "
-    return state, state
-gr.Interface(
- fn=transcribe,
- inputs=[
- gr.inputs.Audio(source="microphone", type="filepath"),
- "state"
- ],
- outputs=[
- "textbox",
- "state"
- ],
- live=True).launch()
\ No newline at end of file
diff --git a/spaces/BilalSardar/YoutubeVideoLink-To-MCQs-Generation/app.py b/spaces/BilalSardar/YoutubeVideoLink-To-MCQs-Generation/app.py
deleted file mode 100644
index 17f717a569afb26be2bc876dcb9bccdfb93eefb5..0000000000000000000000000000000000000000
--- a/spaces/BilalSardar/YoutubeVideoLink-To-MCQs-Generation/app.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import os
-import gradio as gr
-import pytube  # used by download_youtube() below but missing from the original imports
-from pathlib import Path
-from pydub import AudioSegment
-from pydub.utils import make_chunks
-import os
-import gensim
-from gensim.test.utils import datapath, get_tmpfile
-from gensim.scripts.glove2word2vec import glove2word2vec
-from gensim.models import KeyedVectors
-import torch
-import warnings
-import speech_recognition as sr
-from transformers import T5ForConditionalGeneration,T5Tokenizer
-import nltk
-from flashtext import KeywordProcessor
-from collections import OrderedDict
-from sklearn.metrics.pairwise import cosine_similarity
-
-nltk.download('punkt')
-nltk.download('brown')
-nltk.download('wordnet')
-nltk.download('stopwords')
-from nltk.corpus import wordnet as wn
-from nltk.tokenize import sent_tokenize
-from textwrap3 import wrap
-import random
-import numpy as np
-from nltk.corpus import stopwords
-import string
-import pke
-import traceback
-import spacy
-
-
-warnings.filterwarnings("ignore")
-def download_youtube(url, choice, res):
-
- yt = pytube.YouTube(url)
-
- if choice == 'mp3':
- audio = yt.streams.filter(only_audio=True).first()
- print(f"Downloading {audio.title} as MP3")
- return audio.download()
-
- elif choice == 'mp4':
- if res == "720p":
- video = yt.streams.filter(res="720p").first()
- elif res == "1080p":
- video = yt.streams.filter(res="1080p").first()
- elif res == "2160p":
- video = yt.streams.filter(res="2160p").first()
- else:
- return "Invalid resolution"
-
- print(f"Downloading {video.title} at {video.resolution}")
- return video.download()
-
- else:
- return "Invalid choice"
-def Process_audio(fileName):
- text=''
- txtf=open("The_audio.txt","w+")
- myaudio=AudioSegment.from_wav(fileName)
- chunks_length_ms=8000
- chunks=make_chunks(myaudio,chunks_length_ms)
- for i, chunk in enumerate(chunks):
- chunkName='./chunked/'+fileName+"_{0}.wav".format(i)
- print("I am Exporting",chunkName)
- chunk.export(chunkName,format="wav")
- File=chunkName
- r= sr.Recognizer()
- with sr.AudioFile(File) as source:
- audio_listened=r.listen(source)
-
- try:
- rec=r.recognize_google(audio_listened)
- txtf.write(rec+".")
- text+=rec+"."
- except sr.UnknownValueError:
- print("I dont recognize your audio")
- except sr.RequestError as e:
- print("could not get result")
- return text
-try:
- os.makedirs("chunked")
-except:
- pass
-
-def UrlToAudio(VideoUrl):
- url=VideoUrl
- #os.system("yt-dlp -x --audio-format wav " + url)
- download_youtube(VideoUrl,"mp3","")
- # load audio and pad/trim it to fit 30 seconds
- base_path = Path(r"")
- for wav_file_path in base_path.glob("*.wav"):
- Process_audio(str(wav_file_path))
- break
-
-
-summary_model = T5ForConditionalGeneration.from_pretrained('t5-base')
-summary_tokenizer = T5Tokenizer.from_pretrained('t5-base')
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-summary_model = summary_model.to(device)
-
-
-def set_seed(seed: int):
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- torch.cuda.manual_seed_all(seed)
-
-def postprocesstext (content):
- final=""
- for sent in sent_tokenize(content):
- sent = sent.capitalize()
- final = final +" "+sent
- return final
-
-
-def summarizer(text,model,tokenizer):
- text = text.strip().replace("\n"," ")
- text = "summarize: "+text
- # print (text)
- max_len = 512
- encoding = tokenizer.encode_plus(text,max_length=max_len, pad_to_max_length=False,truncation=True, return_tensors="pt").to(device)
-
- input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
-
- outs = model.generate(input_ids=input_ids,
- attention_mask=attention_mask,
- early_stopping=True,
- num_beams=3,
- num_return_sequences=1,
- no_repeat_ngram_size=2,
- min_length = 75,
- max_length=300)
-
-
- dec = [tokenizer.decode(ids,skip_special_tokens=True) for ids in outs]
- summary = dec[0]
- summary = postprocesstext(summary)
- summary= summary.strip()
-
- return summary
-
-
-def get_nouns_multipartite(content):
- out=[]
- try:
- extractor = pke.unsupervised.MultipartiteRank()
-
- # not contain punctuation marks or stopwords as candidates.
- pos = {'PROPN','NOUN'}
- #pos = {'PROPN','NOUN'}
- stoplist = list(string.punctuation)
- stoplist += ['-lrb-', '-rrb-', '-lcb-', '-rcb-', '-lsb-', '-rsb-']
- stoplist += stopwords.words('english')
-
- extractor.load_document(input=content,language='en',
- stoplist=stoplist,
- normalization=None)
-
- extractor.candidate_selection(pos=pos)
- # 4. build the Multipartite graph and rank candidates using random walk,
- # alpha controls the weight adjustment mechanism, see TopicRank for
- # threshold/method parameters.
- extractor.candidate_weighting(alpha=1.1,
- threshold=0.75,
- method='average')
- keyphrases = extractor.get_n_best(n=15)
-
-
- for val in keyphrases:
- out.append(val[0])
- except:
- out = []
- traceback.print_exc()
-
- return out
-
-def get_keywords(originaltext,summarytext):
- keywords = get_nouns_multipartite(originaltext)
- print ("keywords unsummarized: ",keywords)
- keyword_processor = KeywordProcessor()
- for keyword in keywords:
- keyword_processor.add_keyword(keyword)
-
- keywords_found = keyword_processor.extract_keywords(summarytext)
- keywords_found = list(set(keywords_found))
- print ("keywords_found in summarized: ",keywords_found)
-
- important_keywords =[]
- for keyword in keywords:
- if keyword in keywords_found:
- important_keywords.append(keyword)
-
- return important_keywords[:4]
-
-question_model = T5ForConditionalGeneration.from_pretrained('ramsrigouthamg/t5_squad_v1')
-question_tokenizer = T5Tokenizer.from_pretrained('ramsrigouthamg/t5_squad_v1')
-question_model = question_model.to(device)
-
-def get_question(context,answer,model,tokenizer):
- text = "context: {} answer: {}".format(context,answer)
- encoding = tokenizer.encode_plus(text,max_length=384, pad_to_max_length=False,truncation=True, return_tensors="pt").to(device)
- input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
-
- outs = model.generate(input_ids=input_ids,
- attention_mask=attention_mask,
- early_stopping=True,
- num_beams=5,
- num_return_sequences=1,
- no_repeat_ngram_size=2,
- max_length=72)
-
-
- dec = [tokenizer.decode(ids,skip_special_tokens=True) for ids in outs]
-
-
- Question = dec[0].replace("question:","")
- Question= Question.strip()
- return Question
-def get_distractors_wordnet(word):
- distractors=[]
- try:
- syn = wn.synsets(word,'n')[0]
-
- word= word.lower()
- orig_word = word
- if len(word.split())>0:
- word = word.replace(" ","_")
- hypernym = syn.hypernyms()
- if len(hypernym) == 0:
- return distractors
- for item in hypernym[0].hyponyms():
- name = item.lemmas()[0].name()
- #print ("name ",name, " word",orig_word)
- if name == orig_word:
- continue
- name = name.replace("_"," ")
- name = " ".join(w.capitalize() for w in name.split())
- if name is not None and name not in distractors:
- distractors.append(name)
- except:
- print ("Wordnet distractors not found")
- return distractors
-
-glove_file = '/home/user/app/glove.6B.300d.txt'
-tmp_file = '/home/user/app/word2vec-glove.6B.300d.txt'
-
-glove2word2vec(glove_file, tmp_file)
-model = KeyedVectors.load_word2vec_format(tmp_file)
-def generate_distractors(answer, count):
- answer = str.lower(answer)
-
- ##Extracting closest words for the answer.
- try:
- closestWords = model.most_similar(positive=[answer], topn=count)
- except:
- #In case the word is not in the vocabulary, or other problem not loading embeddings
- return []
-
- #Return count many distractors
- distractors = list(map(lambda x: x[0], closestWords))[0:count]
-
- return distractors
-context1 = gr.inputs.Textbox(lines=10, placeholder="Enter link here...")
-output = gr.outputs.HTML( label="Question and Answers")
-radiobutton = gr.inputs.Radio(["Wordnet", "Gensim"])
-
-def generate_question(context1,radiobutton):
- # try:
-
-    # Open for reading: "w+" would truncate the transcript written by Process_audio
-    f = open("The_audio.txt", "r")
-    context = f.read()
- summary_text = summarizer(context,summary_model,summary_tokenizer)
- for wrp in wrap(summary_text, 150):
- print (wrp)
- # np = getnounphrases(summary_text,sentence_transformer_model,3)
- np = get_keywords(context,summary_text)
- print ("\n\nNoun phrases",np)
- output=""
- for answer in np:
- ques = get_question(summary_text,answer,question_model,question_tokenizer)
- if radiobutton=="Wordnet":
- distractors = get_distractors_wordnet(answer)
- else:
- distractors = generate_distractors(answer.capitalize(),3)
- print(distractors)
-
- # output= output + ques + "\n" + "Ans: "+answer.capitalize() + "\n\n"
- output ="\n"+ output + "" + ques + ""
- # output = output + " "
- output ="\n"+ output + "" + "Ans: " +answer.capitalize()+ ""
- if len(distractors)>0:
- for distractor in distractors[:4]:
- output = output + " " + distractor+ "\n"
- output = output + " "
-
- summary ="Summary: "+ summary_text
- for answer in np:
- summary = summary.replace(answer,""+answer+"")
- summary = summary.replace(answer.capitalize(),""+answer.capitalize()+"")
-    output = output + ""+summary+""
- return output
- # except:
- # return "Something Went Wrong...Please Check Link or try Again"
-
-
-
-iface = gr.Interface(
- fn=generate_question,
- inputs=[context1,radiobutton],
- title="VidQuest",
- examples=[["https://www.youtube.com/watch?v=WSbgixdC9g8","Gensim"]],
- description="Keep in mind that it might take some minutes. Correct answers appear in green, while incorrect choices appear in red. Use the Gensim tool to find the most appropriate distractions.",
- outputs=output)
-iface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/BridgeTower/bridgetower-video-search/README.md b/spaces/BridgeTower/bridgetower-video-search/README.md
deleted file mode 100644
index 546b6f4a2219d6fe0dd0e8e262ae84a880b12980..0000000000000000000000000000000000000000
--- a/spaces/BridgeTower/bridgetower-video-search/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Bridgetower Video Search
-emoji: 🏃
-colorFrom: green
-colorTo: pink
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_tagbased_polymorphic.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_tagbased_polymorphic.cpp
deleted file mode 100644
index dcc005126eed4ae13f69dedcb1fe04dce1a4c22f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_tagbased_polymorphic.cpp
+++ /dev/null
@@ -1,142 +0,0 @@
-/*
- tests/test_tagbased_polymorphic.cpp -- test of polymorphic_type_hook
-
- Copyright (c) 2018 Hudson River Trading LLC
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-#include <string>
-
-struct Animal
-{
- // Make this type also a "standard" polymorphic type, to confirm that
- // specializing polymorphic_type_hook using enable_if_t still works
- // (https://github.com/pybind/pybind11/pull/2016/).
- virtual ~Animal() = default;
-
- // Enum for tag-based polymorphism.
- enum class Kind {
- Unknown = 0,
- Dog = 100, Labrador, Chihuahua, LastDog = 199,
- Cat = 200, Panther, LastCat = 299
- };
- static const std::type_info* type_of_kind(Kind kind);
- static std::string name_of_kind(Kind kind);
-
- const Kind kind;
- const std::string name;
-
- protected:
- Animal(const std::string& _name, Kind _kind)
- : kind(_kind), name(_name)
- {}
-};
-
-struct Dog : Animal
-{
- Dog(const std::string& _name, Kind _kind = Kind::Dog) : Animal(_name, _kind) {}
- std::string bark() const { return name_of_kind(kind) + " " + name + " goes " + sound; }
- std::string sound = "WOOF!";
-};
-
-struct Labrador : Dog
-{
- Labrador(const std::string& _name, int _excitement = 9001)
- : Dog(_name, Kind::Labrador), excitement(_excitement) {}
- int excitement;
-};
-
-struct Chihuahua : Dog
-{
- Chihuahua(const std::string& _name) : Dog(_name, Kind::Chihuahua) { sound = "iyiyiyiyiyi"; }
- std::string bark() const { return Dog::bark() + " and runs in circles"; }
-};
-
-struct Cat : Animal
-{
- Cat(const std::string& _name, Kind _kind = Kind::Cat) : Animal(_name, _kind) {}
- std::string purr() const { return "mrowr"; }
-};
-
-struct Panther : Cat
-{
- Panther(const std::string& _name) : Cat(_name, Kind::Panther) {}
- std::string purr() const { return "mrrrRRRRRR"; }
-};
-
-std::vector<std::unique_ptr<Animal>> create_zoo()
-{
-    std::vector<std::unique_ptr<Animal>> ret;
- ret.emplace_back(new Labrador("Fido", 15000));
-
- // simulate some new type of Dog that the Python bindings
- // haven't been updated for; it should still be considered
- // a Dog, not just an Animal.
- ret.emplace_back(new Dog("Ginger", Dog::Kind(150)));
-
- ret.emplace_back(new Chihuahua("Hertzl"));
- ret.emplace_back(new Cat("Tiger", Cat::Kind::Cat));
- ret.emplace_back(new Panther("Leo"));
- return ret;
-}
-
-const std::type_info* Animal::type_of_kind(Kind kind)
-{
- switch (kind) {
- case Kind::Unknown: break;
-
- case Kind::Dog: break;
- case Kind::Labrador: return &typeid(Labrador);
- case Kind::Chihuahua: return &typeid(Chihuahua);
- case Kind::LastDog: break;
-
- case Kind::Cat: break;
- case Kind::Panther: return &typeid(Panther);
- case Kind::LastCat: break;
- }
-
- if (kind >= Kind::Dog && kind <= Kind::LastDog) return &typeid(Dog);
- if (kind >= Kind::Cat && kind <= Kind::LastCat) return &typeid(Cat);
- return nullptr;
-}
-
-std::string Animal::name_of_kind(Kind kind)
-{
- std::string raw_name = type_of_kind(kind)->name();
- py::detail::clean_type_id(raw_name);
- return raw_name;
-}
-
-namespace pybind11 {
-    template <typename itype>
-    struct polymorphic_type_hook<itype, std::enable_if_t<std::is_base_of<Animal, itype>::value>>
- {
- static const void *get(const itype *src, const std::type_info*& type)
- { type = src ? Animal::type_of_kind(src->kind) : nullptr; return src; }
- };
-}
-
-TEST_SUBMODULE(tagbased_polymorphic, m) {
-    py::class_<Animal>(m, "Animal")
-        .def_readonly("name", &Animal::name);
-    py::class_<Dog, Animal>(m, "Dog")
-        .def(py::init<std::string>())
-        .def_readwrite("sound", &Dog::sound)
-        .def("bark", &Dog::bark);
-    py::class_<Labrador, Dog>(m, "Labrador")
-        .def(py::init<std::string, int>(), "name"_a, "excitement"_a = 9001)
-        .def_readwrite("excitement", &Labrador::excitement);
-    py::class_<Chihuahua, Dog>(m, "Chihuahua")
-        .def(py::init<std::string>())
-        .def("bark", &Chihuahua::bark);
-    py::class_<Cat, Animal>(m, "Cat")
-        .def(py::init<std::string>())
-        .def("purr", &Cat::purr);
-    py::class_<Panther, Cat>(m, "Panther")
-        .def(py::init<std::string>())
-        .def("purr", &Panther::purr);
- m.def("create_zoo", &create_zoo);
-};
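
For reference, a short usage sketch of the bindings this test registers, assuming the compiled test module is importable as `pybind11_tests.tagbased_polymorphic` (the usual pybind11 test layout); the expected strings follow from `create_zoo()` and `Dog::bark()` above.

```python
# Usage sketch (assumes the compiled pybind11 test module is on the path).
from pybind11_tests import tagbased_polymorphic as m

zoo = m.create_zoo()
# polymorphic_type_hook inspects Animal::kind, so each element comes back as
# its most-derived bound type instead of the static return type Animal.
print([type(a).__name__ for a in zoo])  # e.g. ['Labrador', 'Dog', 'Chihuahua', 'Cat', 'Panther']
print(zoo[0].bark())                    # e.g. "Labrador Fido goes WOOF!"
print(zoo[3].purr())                    # e.g. "mrowr"
```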
diff --git a/spaces/CVPR/lama-example/bin/paper_runfiles/update_test_data_stats.sh b/spaces/CVPR/lama-example/bin/paper_runfiles/update_test_data_stats.sh
deleted file mode 100644
index ff77d586f308202fbd019d8cc4be641f0d6aa1a5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/paper_runfiles/update_test_data_stats.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml7
-
-source "$(dirname $0)/env.sh"
-
-#INDIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k"
-#
-#for dataset in random_medium_256 random_medium_512 random_thick_256 random_thick_512 random_thin_256 random_thin_512
-#do
-# "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-#done
-#
-#"$BINDIR/calc_dataset_stats.py" "/data/inpainting/evalset2" "/data/inpainting/evalset2_stats2"
-
-
-INDIR="/data/inpainting/paper_data/CelebA-HQ_val_test/test"
-
-for dataset in random_medium_256 random_thick_256 random_thin_256
-do
- "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-done
-
-
-INDIR="/data/inpainting/paper_data/Paris_StreetView_Dataset_val_256/paris_eval_gt"
-
-for dataset in random_medium_256 random_thick_256 random_thin_256
-do
- "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-done
\ No newline at end of file
diff --git a/spaces/Christyyu/textgenerator/app.py b/spaces/Christyyu/textgenerator/app.py
deleted file mode 100644
index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000
--- a/spaces/Christyyu/textgenerator/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/gpt2").launch()
\ No newline at end of file
diff --git a/spaces/CikeyQI/meme-api/meme_generator/config.py b/spaces/CikeyQI/meme-api/meme_generator/config.py
deleted file mode 100644
index cc78bb2fa342e0cebb2a60db5388f51b791ab241..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/config.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import json
-from pathlib import Path
-from typing import List, Optional, Union
-
-import toml
-from pydantic import BaseModel, Extra
-
-from .dirs import get_config_file
-
-config_file_path = get_config_file("config.toml")
-
-
-class MemeConfig(BaseModel):
- load_builtin_memes: bool = True
- meme_dirs: List[Path] = []
- meme_disabled_list: List[str] = []
-
-
-class ResourceConfig(BaseModel):
- resource_url: Optional[str] = None
- resource_urls: List[str] = [
- "https://raw.githubusercontent.com/MeetWq/meme-generator/",
- "https://ghproxy.com/https://raw.githubusercontent.com/MeetWq/meme-generator/",
- "https://fastly.jsdelivr.net/gh/MeetWq/meme-generator@",
- "https://raw.fastgit.org/MeetWq/meme-generator/",
- "https://raw.fgit.ml/MeetWq/meme-generator/",
- "https://raw.gitmirror.com/MeetWq/meme-generator/",
- "https://raw.kgithub.com/MeetWq/meme-generator/",
- ]
-
-
-class GifConfig(BaseModel):
- gif_max_size: float = 10
- gif_max_frames: int = 100
-
-
-class TranslatorConfig(BaseModel):
- baidu_trans_appid: str = ""
- baidu_trans_apikey: str = ""
-
-
-class ServerConfig(BaseModel):
- host: str = "127.0.0.1"
- port: int = 7860
-
-
-class LogConfig(BaseModel):
- log_level: Union[int, str] = "INFO"
-
-
-class Config(BaseModel, extra=Extra.ignore):
- meme: MemeConfig = MemeConfig()
- resource: ResourceConfig = ResourceConfig()
- gif: GifConfig = GifConfig()
- translate: TranslatorConfig = TranslatorConfig()
- server: ServerConfig = ServerConfig()
- log: LogConfig = LogConfig()
-
- @classmethod
- def load(cls) -> "Config":
- return cls.parse_obj(toml.load(config_file_path))
-
- def dump(self):
- with open(config_file_path, "w", encoding="utf8") as f:
- toml.dump(json.loads(self.json()), f)
-
-
-if not config_file_path.exists():
- meme_config = Config()
- config_file_path.write_text("", encoding="utf8")
-else:
- meme_config = Config.load()
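
Since `Config.load()`/`Config.dump()` round-trip the settings through toml, a minimal standalone sketch of the same pattern may help; the model, file name, and values below are illustrative and not part of the module.

```python
# Standalone sketch of the Config.load()/Config.dump() round-trip, using a
# reduced model and a temporary file instead of get_config_file("config.toml").
import json
import tempfile
from pathlib import Path

import toml
from pydantic import BaseModel  # pydantic v1 API, as in the module above


class ServerConfig(BaseModel):
    host: str = "127.0.0.1"
    port: int = 7860


class DemoConfig(BaseModel):
    server: ServerConfig = ServerConfig()

    @classmethod
    def load(cls, path: Path) -> "DemoConfig":
        return cls.parse_obj(toml.load(path))

    def dump(self, path: Path) -> None:
        # The json round-trip mirrors Config.dump(): it coerces values into
        # plain types that toml can serialize.
        with open(path, "w", encoding="utf8") as f:
            toml.dump(json.loads(self.json()), f)


with tempfile.TemporaryDirectory() as d:
    cfg_path = Path(d) / "config.toml"
    DemoConfig(server=ServerConfig(port=8080)).dump(cfg_path)
    print(DemoConfig.load(cfg_path).server.port)  # 8080
```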
diff --git a/spaces/Clara998/DisneyPixarMovie/app.py b/spaces/Clara998/DisneyPixarMovie/app.py
deleted file mode 100644
index 04dba38f9df26066ee7df8556831f59c74f0e740..0000000000000000000000000000000000000000
--- a/spaces/Clara998/DisneyPixarMovie/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import gradio as gr
-import requests
-import io
-from PIL import Image
-import os
-
-
-API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0"
-
-def query(payload):
- auth_hf_api_token = os.environ.get("AUTH_HF_API_TOKEN")
- authorization = "Bearer " + auth_hf_api_token
- headers = {"Authorization": authorization}
- response = requests.post(API_URL, headers=headers, json=payload)
- return response.content
-
-def genImage(character_name, description_of_the_character):
- input = "Create a movie poster for " + character_name + "," + description_of_the_character + ",Disney Pixar movie style"
- image_bytes = query({
- "inputs": input,
- })
- image = Image.open(io.BytesIO(image_bytes))
- return image
-
-demo = gr.Interface(genImage, inputs=["text", "text"], outputs=["image"])
-demo.launch()
\ No newline at end of file
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.h b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.h
deleted file mode 100644
index bad5c52879562743cf6fc26d8754f0e11fda97ab..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.h
+++ /dev/null
@@ -1,99 +0,0 @@
-
-/*!
- ******************* BEGIN Caffe Copyright Notice and Disclaimer ****************
- *
- * COPYRIGHT
- *
- * All contributions by the University of California:
- * Copyright (c) 2014-2017 The Regents of the University of California (Regents)
- * All rights reserved.
- *
- * All other contributions:
- * Copyright (c) 2014-2017, the respective contributors
- * All rights reserved.
- *
- * Caffe uses a shared copyright model: each contributor holds copyright over
- * their contributions to Caffe. The project versioning records all such
- * contribution and copyright details. If a contributor wants to further mark
- * their specific copyright on a particular contribution, they should indicate
- * their copyright solely in the commit message of the change when it is
- * committed.
- *
- * LICENSE
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice, this
- * list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright notice,
- * this list of conditions and the following disclaimer in the documentation
- * and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
- * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * CONTRIBUTION AGREEMENT
- *
- * By contributing to the BVLC/caffe repository through pull-request, comment,
- * or otherwise, the contributor releases their content to the
- * license and copyright terms herein.
- *
- ***************** END Caffe Copyright Notice and Disclaimer ********************
- *
- * Copyright (c) 2018 Microsoft
- * Licensed under The MIT License [see LICENSE for details]
- * \file modulated_deformable_im2col.h
- * \brief Function definitions of converting an image to
- * column matrix based on kernel, padding, dilation, and offset.
- * These functions are mainly used in deformable convolution operators.
- * \ref: https://arxiv.org/abs/1811.11168
- * \author Yuwen Xiong, Haozhi Qi, Jifeng Dai, Xizhou Zhu, Han Hu
- */
-
-/***************** Adapted by Charles Shang *********************/
-// modified from the CUDA version for CPU use by Daniel K. Suhendro
-
-#ifndef DCN_V2_IM2COL_CPU
-#define DCN_V2_IM2COL_CPU
-
-#ifdef __cplusplus
-extern "C"
-{
-#endif
-
- void modulated_deformable_im2col_cpu(const float *data_im, const float *data_offset, const float *data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kenerl_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group, float *data_col);
-
- void modulated_deformable_col2im_cpu(const float *data_col, const float *data_offset, const float *data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kenerl_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group, float *grad_im);
-
- void modulated_deformable_col2im_coord_cpu(const float *data_col, const float *data_im, const float *data_offset, const float *data_mask,
- const int batch_size, const int channels, const int height_im, const int width_im,
- const int height_col, const int width_col, const int kernel_h, const int kenerl_w,
- const int pad_h, const int pad_w, const int stride_h, const int stride_w,
- const int dilation_h, const int dilation_w,
- const int deformable_group,
- float *grad_offset, float *grad_mask);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
\ No newline at end of file
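
The declarations above cover the modulated deformable variants; as a point of reference, here is a minimal NumPy sketch of plain (non-deformable) im2col showing the column-matrix layout the header comments describe. The deformable kernels additionally shift each sampling point by a learned offset and scale it by a mask.

```python
# Minimal NumPy sketch of plain im2col for a single-channel image: each output
# column holds the kernel_h x kernel_w patch seen by one output position.
# The deformable version declared above additionally shifts each sample point
# by a learned offset and scales it by a mask before filling the column.
import numpy as np

def im2col(im, kernel_h, kernel_w, pad=0, stride=1, dilation=1):
    im = np.pad(im, pad, mode="constant")
    H, W = im.shape
    out_h = (H - (dilation * (kernel_h - 1) + 1)) // stride + 1
    out_w = (W - (dilation * (kernel_w - 1) + 1)) // stride + 1
    cols = np.zeros((kernel_h * kernel_w, out_h * out_w), dtype=im.dtype)
    for i in range(kernel_h):
        for j in range(kernel_w):
            patch = im[i * dilation : i * dilation + stride * out_h : stride,
                       j * dilation : j * dilation + stride * out_w : stride]
            cols[i * kernel_w + j] = patch.reshape(-1)
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)
print(im2col(x, 3, 3).shape)  # (9, 4): nine kernel taps, four output positions
```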
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/voltLib/error.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/voltLib/error.py
deleted file mode 100644
index c51d3b8fdc45afdb7bafbeb13a951264e0228985..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/voltLib/error.py
+++ /dev/null
@@ -1,12 +0,0 @@
-class VoltLibError(Exception):
- def __init__(self, message, location):
- Exception.__init__(self, message)
- self.location = location
-
- def __str__(self):
- message = Exception.__str__(self)
- if self.location:
- path, line, column = self.location
- return "%s:%d:%d: %s" % (path, line, column, message)
- else:
- return message
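
A small sketch of how the error renders when a location tuple is attached (the path and coordinates are placeholders):

```python
from fontTools.voltLib.error import VoltLibError

# With a (path, line, column) location, the message is prefixed accordingly.
print(VoltLibError("unexpected token", ("font.vtp", 12, 8)))  # font.vtp:12:8: unexpected token
print(VoltLibError("unexpected end of file", None))           # unexpected end of file
```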
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/external.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/external.py
deleted file mode 100644
index 29ad28384cbe1b0f36a31b4c73efe7866dbcedae..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/external.py
+++ /dev/null
@@ -1,540 +0,0 @@
-"""This module should not be used directly as its API is subject to change. Instead,
-use the `gr.Blocks.load()` or `gr.load()` functions."""
-
-from __future__ import annotations
-
-import json
-import re
-import warnings
-from typing import TYPE_CHECKING, Callable
-
-import requests
-from gradio_client import Client
-from gradio_client.documentation import document, set_documentation_group
-
-import gradio
-from gradio import components, utils
-from gradio.context import Context
-from gradio.deprecation import warn_deprecation
-from gradio.exceptions import Error, TooManyRequestsError
-from gradio.external_utils import (
- cols_to_rows,
- encode_to_base64,
- get_tabular_examples,
- postprocess_label,
- rows_to_cols,
- streamline_spaces_interface,
-)
-from gradio.processing_utils import extract_base64_data, to_binary
-
-if TYPE_CHECKING:
- from gradio.blocks import Blocks
- from gradio.interface import Interface
-
-
-set_documentation_group("helpers")
-
-
-@document()
-def load(
- name: str,
- src: str | None = None,
- api_key: str | None = None,
- hf_token: str | None = None,
- alias: str | None = None,
- **kwargs,
-) -> Blocks:
- """
- Method that constructs a Blocks from a Hugging Face repo. Can accept
- model repos (if src is "models") or Space repos (if src is "spaces"). The input
- and output components are automatically loaded from the repo.
- Parameters:
- name: the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base")
- src: the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`)
- api_key: Deprecated. Please use the `hf_token` parameter instead.
- hf_token: optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens. Warning: only provide this if you are loading a trusted private Space as it can be read by the Space you are loading.
- alias: optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x)
- Returns:
- a Gradio Blocks object for the given model
- Example:
- import gradio as gr
- demo = gr.load("gradio/question-answering", src="spaces")
- demo.launch()
- """
- if hf_token is None and api_key:
- warn_deprecation(
- "The `api_key` parameter will be deprecated. "
- "Please use the `hf_token` parameter going forward."
- )
- hf_token = api_key
- return load_blocks_from_repo(
- name=name, src=src, hf_token=hf_token, alias=alias, **kwargs
- )
-
-
-def load_blocks_from_repo(
- name: str,
- src: str | None = None,
- hf_token: str | None = None,
- alias: str | None = None,
- **kwargs,
-) -> Blocks:
- """Creates and returns a Blocks instance from a Hugging Face model or Space repo."""
- if src is None:
- # Separate the repo type (e.g. "model") from repo name (e.g. "google/vit-base-patch16-224")
- tokens = name.split("/")
- assert (
- len(tokens) > 1
- ), "Either `src` parameter must be provided, or `name` must be formatted as {src}/{repo name}"
- src = tokens[0]
- name = "/".join(tokens[1:])
-
- factory_methods: dict[str, Callable] = {
- # for each repo type, we have a method that returns the Interface given the model name & optionally an api_key
- "huggingface": from_model,
- "models": from_model,
- "spaces": from_spaces,
- }
- assert (
- src.lower() in factory_methods
- ), f"parameter: src must be one of {factory_methods.keys()}"
-
- if hf_token is not None:
- if Context.hf_token is not None and Context.hf_token != hf_token:
- warnings.warn(
- """You are loading a model/Space with a different access token than the one you used to load a previous model/Space. This is not recommended, as it may cause unexpected behavior."""
- )
- Context.hf_token = hf_token
-
- blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
- return blocks
-
-
-def chatbot_preprocess(text, state):
- payload = {
- "inputs": {"generated_responses": None, "past_user_inputs": None, "text": text}
- }
- if state is not None:
- payload["inputs"]["generated_responses"] = state["conversation"][
- "generated_responses"
- ]
- payload["inputs"]["past_user_inputs"] = state["conversation"][
- "past_user_inputs"
- ]
-
- return payload
-
-
-def chatbot_postprocess(response):
- response_json = response.json()
- chatbot_value = list(
- zip(
- response_json["conversation"]["past_user_inputs"],
- response_json["conversation"]["generated_responses"],
- )
- )
- return chatbot_value, response_json
-
-
-def from_model(model_name: str, hf_token: str | None, alias: str | None, **kwargs):
- model_url = f"https://huggingface.co/{model_name}"
- api_url = f"https://api-inference.huggingface.co/models/{model_name}"
- print(f"Fetching model from: {model_url}")
-
- headers = {"Authorization": f"Bearer {hf_token}"} if hf_token is not None else {}
-
- # Checking if model exists, and if so, it gets the pipeline
- response = requests.request("GET", api_url, headers=headers)
- assert (
- response.status_code == 200
- ), f"Could not find model: {model_name}. If it is a private or gated model, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter."
- p = response.json().get("pipeline_tag")
- pipelines = {
- "audio-classification": {
- # example model: ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Label(label="Class"),
- "preprocess": lambda i: to_binary,
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()}
- ),
- },
- "audio-to-audio": {
- # example model: facebook/xm_transformer_sm_all-en
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Audio(label="Output"),
- "preprocess": to_binary,
- "postprocess": encode_to_base64,
- },
- "automatic-speech-recognition": {
- # example model: facebook/wav2vec2-base-960h
- "inputs": components.Audio(source="upload", type="filepath", label="Input"),
- "outputs": components.Textbox(label="Output"),
- "preprocess": to_binary,
- "postprocess": lambda r: r.json()["text"],
- },
- "conversational": {
- "inputs": [components.Textbox(), components.State()], # type: ignore
- "outputs": [components.Chatbot(), components.State()], # type: ignore
- "preprocess": chatbot_preprocess,
- "postprocess": chatbot_postprocess,
- },
- "feature-extraction": {
- # example model: julien-c/distilbert-feature-extraction
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Dataframe(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0],
- },
- "fill-mask": {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: postprocess_label(
- {i["token_str"]: i["score"] for i in r.json()}
- ),
- },
- "image-classification": {
- # Example: google/vit-base-patch16-224
- "inputs": components.Image(type="filepath", label="Input Image"),
- "outputs": components.Label(label="Classification"),
- "preprocess": to_binary,
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()}
- ),
- },
- "question-answering": {
- # Example: deepset/xlm-roberta-base-squad2
- "inputs": [
- components.Textbox(lines=7, label="Context"),
- components.Textbox(label="Question"),
- ],
- "outputs": [
- components.Textbox(label="Answer"),
- components.Label(label="Score"),
- ],
- "preprocess": lambda c, q: {"inputs": {"context": c, "question": q}},
- "postprocess": lambda r: (r.json()["answer"], {"label": r.json()["score"]}),
- },
- "summarization": {
- # Example: facebook/bart-large-cnn
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Summary"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["summary_text"],
- },
- "text-classification": {
- # Example: distilbert-base-uncased-finetuned-sst-2-english
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: postprocess_label(
- {i["label"].split(", ")[0]: i["score"] for i in r.json()[0]}
- ),
- },
- "text-generation": {
- # Example: gpt2
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["generated_text"],
- },
- "text2text-generation": {
- # Example: valhalla/t5-small-qa-qg-hl
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Generated Text"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["generated_text"],
- },
- "translation": {
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Textbox(label="Translation"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r.json()[0]["translation_text"],
- },
- "zero-shot-classification": {
- # Example: facebook/bart-large-mnli
- "inputs": [
- components.Textbox(label="Input"),
- components.Textbox(label="Possible class names (" "comma-separated)"),
- components.Checkbox(label="Allow multiple true classes"),
- ],
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda i, c, m: {
- "inputs": i,
- "parameters": {"candidate_labels": c, "multi_class": m},
- },
- "postprocess": lambda r: postprocess_label(
- {
- r.json()["labels"][i]: r.json()["scores"][i]
- for i in range(len(r.json()["labels"]))
- }
- ),
- },
- "sentence-similarity": {
- # Example: sentence-transformers/distilbert-base-nli-stsb-mean-tokens
- "inputs": [
- components.Textbox(
- value="That is a happy person", label="Source Sentence"
- ),
- components.Textbox(
- lines=7,
- placeholder="Separate each sentence by a newline",
- label="Sentences to compare to",
- ),
- ],
- "outputs": components.Label(label="Classification"),
- "preprocess": lambda src, sentences: {
- "inputs": {
- "source_sentence": src,
- "sentences": [s for s in sentences.splitlines() if s != ""],
- }
- },
- "postprocess": lambda r: postprocess_label(
- {f"sentence {i}": v for i, v in enumerate(r.json())}
- ),
- },
- "text-to-speech": {
- # Example: julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Audio(label="Audio"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": encode_to_base64,
- },
- "text-to-image": {
- # example model: osanseviero/BigGAN-deep-128
- "inputs": components.Textbox(label="Input"),
- "outputs": components.Image(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": encode_to_base64,
- },
- "token-classification": {
- # example model: huggingface-course/bert-finetuned-ner
- "inputs": components.Textbox(label="Input"),
- "outputs": components.HighlightedText(label="Output"),
- "preprocess": lambda x: {"inputs": x},
- "postprocess": lambda r: r, # Handled as a special case in query_huggingface_api()
- },
- "document-question-answering": {
- # example model: impira/layoutlm-document-qa
- "inputs": [
- components.Image(type="filepath", label="Input Document"),
- components.Textbox(label="Question"),
- ],
- "outputs": components.Label(label="Label"),
- "preprocess": lambda img, q: {
- "inputs": {
- "image": extract_base64_data(img), # Extract base64 data
- "question": q,
- }
- },
- "postprocess": lambda r: postprocess_label(
- {i["answer"]: i["score"] for i in r.json()}
- ),
- },
- "visual-question-answering": {
- # example model: dandelin/vilt-b32-finetuned-vqa
- "inputs": [
- components.Image(type="filepath", label="Input Image"),
- components.Textbox(label="Question"),
- ],
- "outputs": components.Label(label="Label"),
- "preprocess": lambda img, q: {
- "inputs": {
- "image": extract_base64_data(img),
- "question": q,
- }
- },
- "postprocess": lambda r: postprocess_label(
- {i["answer"]: i["score"] for i in r.json()}
- ),
- },
- "image-to-text": {
- # example model: Salesforce/blip-image-captioning-base
- "inputs": components.Image(type="filepath", label="Input Image"),
- "outputs": components.Textbox(label="Generated Text"),
- "preprocess": to_binary,
- "postprocess": lambda r: r.json()[0]["generated_text"],
- },
- }
-
- if p in ["tabular-classification", "tabular-regression"]:
- example_data = get_tabular_examples(model_name)
- col_names, example_data = cols_to_rows(example_data)
- example_data = [[example_data]] if example_data else None
-
- pipelines[p] = {
- "inputs": components.Dataframe(
- label="Input Rows",
- type="pandas",
- headers=col_names,
- col_count=(len(col_names), "fixed"),
- ),
- "outputs": components.Dataframe(
- label="Predictions", type="array", headers=["prediction"]
- ),
- "preprocess": rows_to_cols,
- "postprocess": lambda r: {
- "headers": ["prediction"],
- "data": [[pred] for pred in json.loads(r.text)],
- },
- "examples": example_data,
- }
-
- if p is None or p not in pipelines:
- raise ValueError(f"Unsupported pipeline type: {p}")
-
- pipeline = pipelines[p]
-
- def query_huggingface_api(*params):
- # Convert to a list of input components
- data = pipeline["preprocess"](*params)
- if isinstance(
- data, dict
- ): # HF doesn't allow additional parameters for binary files (e.g. images or audio files)
- data.update({"options": {"wait_for_model": True}})
- data = json.dumps(data)
- response = requests.request("POST", api_url, headers=headers, data=data)
- if response.status_code != 200:
- errors_json = response.json()
- errors, warns = "", ""
- if errors_json.get("error"):
- errors = f", Error: {errors_json.get('error')}"
- if errors_json.get("warnings"):
- warns = f", Warnings: {errors_json.get('warnings')}"
- raise Error(
- f"Could not complete request to HuggingFace API, Status Code: {response.status_code}"
- + errors
- + warns
- )
- if (
- p == "token-classification"
- ): # Handle as a special case since HF API only returns the named entities and we need the input as well
- ner_groups = response.json()
- input_string = params[0]
- response = utils.format_ner_list(input_string, ner_groups)
- output = pipeline["postprocess"](response)
- return output
-
- if alias is None:
- query_huggingface_api.__name__ = model_name
- else:
- query_huggingface_api.__name__ = alias
-
- interface_info = {
- "fn": query_huggingface_api,
- "inputs": pipeline["inputs"],
- "outputs": pipeline["outputs"],
- "title": model_name,
- "examples": pipeline.get("examples"),
- }
-
- kwargs = dict(interface_info, **kwargs)
-
- # So interface doesn't run pre/postprocess
- # except for conversational interfaces which
- # are stateful
- kwargs["_api_mode"] = p != "conversational"
-
- interface = gradio.Interface(**kwargs)
- return interface
-
-
-def from_spaces(
- space_name: str, hf_token: str | None, alias: str | None, **kwargs
-) -> Blocks:
- space_url = f"https://huggingface.co/spaces/{space_name}"
-
- print(f"Fetching Space from: {space_url}")
-
- headers = {}
- if hf_token is not None:
- headers["Authorization"] = f"Bearer {hf_token}"
-
- iframe_url = (
- requests.get(
- f"https://huggingface.co/api/spaces/{space_name}/host", headers=headers
- )
- .json()
- .get("host")
- )
-
- if iframe_url is None:
- raise ValueError(
- f"Could not find Space: {space_name}. If it is a private or gated Space, please provide your Hugging Face access token (https://huggingface.co/settings/tokens) as the argument for the `api_key` parameter."
- )
-
- r = requests.get(iframe_url, headers=headers)
-
- result = re.search(
- r"window.gradio_config = (.*?);[\s]*</script>", r.text
- ) # some basic regex to extract the config
- try:
- config = json.loads(result.group(1)) # type: ignore
- except AttributeError as ae:
- raise ValueError(f"Could not load the Space: {space_name}") from ae
- if "allow_flagging" in config: # Create an Interface for Gradio 2.x Spaces
- return from_spaces_interface(
- space_name, config, alias, hf_token, iframe_url, **kwargs
- )
- else: # Create a Blocks for Gradio 3.x Spaces
- if kwargs:
- warnings.warn(
- "You cannot override parameters for this Space by passing in kwargs. "
- "Instead, please load the Space as a function and use it to create a "
- "Blocks or Interface locally. You may find this Guide helpful: "
- "https://gradio.app/using_blocks_like_functions/"
- )
- return from_spaces_blocks(space=space_name, hf_token=hf_token)
-
-
-def from_spaces_blocks(space: str, hf_token: str | None) -> Blocks:
- client = Client(space, hf_token=hf_token)
- predict_fns = [endpoint._predict_resolve for endpoint in client.endpoints]
- return gradio.Blocks.from_config(client.config, predict_fns, client.src)
-
-
-def from_spaces_interface(
- model_name: str,
- config: dict,
- alias: str | None,
- hf_token: str | None,
- iframe_url: str,
- **kwargs,
-) -> Interface:
- config = streamline_spaces_interface(config)
- api_url = f"{iframe_url}/api/predict/"
- headers = {"Content-Type": "application/json"}
- if hf_token is not None:
- headers["Authorization"] = f"Bearer {hf_token}"
-
- # The function should call the API with preprocessed data
- def fn(*data):
- data = json.dumps({"data": data})
- response = requests.post(api_url, headers=headers, data=data)
- result = json.loads(response.content.decode("utf-8"))
- if "error" in result and "429" in result["error"]:
- raise TooManyRequestsError("Too many requests to the Hugging Face API")
- try:
- output = result["data"]
- except KeyError as ke:
- raise KeyError(
- f"Could not find 'data' key in response from external Space. Response received: {result}"
- ) from ke
- if (
- len(config["outputs"]) == 1
- ): # if the fn is supposed to return a single value, pop it
- output = output[0]
- if len(config["outputs"]) == 1 and isinstance(
- output, list
- ): # Needed to support Output.Image() returning bounding boxes as well (TODO: handle different versions of gradio since they have slightly different APIs)
- output = output[0]
- return output
-
- fn.__name__ = alias if (alias is not None) else model_name
- config["fn"] = fn
-
- kwargs = dict(config, **kwargs)
- kwargs["_api_mode"] = True
- interface = gradio.Interface(**kwargs)
- return interface
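
For the `models` branch, `query_huggingface_api()` boils down to a JSON POST against the Inference API followed by the pipeline's postprocess lambda; below is a standalone sketch for a text-classification model (the example model named in the pipeline table above), with a placeholder token.

```python
# Standalone sketch of what query_huggingface_api() does for a
# text-classification model (model name and token are placeholders).
import json
import requests

api_url = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {}  # {"Authorization": f"Bearer {hf_token}"} for private or gated models

data = json.dumps({"inputs": "I love this!", "options": {"wait_for_model": True}})
response = requests.post(api_url, headers=headers, data=data)
scores = {i["label"]: i["score"] for i in response.json()[0]}
print(scores)  # e.g. {"POSITIVE": 0.99, "NEGATIVE": 0.01}
```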
diff --git a/spaces/Dao3/Top-20-Models/README.md b/spaces/Dao3/Top-20-Models/README.md
deleted file mode 100644
index d75f682a49031318e4f1b0e784b697c431f2c523..0000000000000000000000000000000000000000
--- a/spaces/Dao3/Top-20-Models/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Top 20 Diffusion
-emoji: 👑
-colorFrom: blue
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: true
-duplicated_from: Omnibus/Top-20-Diffusion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/zoom/index.tsx b/spaces/Detomo/ai-comic-generation/src/app/interface/zoom/index.tsx
deleted file mode 100644
index 5c8d31a3af1c80f8a9ef15330bb84c0d2c3069de..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/app/interface/zoom/index.tsx
+++ /dev/null
@@ -1,35 +0,0 @@
-import { useStore } from "@/app/store"
-import { VerticalSlider } from "@/components/ui/vertical-slider"
-import { cn } from "@/lib/utils"
-
-export function Zoom() {
- const zoomLevel = useStore((state) => state.zoomLevel)
- const setZoomLevel = useStore((state) => state.setZoomLevel)
- const isGeneratingStory = useStore((state) => state.isGeneratingStory)
-
- return (
-
The input audio should be clean and pure voice without background music.\n"
- "\n\n"
- "[](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n"
- "[](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
- "[](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
- "[](https://ko-fi.com/R6R7AH1FA)\n\n"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- '
'
- f'
{title}
\n'+
- (f'
Model author: {author}
' if author else "")+
- (f'' if cover else "")+
- '
You can skip the queue and load custom models in the colab:
- Running on {device}{(" in a Google Colab." if is_colab else "")}
-
-
You can also duplicate this space and upgrade to gpu by going to settings:
-
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Box(visible=False) as custom_model_group:
- custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True)
- gr.HTML("
Custom models have to be downloaded first, so give it some time.
- """)
-
- demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False)
-
-print(f"Space built in {time.time() - start_time:.2f} seconds")
-
-# if not is_colab:
-demo.queue(concurrency_count=1)
-demo.launch(debug=is_colab, share=is_colab)
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/tts_infer/transliterate.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/tts_infer/transliterate.py
deleted file mode 100644
index ab30b89ab554b4ad42bea53834d99707bdf09d9b..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/tts_infer/transliterate.py
+++ /dev/null
@@ -1,919 +0,0 @@
-import torch
-import torch.nn as nn
-import numpy as np
-import pandas as pd
-import random
-import sys
-import os
-import json
-import enum
-import traceback
-import re
-
-F_DIR = os.path.dirname(os.path.realpath(__file__))
-
-
-class XlitError(enum.Enum):
- lang_err = "Unsupported language ID requested ;( Please check available languages."
- string_err = "String passed is incompatible ;("
- internal_err = "Internal crash ;("
- unknown_err = "Unknown Failure"
- loading_err = "Loading failed ;( Check if metadata/paths are correctly configured."
-
-
-##=================== Network ==================================================
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- input_dim,
- embed_dim,
- hidden_dim,
- rnn_type="gru",
- layers=1,
- bidirectional=False,
- dropout=0,
- device="cpu",
- ):
- super(Encoder, self).__init__()
-
- self.input_dim = input_dim # src_vocab_sz
- self.enc_embed_dim = embed_dim
- self.enc_hidden_dim = hidden_dim
- self.enc_rnn_type = rnn_type
- self.enc_layers = layers
- self.enc_directions = 2 if bidirectional else 1
- self.device = device
-
- self.embedding = nn.Embedding(self.input_dim, self.enc_embed_dim)
-
- if self.enc_rnn_type == "gru":
- self.enc_rnn = nn.GRU(
- input_size=self.enc_embed_dim,
- hidden_size=self.enc_hidden_dim,
- num_layers=self.enc_layers,
- bidirectional=bidirectional,
- )
- elif self.enc_rnn_type == "lstm":
- self.enc_rnn = nn.LSTM(
- input_size=self.enc_embed_dim,
- hidden_size=self.enc_hidden_dim,
- num_layers=self.enc_layers,
- bidirectional=bidirectional,
- )
- else:
- raise Exception("XlitError: unknown RNN type mentioned")
-
- def forward(self, x, x_sz, hidden=None):
- """
- x_sz: (batch_size, 1) - Unpadded sequence lengths used for pack_pad
- """
- batch_sz = x.shape[0]
- # x: batch_size, max_length, enc_embed_dim
- x = self.embedding(x)
-
- ## pack the padded data
- # x: max_length, batch_size, enc_embed_dim -> for pack_pad
- x = x.permute(1, 0, 2)
- x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad
-
- # output: packed_size, batch_size, enc_embed_dim
- # hidden: n_layer**num_directions, batch_size, hidden_dim | if LSTM (h_n, c_n)
- output, hidden = self.enc_rnn(
- x
- ) # gru returns hidden state of all timesteps as well as hidden state at last timestep
-
- ## pad the sequence to the max length in the batch
- # output: max_length, batch_size, enc_emb_dim*directions)
- output, _ = nn.utils.rnn.pad_packed_sequence(output)
-
- # output: batch_size, max_length, hidden_dim
- output = output.permute(1, 0, 2)
-
- return output, hidden
-
- def get_word_embedding(self, x):
- """ """
- x_sz = torch.tensor([len(x)])
- x_ = torch.tensor(x).unsqueeze(0).to(dtype=torch.long)
- # x: 1, max_length, enc_embed_dim
- x = self.embedding(x_)
-
- ## pack the padded data
- # x: max_length, 1, enc_embed_dim -> for pack_pad
- x = x.permute(1, 0, 2)
- x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad
-
- # output: packed_size, 1, enc_embed_dim
- # hidden: n_layer**num_directions, 1, hidden_dim | if LSTM (h_n, c_n)
- output, hidden = self.enc_rnn(
- x
- ) # gru returns hidden state of all timesteps as well as hidden state at last timestep
-
- out_embed = hidden[0].squeeze()
-
- return out_embed
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- output_dim,
- embed_dim,
- hidden_dim,
- rnn_type="gru",
- layers=1,
- use_attention=True,
- enc_outstate_dim=None, # enc_directions * enc_hidden_dim
- dropout=0,
- device="cpu",
- ):
- super(Decoder, self).__init__()
-
- self.output_dim = output_dim # tgt_vocab_sz
- self.dec_hidden_dim = hidden_dim
- self.dec_embed_dim = embed_dim
- self.dec_rnn_type = rnn_type
- self.dec_layers = layers
- self.use_attention = use_attention
- self.device = device
- if self.use_attention:
- self.enc_outstate_dim = enc_outstate_dim if enc_outstate_dim else hidden_dim
- else:
- self.enc_outstate_dim = 0
-
- self.embedding = nn.Embedding(self.output_dim, self.dec_embed_dim)
-
- if self.dec_rnn_type == "gru":
- self.dec_rnn = nn.GRU(
- input_size=self.dec_embed_dim
- + self.enc_outstate_dim, # to concat attention_output
- hidden_size=self.dec_hidden_dim, # previous Hidden
- num_layers=self.dec_layers,
- batch_first=True,
- )
- elif self.dec_rnn_type == "lstm":
- self.dec_rnn = nn.LSTM(
- input_size=self.dec_embed_dim
- + self.enc_outstate_dim, # to concat attention_output
- hidden_size=self.dec_hidden_dim, # previous Hidden
- num_layers=self.dec_layers,
- batch_first=True,
- )
- else:
- raise Exception("XlitError: unknown RNN type mentioned")
-
- self.fc = nn.Sequential(
- nn.Linear(self.dec_hidden_dim, self.dec_embed_dim),
- nn.LeakyReLU(),
- # nn.Linear(self.dec_embed_dim, self.dec_embed_dim), nn.LeakyReLU(), # removing to reduce size
- nn.Linear(self.dec_embed_dim, self.output_dim),
- )
-
- ##----- Attention ----------
- if self.use_attention:
- self.W1 = nn.Linear(self.enc_outstate_dim, self.dec_hidden_dim)
- self.W2 = nn.Linear(self.dec_hidden_dim, self.dec_hidden_dim)
- self.V = nn.Linear(self.dec_hidden_dim, 1)
-
- def attention(self, x, hidden, enc_output):
- """
- x: (batch_size, 1, dec_embed_dim) -> after Embedding
- enc_output: batch_size, max_length, enc_hidden_dim *num_directions
- hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n)
- """
-
- ## perform addition to calculate the score
-
- # hidden_with_time_axis: batch_size, 1, hidden_dim
- ## hidden_with_time_axis = hidden.permute(1, 0, 2) ## replaced with below 2lines
- hidden_with_time_axis = (
- torch.sum(hidden, axis=0)
- if self.dec_rnn_type != "lstm"
- else torch.sum(hidden[0], axis=0)
- ) # h_n
-
- hidden_with_time_axis = hidden_with_time_axis.unsqueeze(1)
-
- # score: batch_size, max_length, hidden_dim
- score = torch.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))
-
- # attention_weights: batch_size, max_length, 1
- # we get 1 at the last axis because we are applying score to self.V
- attention_weights = torch.softmax(self.V(score), dim=1)
-
- # context_vector shape after sum == (batch_size, hidden_dim)
- context_vector = attention_weights * enc_output
- context_vector = torch.sum(context_vector, dim=1)
- # context_vector: batch_size, 1, hidden_dim
- context_vector = context_vector.unsqueeze(1)
-
- # attend_out (batch_size, 1, dec_embed_dim + hidden_size)
- attend_out = torch.cat((context_vector, x), -1)
-
- return attend_out, attention_weights
-
- def forward(self, x, hidden, enc_output):
- """
- x: (batch_size, 1)
- enc_output: batch_size, max_length, dec_embed_dim
- hidden: n_layer, batch_size, hidden_size | lstm: (h_n, c_n)
- """
- if (hidden is None) and (self.use_attention is False):
- raise Exception(
- "XlitError: No use of a decoder with No attention and No Hidden"
- )
-
- batch_sz = x.shape[0]
-
- if hidden is None:
- # hidden: n_layers, batch_size, hidden_dim
- hid_for_att = torch.zeros(
- (self.dec_layers, batch_sz, self.dec_hidden_dim)
- ).to(self.device)
- elif self.dec_rnn_type == "lstm":
- hid_for_att = hidden[1] # c_n
-
- # x (batch_size, 1, dec_embed_dim) -> after embedding
- x = self.embedding(x)
-
- if self.use_attention:
- # x (batch_size, 1, dec_embed_dim + hidden_size) -> after attention
- # aw: (batch_size, max_length, 1)
- x, aw = self.attention(x, hidden, enc_output)
- else:
- x, aw = x, 0
-
- # passing the concatenated vector to the GRU
- # output: (batch_size, n_layers, hidden_size)
- # hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n)
- output, hidden = (
- self.dec_rnn(x, hidden) if hidden is not None else self.dec_rnn(x)
- )
-
- # output :shp: (batch_size * 1, hidden_size)
- output = output.view(-1, output.size(2))
-
- # output :shp: (batch_size * 1, output_dim)
- output = self.fc(output)
-
- return output, hidden, aw
-
-
-class Seq2Seq(nn.Module):
- """
- Class dependency: Encoder, Decoder
- """
-
- def __init__(
- self, encoder, decoder, pass_enc2dec_hid=False, dropout=0, device="cpu"
- ):
- super(Seq2Seq, self).__init__()
-
- self.encoder = encoder
- self.decoder = decoder
- self.device = device
- self.pass_enc2dec_hid = pass_enc2dec_hid
- _force_en2dec_hid_conv = False
-
- if self.pass_enc2dec_hid:
- assert (
- decoder.dec_hidden_dim == encoder.enc_hidden_dim
- ), "Hidden Dimension of encoder and decoder must be same, or unset `pass_enc2dec_hid`"
- if decoder.use_attention:
- assert (
- decoder.enc_outstate_dim
- == encoder.enc_directions * encoder.enc_hidden_dim
- ), "Set `enc_out_dim` correctly in decoder"
- assert (
- self.pass_enc2dec_hid or decoder.use_attention
- ), "No use of a decoder with No attention and No Hidden from Encoder"
-
- self.use_conv_4_enc2dec_hid = False
- if (
- self.pass_enc2dec_hid
- and (encoder.enc_directions * encoder.enc_layers != decoder.dec_layers)
- ) or _force_en2dec_hid_conv:
- if encoder.enc_rnn_type == "lstm" or decoder.dec_rnn_type == "lstm":
- raise Exception(
- "XlitError: conv for enc2dec_hid not implemented; Change the layer numbers appropriately"
- )
-
- self.use_conv_4_enc2dec_hid = True
- self.enc_hid_1ax = encoder.enc_directions * encoder.enc_layers
- self.dec_hid_1ax = decoder.dec_layers
- self.e2d_hidden_conv = nn.Conv1d(self.enc_hid_1ax, self.dec_hid_1ax, 1)
-
- def enc2dec_hidden(self, enc_hidden):
- """
- enc_hidden: n_layer, batch_size, hidden_dim*num_directions
- TODO: Implement the logic for LSTM based models
- """
- # hidden: batch_size, enc_layer*num_directions, enc_hidden_dim
- hidden = enc_hidden.permute(1, 0, 2).contiguous()
- # hidden: batch_size, dec_layers, dec_hidden_dim -> [N,C,Tstep]
- hidden = self.e2d_hidden_conv(hidden)
-
- # hidden: dec_layers, batch_size , dec_hidden_dim
- hidden_for_dec = hidden.permute(1, 0, 2).contiguous()
-
- return hidden_for_dec
-
- def active_beam_inference(self, src, beam_width=3, max_tgt_sz=50):
- """Search based decoding
- src: (sequence_len)
- """
-
- def _avg_score(p_tup):
- """Used for Sorting
- TODO: divide by sequence length raised to a power alpha (hyperparameter)
- """
- return p_tup[0]
-
- import sys
-
- batch_size = 1
- start_tok = src[0]
- end_tok = src[-1]
- src_sz = torch.tensor([len(src)])
- src_ = src.unsqueeze(0)
-
- # enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction)
- # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim)
- enc_output, enc_hidden = self.encoder(src_, src_sz)
-
- if self.pass_enc2dec_hid:
- # dec_hidden: dec_layers, batch_size , dec_hidden_dim
- if self.use_conv_4_enc2dec_hid:
- init_dec_hidden = self.enc2dec_hidden(enc_hidden)
- else:
- init_dec_hidden = enc_hidden
- else:
- # dec_hidden -> Will be initialized to zeros internally
- init_dec_hidden = None
-
- # top_pred[][0] = Σ-log_softmax
- # top_pred[][1] = sequence torch.tensor shape: (1)
- # top_pred[][2] = dec_hidden
- top_pred_list = [(0, start_tok.unsqueeze(0), init_dec_hidden)]
-
- for t in range(max_tgt_sz):
- cur_pred_list = []
-
- for p_tup in top_pred_list:
- if p_tup[1][-1] == end_tok:
- cur_pred_list.append(p_tup)
- continue
-
- # dec_hidden: dec_layers, 1, hidden_dim
- # dec_output: 1, output_dim
- dec_output, dec_hidden, _ = self.decoder(
- x=p_tup[1][-1].view(1, 1), # dec_input: (1,1)
- hidden=p_tup[2],
- enc_output=enc_output,
- )
-
- ## π{prob} = Σ{log(prob)} -> to prevent diminishing
- # dec_output: (1, output_dim)
- dec_output = nn.functional.log_softmax(dec_output, dim=1)
- # pred_topk.values & pred_topk.indices: (1, beam_width)
- pred_topk = torch.topk(dec_output, k=beam_width, dim=1)
-
- for i in range(beam_width):
- sig_logsmx_ = p_tup[0] + pred_topk.values[0][i]
- # seq_tensor_ : (seq_len)
- seq_tensor_ = torch.cat((p_tup[1], pred_topk.indices[0][i].view(1)))
-
- cur_pred_list.append((sig_logsmx_, seq_tensor_, dec_hidden))
-
- cur_pred_list.sort(key=_avg_score, reverse=True) # Maximized order
- top_pred_list = cur_pred_list[:beam_width]
-
- # check if end_tok of all topk
- end_flags_ = [1 if t[1][-1] == end_tok else 0 for t in top_pred_list]
- if beam_width == sum(end_flags_):
- break
-
- pred_tnsr_list = [t[1] for t in top_pred_list]
-
- return pred_tnsr_list
-
-
-##===================== Glyph handlers =======================================
-
-
-class GlyphStrawboss:
- def __init__(self, glyphs="en"):
- """list of letters in a language in unicode
- lang: ISO Language code
- glyphs: json file with script information
- """
- if glyphs == "en":
- # Smallcase alone
- self.glyphs = [chr(alpha) for alpha in range(97, 122 + 1)]
- else:
- self.dossier = json.load(open(glyphs, encoding="utf-8"))
- self.glyphs = self.dossier["glyphs"]
- self.numsym_map = self.dossier["numsym_map"]
-
- self.char2idx = {}
- self.idx2char = {}
- self._create_index()
-
- def _create_index(self):
-
- self.char2idx["_"] = 0 # pad
- self.char2idx["$"] = 1 # start
- self.char2idx["#"] = 2 # end
- self.char2idx["*"] = 3 # Mask
- self.char2idx["'"] = 4 # apostrophe U+0027
- self.char2idx["%"] = 5 # unused
- self.char2idx["!"] = 6 # unused
-
- # letter to index mapping
- for idx, char in enumerate(self.glyphs):
- self.char2idx[char] = idx + 7 # +7 token initially
-
- # index to letter mapping
- for char, idx in self.char2idx.items():
- self.idx2char[idx] = char
-
- def size(self):
- return len(self.char2idx)
-
- def word2xlitvec(self, word):
- """Converts a given string of glyphs (word) to a numpy vector
- Also adds tokens for start and end
- """
- try:
- vec = [self.char2idx["$"]] # start token
- for i in list(word):
- vec.append(self.char2idx[i])
- vec.append(self.char2idx["#"]) # end token
-
- vec = np.asarray(vec, dtype=np.int64)
- return vec
-
- except Exception as error:
- print("XlitError: In word:", word, "Error Char not in Token:", error)
- sys.exit()
-
- def xlitvec2word(self, vector):
- """Converts vector(numpy) to string of glyphs(word)"""
- char_list = []
- for i in vector:
- char_list.append(self.idx2char[i])
-
- word = "".join(char_list).replace("$", "").replace("#", "") # remove tokens
- word = word.replace("_", "").replace("*", "") # remove tokens
- return word
-
-
-class VocabSanitizer:
- def __init__(self, data_file):
- """
- data_file: path to file containing vocabulary list
- """
- extension = os.path.splitext(data_file)[-1]
- if extension == ".json":
- self.vocab_set = set(json.load(open(data_file, encoding="utf-8")))
- elif extension == ".csv":
- self.vocab_df = pd.read_csv(data_file).set_index("WORD")
- self.vocab_set = set(self.vocab_df.index)
- else:
- print("XlitError: Only Json/CSV file extension supported")
-
- def reposition(self, word_list):
- """Reorder Words in list"""
- new_list = []
- temp_ = word_list.copy()
- for v in word_list:
- if v in self.vocab_set:
- new_list.append(v)
- temp_.remove(v)
- new_list.extend(temp_)
-
- return new_list
-
-
-##=============== INSTANTIATION ================================================
-
-
-class XlitPiston:
- """
- For handling prediction & post-processing of transliteration for a single language
- Class dependency: Seq2Seq, GlyphStrawboss, VocabSanitizer
- Global Variables: F_DIR
- """
-
- def __init__(
- self,
- weight_path,
- vocab_file,
- tglyph_cfg_file,
- iglyph_cfg_file="en",
- device="cpu",
- ):
-
- self.device = device
- self.in_glyph_obj = GlyphStrawboss(iglyph_cfg_file)
- self.tgt_glyph_obj = GlyphStrawboss(glyphs=tglyph_cfg_file)
- self.voc_sanity = VocabSanitizer(vocab_file)
-
- self._numsym_set = set(
- json.load(open(tglyph_cfg_file, encoding="utf-8"))["numsym_map"].keys()
- )
- self._inchar_set = set("abcdefghijklmnopqrstuvwxyz")
- self._natscr_set = set().union(
- self.tgt_glyph_obj.glyphs, sum(self.tgt_glyph_obj.numsym_map.values(), [])
- )
-
- ## Model Config Static TODO: add defining in json support
- input_dim = self.in_glyph_obj.size()
- output_dim = self.tgt_glyph_obj.size()
- enc_emb_dim = 300
- dec_emb_dim = 300
- enc_hidden_dim = 512
- dec_hidden_dim = 512
- rnn_type = "lstm"
- enc2dec_hid = True
- attention = True
- enc_layers = 1
- dec_layers = 2
- m_dropout = 0
- enc_bidirect = True
- enc_outstate_dim = enc_hidden_dim * (2 if enc_bidirect else 1)
-
- enc = Encoder(
- input_dim=input_dim,
- embed_dim=enc_emb_dim,
- hidden_dim=enc_hidden_dim,
- rnn_type=rnn_type,
- layers=enc_layers,
- dropout=m_dropout,
- device=self.device,
- bidirectional=enc_bidirect,
- )
- dec = Decoder(
- output_dim=output_dim,
- embed_dim=dec_emb_dim,
- hidden_dim=dec_hidden_dim,
- rnn_type=rnn_type,
- layers=dec_layers,
- dropout=m_dropout,
- use_attention=attention,
- enc_outstate_dim=enc_outstate_dim,
- device=self.device,
- )
- self.model = Seq2Seq(enc, dec, pass_enc2dec_hid=enc2dec_hid, device=self.device)
- self.model = self.model.to(self.device)
- weights = torch.load(weight_path, map_location=torch.device(self.device))
-
- self.model.load_state_dict(weights)
- self.model.eval()
-
- def character_model(self, word, beam_width=1):
- in_vec = torch.from_numpy(self.in_glyph_obj.word2xlitvec(word)).to(self.device)
- ## change to active or passive beam
- p_out_list = self.model.active_beam_inference(in_vec, beam_width=beam_width)
- p_result = [
- self.tgt_glyph_obj.xlitvec2word(out.cpu().numpy()) for out in p_out_list
- ]
-
- result = self.voc_sanity.reposition(p_result)
-
- # List type
- return result
-
- def numsym_model(self, seg):
- """tgt_glyph_obj.numsym_map[x] returns a list object"""
- if len(seg) == 1:
- return [seg] + self.tgt_glyph_obj.numsym_map[seg]
-
- a = [self.tgt_glyph_obj.numsym_map[n][0] for n in seg]
- return [seg] + ["".join(a)]
-
- def _word_segementer(self, sequence):
-
- sequence = sequence.lower()
- accepted = set().union(self._numsym_set, self._inchar_set, self._natscr_set)
- # sequence = ''.join([i for i in sequence if i in accepted])
-
- segment = []
- idx = 0
- seq_ = list(sequence)
- while len(seq_):
- # for Number-Symbol
- temp = ""
- while len(seq_) and seq_[0] in self._numsym_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- # for Target Chars
- temp = ""
- while len(seq_) and seq_[0] in self._natscr_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- # for Input-Roman Chars
- temp = ""
- while len(seq_) and seq_[0] in self._inchar_set:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- temp = ""
- while len(seq_) and seq_[0] not in accepted:
- temp += seq_[0]
- seq_.pop(0)
- if temp != "":
- segment.append(temp)
-
- return segment
-
- def inferencer(self, sequence, beam_width=10):
-
- seg = self._word_segementer(sequence[:120])
- lit_seg = []
-
- p = 0
- while p < len(seg):
- if seg[p][0] in self._natscr_set:
- lit_seg.append([seg[p]])
- p += 1
-
- elif seg[p][0] in self._inchar_set:
- lit_seg.append(self.character_model(seg[p], beam_width=beam_width))
- p += 1
-
- elif seg[p][0] in self._numsym_set: # num & punc
- lit_seg.append(self.numsym_model(seg[p]))
- p += 1
- else:
- lit_seg.append([seg[p]])
- p += 1
-
- ## IF segment less/equal to 2 then return combinotorial,
- ## ELSE only return top1 of each result concatenated
- if len(lit_seg) == 1:
- final_result = lit_seg[0]
-
- elif len(lit_seg) == 2:
- final_result = [""]
- for seg in lit_seg:
- new_result = []
- for s in seg:
- for f in final_result:
- new_result.append(f + s)
- final_result = new_result
-
- else:
- new_result = []
- for seg in lit_seg:
- new_result.append(seg[0])
- final_result = ["".join(new_result)]
-
- return final_result
-
-
-from collections.abc import Iterable
-from pydload import dload
-import zipfile
-
-MODEL_DOWNLOAD_URL_PREFIX = "https://github.com/AI4Bharat/IndianNLP-Transliteration/releases/download/xlit_v0.5.0/"
-
-
-def is_folder_writable(folder):
- try:
- os.makedirs(folder, exist_ok=True)
- tmp_file = os.path.join(folder, ".write_test")
- with open(tmp_file, "w") as f:
- f.write("Permission Check")
- os.remove(tmp_file)
- return True
- except:
- return False
-
-
-def is_directory_writable(path):
- if os.name == "nt":
- return is_folder_writable(path)
- return os.access(path, os.W_OK | os.X_OK)
-
-
-class XlitEngine:
- """
- For Managing the top level tasks and applications of transliteration
- Global Variables: F_DIR
- """
-
- def __init__(
- self, lang2use="all", config_path="translit_models/default_lineup.json"
- ):
-
- lineup = json.load(open(os.path.join(F_DIR, config_path), encoding="utf-8"))
- self.lang_config = {}
- if isinstance(lang2use, str):
- if lang2use == "all":
- self.lang_config = lineup
- elif lang2use in lineup:
- self.lang_config[lang2use] = lineup[lang2use]
- else:
- raise Exception(
- "XlitError: The entered language code was not found. Available codes are {}".format(
- lineup.keys()
- )
- )
-
- elif isinstance(lang2use, Iterable):
- for l in lang2use:
- try:
- self.lang_config[l] = lineup[l]
- except:
- print(
- "XlitError: Language code {} not found, Skipping...".format(l)
- )
- else:
- raise Exception(
- "XlitError: lang2use must be a list of language codes or a string with a single language code"
- )
-
- if is_directory_writable(F_DIR):
- models_path = os.path.join(F_DIR, "translit_models")
- else:
- user_home = os.path.expanduser("~")
- models_path = os.path.join(user_home, ".AI4Bharat_Xlit_Models")
- os.makedirs(models_path, exist_ok=True)
- self.download_models(models_path)
-
- self.langs = {}
- self.lang_model = {}
- for la in self.lang_config:
- try:
- print("Loading {}...".format(la))
- self.lang_model[la] = XlitPiston(
- weight_path=os.path.join(
- models_path, self.lang_config[la]["weight"]
- ),
- vocab_file=os.path.join(models_path, self.lang_config[la]["vocab"]),
- tglyph_cfg_file=os.path.join(
- models_path, self.lang_config[la]["script"]
- ),
- iglyph_cfg_file="en",
- )
- self.langs[la] = self.lang_config[la]["name"]
- except Exception as error:
- print("XlitError: Failure in loading {} \n".format(la), error)
- print(XlitError.loading_err.value)
-
- def download_models(self, models_path):
- """
- Download models from GitHub Releases if not exists
- """
- for l in self.lang_config:
- lang_name = self.lang_config[l]["eng_name"]
- lang_model_path = os.path.join(models_path, lang_name)
- if not os.path.isdir(lang_model_path):
- print("Downloading model for language: %s" % lang_name)
- remote_url = MODEL_DOWNLOAD_URL_PREFIX + lang_name + ".zip"
- downloaded_zip_path = os.path.join(models_path, lang_name + ".zip")
- dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None)
-
- if not os.path.isfile(downloaded_zip_path):
- exit(
- f"ERROR: Unable to download model from {remote_url} into {models_path}"
- )
-
- with zipfile.ZipFile(downloaded_zip_path, "r") as zip_ref:
- zip_ref.extractall(models_path)
-
- if os.path.isdir(lang_model_path):
- os.remove(downloaded_zip_path)
- else:
- exit(
- f"ERROR: Unable to find models in {lang_model_path} after download"
- )
- return
-
- def translit_word(self, eng_word, lang_code="default", topk=7, beam_width=10):
- if eng_word == "":
- return []
-
- if lang_code in self.langs:
- try:
- res_list = self.lang_model[lang_code].inferencer(
- eng_word, beam_width=beam_width
- )
- return res_list[:topk]
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- elif lang_code == "default":
- try:
- res_dict = {}
- for la in self.lang_model:
- res = self.lang_model[la].inferencer(
- eng_word, beam_width=beam_width
- )
- res_dict[la] = res[:topk]
- return res_dict
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- else:
- print("XlitError: Unknown language requested", lang_code)
- print(XlitError.lang_err.value)
- return XlitError.lang_err
-
- def translit_sentence(self, eng_sentence, lang_code="default", beam_width=10):
- if eng_sentence == "":
- return []
-
- if lang_code in self.langs:
- try:
- out_str = ""
- for word in eng_sentence.split():
- res_ = self.lang_model[lang_code].inferencer(
- word, beam_width=beam_width
- )
- out_str = out_str + res_[0] + " "
- return out_str[:-1]
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- elif lang_code == "default":
- try:
- res_dict = {}
- for la in self.lang_model:
- out_str = ""
- for word in eng_sentence.split():
- res_ = self.lang_model[la].inferencer(
- word, beam_width=beam_width
- )
- out_str = out_str + res_[0] + " "
- res_dict[la] = out_str[:-1]
- return res_dict
-
- except Exception as error:
- print("XlitError:", traceback.format_exc())
- print(XlitError.internal_err.value)
- return XlitError.internal_err
-
- else:
- print("XlitError: Unknown Langauge requested", lang_code)
- print(XlitError.lang_err.value)
- return XlitError.lang_err
-
-
-if __name__ == "__main__":
-
- available_lang = [
- "bn",
- "gu",
- "hi",
- "kn",
- "gom",
- "mai",
- "ml",
- "mr",
- "pa",
- "sd",
- "si",
- "ta",
- "te",
- "ur",
- ]
-
- reg = re.compile(r"[a-zA-Z]")
- lang = "hi"
- engine = XlitEngine(
- lang
- ) # if you don't specify lang code here, this will give results in all langs available
- sent = "Hello World! ABCD क्या हाल है आपका?"
- words = [
- engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word
- for word in sent.split()
-    ]  # transliterate only the English words, leaving the rest as-is
- updated_sent = " ".join(words)
-
- print(updated_sent)
-
- # output : हेलो वर्ल्ड! क्या हाल है आपका?
-
- # y = engine.translit_sentence("Hello World !")['hi']
- # print(y)
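Below is a minimal usage sketch for the `XlitEngine` class defined above, assuming the class is imported from this module; the language codes and `topk` value are illustrative only. It exercises the multi-language path, where `translit_word` and `translit_sentence` return one result per loaded language.

```python
# Hedged usage sketch for XlitEngine (assumes it is importable from this module).
engine = XlitEngine(["hi", "ta"])  # an iterable of language codes is accepted

# With the default lang_code, translit_word returns a dict keyed by language code.
for lang, candidates in engine.translit_word("namaste", topk=3).items():
    print(lang, candidates)  # top-3 candidate transliterations per language

# translit_sentence transliterates word by word and returns one string per language.
print(engine.translit_sentence("hello world")["hi"])
```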
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/app.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/app.py
deleted file mode 100644
index 34c3f358e5f53be91f739c88cd71f6310bbe0d46..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/app.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import os
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/malayalam/ml-IN/female_voice_0/glow.zip && unzip -q glow.zip -d ttsv/checkpoints/female')
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/malayalam/ml-IN/female_voice_0/hifi.zip && unzip -q hifi.zip -d ttsv/checkpoints/female')
-os.system('rm glow.zip && rm hifi.zip')
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/malayalam/ml-IN/male_voice_1/glow.zip && unzip -q glow.zip -d ttsv/checkpoints/male')
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/tts/malayalam/ml-IN/male_voice_1/hifi.zip && unzip -q hifi.zip -d ttsv/checkpoints/male')
-os.system('wget -q https://storage.googleapis.com/vakyansh-open-models/translit_models.zip -P ttsv/checkpoints/ && unzip -q ttsv/checkpoints/translit_models.zip -d ttsv/checkpoints/')
-
-
-for path, subdirs, files in os.walk('ttsv/checkpoints/'):
- print(subdirs)
- for name in files:
- print(os.path.join(path, name))
-
-from ttsv.utils.inference.run_gradio import *
-from argparse import Namespace
-
-#os.system('python ttsv/utils/inference/run_gradio.py -a ttsv/checkpoints/glow/male -v ttsv/checkpoints/hifi/male -d cpu -L hi')
-
-
-args = {
- 'acoustic':'/home/user/app/ttsv/checkpoints/female/fe_glow,/home/user/app/ttsv/checkpoints/male/glow',
- 'vocoder':'/home/user/app/ttsv/checkpoints/female/hifi,/home/user/app/ttsv/checkpoints/male/hifi',
- 'device':'cpu',
- 'lang':'ml'
-}
-
-build_gradio(Namespace(**args))
\ No newline at end of file
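The app above bootstraps its checkpoints by shelling out to `wget` and `unzip`. As a rough, hedged equivalent, the same download-and-extract step can be written in pure Python with `urllib` and `zipfile`; the URL and destination below are taken from the script above, but this sketch is illustrative, not the Space's actual bootstrap code.

```python
import os
import urllib.request
import zipfile

def fetch_and_extract(url: str, dest_dir: str) -> None:
    """Download a zip archive and extract it into dest_dir (sketch only)."""
    os.makedirs(dest_dir, exist_ok=True)
    zip_path = os.path.join(dest_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, zip_path)   # download the archive
    with zipfile.ZipFile(zip_path, "r") as zf:
        zf.extractall(dest_dir)                 # unpack it in place
    os.remove(zip_path)                         # discard the archive afterwards

# Same female-voice glow checkpoint that the script above fetches with wget.
fetch_and_extract(
    "https://storage.googleapis.com/vakyansh-open-models/tts/malayalam/ml-IN/female_voice_0/glow.zip",
    "ttsv/checkpoints/female",
)
```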
diff --git a/spaces/Hina4867/bingo/src/lib/bots/bing/index.ts b/spaces/Hina4867/bingo/src/lib/bots/bing/index.ts
deleted file mode 100644
index 2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000
--- a/spaces/Hina4867/bingo/src/lib/bots/bing/index.ts
+++ /dev/null
@@ -1,421 +0,0 @@
-import { fetch, WebSocket, debug } from '@/lib/isomorphic'
-import WebSocketAsPromised from 'websocket-as-promised'
-import {
- SendMessageParams,
- BingConversationStyle,
- ConversationResponse,
- ChatResponseMessage,
- ConversationInfo,
- InvocationEventType,
- ChatError,
- ErrorCode,
- ChatUpdateCompleteResponse,
- ImageInfo,
- KBlobResponse
-} from './types'
-
-import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils'
-import { WatchDog, createChunkDecoder } from '@/lib/utils'
-
-type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }>
-
-const OPTIONS_SETS = [
- 'nlu_direct_response_filter',
- 'deepleo',
- 'disable_emoji_spoken_text',
- 'responsible_ai_policy_235',
- 'enablemm',
- 'iycapbing',
- 'iyxapbing',
- 'objopinion',
- 'rweasgv2',
- 'dagslnv1',
- 'dv3sugg',
- 'autosave',
- 'iyoloxap',
- 'iyoloneutral',
- 'clgalileo',
- 'gencontentv3',
-]
-
-export class BingWebBot {
- protected conversationContext?: ConversationInfo
- protected cookie: string
- protected ua: string
- protected endpoint = ''
- private lastText = ''
-  private asyncTasks: Array<Promise<void>> = []
-
- constructor(opts: {
- cookie: string
- ua: string
- bingConversationStyle?: BingConversationStyle
- conversationContext?: ConversationInfo
- }) {
- const { cookie, ua, conversationContext } = opts
- this.cookie = cookie?.includes(';') ? cookie : `_EDGE_V=1; _U=${cookie}`
- this.ua = ua
- this.conversationContext = conversationContext
- }
-
- static buildChatRequest(conversation: ConversationInfo) {
- const optionsSets = OPTIONS_SETS
- if (conversation.conversationStyle === BingConversationStyle.Precise) {
- optionsSets.push('h3precise')
- } else if (conversation.conversationStyle === BingConversationStyle.Creative) {
- optionsSets.push('h3imaginative')
- }
- return {
- arguments: [
- {
- source: 'cib',
- optionsSets,
- allowedMessageTypes: [
- 'Chat',
- 'InternalSearchQuery',
- 'Disengaged',
- 'InternalLoaderMessage',
- 'SemanticSerp',
- 'GenerateContentQuery',
- 'SearchQuery',
- ],
- sliceIds: [
- 'winmuid1tf',
- 'anssupfor_c',
- 'imgchatgptv2',
- 'tts2cf',
- 'contansperf',
- 'mlchatpc8500w',
- 'mlchatpc2',
- 'ctrlworkpay',
- 'winshortmsgtf',
- 'cibctrl',
- 'sydtransctrl',
- 'sydconfigoptc',
- '0705trt4',
- '517opinion',
- '628ajcopus0',
- '330uaugs0',
- '529rwea',
- '0626snptrcs0',
- '424dagslnv1',
- ],
- isStartOfSession: conversation.invocationId === 0,
- message: {
- author: 'user',
- inputMethod: 'Keyboard',
- text: conversation.prompt,
- imageUrl: conversation.imageUrl,
- messageType: 'Chat',
- },
- conversationId: conversation.conversationId,
- conversationSignature: conversation.conversationSignature,
- participant: { id: conversation.clientId },
- },
- ],
- invocationId: conversation.invocationId.toString(),
- target: 'chat',
- type: InvocationEventType.StreamInvocation,
- }
- }
-
-  async createConversation(): Promise<ConversationResponse> {
- const headers = {
- 'Accept-Encoding': 'gzip, deflate, br, zsdch',
- 'User-Agent': this.ua,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- cookie: this.cookie,
- }
-
- let resp: ConversationResponse | undefined
- try {
- const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' })
- if (response.status === 404) {
- throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR)
- }
- resp = await response.json() as ConversationResponse
- } catch (err) {
- console.error('create conversation error', err)
- }
-
- if (!resp?.result) {
- throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR)
- }
-
- const { value, message } = resp.result || {}
- if (value !== 'Success') {
- const errorMsg = `${value}: ${message}`
- if (value === 'UnauthorizedRequest') {
- throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED)
- }
- if (value === 'Forbidden') {
- throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN)
- }
- throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR)
- }
- return resp
- }
-
- private async createContext(conversationStyle: BingConversationStyle) {
- if (!this.conversationContext) {
- const conversation = await this.createConversation()
- this.conversationContext = {
- conversationId: conversation.conversationId,
- conversationSignature: conversation.conversationSignature,
- clientId: conversation.clientId,
- invocationId: 0,
- conversationStyle,
- prompt: '',
- }
- }
- return this.conversationContext
- }
-
- async sendMessage(params: Params) {
- try {
- await this.createContext(params.options.bingConversationStyle)
- Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl })
- return this.sydneyProxy(params)
- } catch (error) {
- params.onEvent({
- type: 'ERROR',
- error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR),
- })
- }
- }
-
- private async sydneyProxy(params: Params) {
- const abortController = new AbortController()
- const response = await fetch(this.endpoint + '/api/sydney', {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- },
- signal: abortController.signal,
- body: JSON.stringify(this.conversationContext!)
- })
- if (response.status !== 200) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- 'Unknown error',
- ErrorCode.UNKOWN_ERROR,
- ),
- })
- }
- params.signal?.addEventListener('abort', () => {
- abortController.abort()
- })
-
- const textDecoder = createChunkDecoder()
- for await (const chunk of streamAsyncIterable(response.body!)) {
- this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk)))
- }
- }
-
- async sendWs() {
-    const wsConfig: ConstructorParameters<typeof WebSocketAsPromised>[1] = {
- packMessage: websocketUtils.packMessage,
- unpackMessage: websocketUtils.unpackMessage,
- createWebSocket: (url) => new WebSocket(url, {
- headers: {
- 'accept-language': 'zh-CN,zh;q=0.9',
- 'cache-control': 'no-cache',
- 'User-Agent': this.ua,
- pragma: 'no-cache',
- cookie: this.cookie,
- }
- })
- }
- const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig)
-
- wsp.open().then(() => {
- wsp.sendPacked({ protocol: 'json', version: 1 })
- wsp.sendPacked({ type: 6 })
- wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!))
- })
-
- return wsp
- }
-
- private async useWs(params: Params) {
- const wsp = await this.sendWs()
- const watchDog = new WatchDog()
- wsp.onUnpackedMessage.addListener((events) => {
- watchDog.watch(() => {
- wsp.sendPacked({ type: 6 })
- })
- this.parseEvents(params, events)
- })
-
- wsp.onClose.addListener(() => {
- watchDog.reset()
- params.onEvent({ type: 'DONE' })
- wsp.removeAllListeners()
- })
-
- params.signal?.addEventListener('abort', () => {
- wsp.removeAllListeners()
- wsp.close()
- })
- }
-
- private async createImage(prompt: string, id: string) {
- try {
- const headers = {
- 'Accept-Encoding': 'gzip, deflate, br, zsdch',
- 'User-Agent': this.ua,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- cookie: this.cookie,
- }
- const query = new URLSearchParams({
- prompt,
- id
- })
- const response = await fetch(this.endpoint + '/api/image?' + query.toString(),
- {
- method: 'POST',
- headers,
- mode: 'cors',
- credentials: 'include'
- })
- .then(res => res.text())
- if (response) {
- this.lastText += '\n' + response
- }
- } catch (err) {
- console.error('Create Image Error', err)
- }
- }
-
- private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) {
- const imageInfo: ImageInfo = {}
- let imageBase64: string | undefined = undefined
- const knowledgeRequest = {
- imageInfo,
- knowledgeRequest: {
- invokedSkills: [
- 'ImageById'
- ],
- subscriptionId: 'Bing.Chat.Multimodal',
- invokedSkillsRequestData: {
- enableFaceBlur: true
- },
- convoData: {
- convoid: this.conversationContext?.conversationId,
- convotone: conversationStyle,
- }
- },
- }
-
- if (imageUrl.startsWith('data:image/')) {
- imageBase64 = imageUrl.replace('data:image/', '');
- const partIndex = imageBase64.indexOf(',')
- if (partIndex) {
- imageBase64 = imageBase64.substring(partIndex + 1)
- }
- } else {
- imageInfo.url = imageUrl
- }
- return { knowledgeRequest, imageBase64 }
- }
-
-  async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise<KBlobResponse | undefined> {
- if (!imageUrl) {
- return
- }
- await this.createContext(conversationStyle)
- const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle)
-
- const response = await fetch(this.endpoint + '/api/kblob',
- {
- headers: {
- 'Content-Type': 'application/json',
- },
- method: 'POST',
- mode: 'cors',
- credentials: 'include',
- body: JSON.stringify(payload),
- })
- .then(res => res.json())
- .catch(e => {
- console.log('Error', e)
- })
- return response
- }
-
- private async generateContent(message: ChatResponseMessage) {
- if (message.contentType === 'IMAGE') {
- this.asyncTasks.push(this.createImage(message.text, message.messageId))
- }
- }
-
- private async parseEvents(params: Params, events: any) {
- const conversation = this.conversationContext!
-
- events?.forEach(async (event: ChatUpdateCompleteResponse) => {
- debug('bing event', event)
- if (event.type === 3) {
- await Promise.all(this.asyncTasks)
- this.asyncTasks = []
- params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } })
- params.onEvent({ type: 'DONE' })
- conversation.invocationId = parseInt(event.invocationId, 10) + 1
- } else if (event.type === 1) {
- const messages = event.arguments[0].messages
- if (messages) {
- const text = convertMessageToMarkdown(messages[0])
- this.lastText = text
- params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } })
- }
- } else if (event.type === 2) {
- const messages = event.item.messages as ChatResponseMessage[] | undefined
- if (!messages) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- event.item.result.error || 'Unknown error',
- event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT
- : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA)
- : ErrorCode.UNKOWN_ERROR
- ),
- })
- return
- }
- const limited = messages.some((message) =>
- message.contentOrigin === 'TurnLimiter'
- || message.messageType === 'Disengaged'
- )
- if (limited) {
- params.onEvent({
- type: 'ERROR',
- error: new ChatError(
- 'Sorry, you have reached chat limit in this conversation.',
- ErrorCode.CONVERSATION_LIMIT,
- ),
- })
- return
- }
-
- const lastMessage = event.item.messages.at(-1) as ChatResponseMessage
- const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE')
- if (specialMessage) {
- this.generateContent(specialMessage)
- }
-
- if (lastMessage) {
- const text = convertMessageToMarkdown(lastMessage)
- this.lastText = text
- params.onEvent({
- type: 'UPDATE_ANSWER',
- data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions },
- })
- }
- }
- })
- }
-
- resetConversation() {
- this.conversationContext = undefined
- }
-}
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/realesrgan_utils.py b/spaces/Iceclear/StableSR/StableSR/basicsr/utils/realesrgan_utils.py
deleted file mode 100644
index ff934e5150b4aa568a51ab9614a2057b011a6014..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/utils/realesrgan_utils.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import queue
-import threading
-import torch
-from basicsr.utils.download_util import load_file_from_url
-from torch.nn import functional as F
-
-# ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class RealESRGANer():
- """A helper class for upsampling images with RealESRGAN.
-
- Args:
- scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4.
-        model_path (str): The path to the pretrained model. It can also be a URL (the file is downloaded automatically first).
- model (nn.Module): The defined network. Default: None.
-        tile (int): Since very large input images can exhaust GPU memory, this option first crops the
-            input image into tiles and processes each tile separately. The results are finally merged
-            back into one image. 0 disables tiling. Default: 0.
- tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10.
- pre_pad (int): Pad the input images to avoid border artifacts. Default: 10.
-        half (bool): Whether to use half precision during inference. Default: False.
- """
-
- def __init__(self,
- scale,
- model_path,
- model=None,
- tile=0,
- tile_pad=10,
- pre_pad=10,
- half=False,
- device=None,
- gpu_id=None):
- self.scale = scale
- self.tile_size = tile
- self.tile_pad = tile_pad
- self.pre_pad = pre_pad
- self.mod_scale = None
- self.half = half
-
- # initialize model
- if gpu_id:
- self.device = torch.device(
- f'cuda:{gpu_id}' if torch.cuda.is_available() else 'cpu') if device is None else device
- else:
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device is None else device
-        # if the model_path starts with https, it will first download the model to the folder: weights/realesrgan
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join('weights/realesrgan'), progress=True, file_name=None)
- loadnet = torch.load(model_path, map_location=torch.device('cpu'))
- # prefer to use params_ema
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- model.load_state_dict(loadnet[keyname], strict=True)
- model.eval()
- self.model = model.to(self.device)
- if self.half:
- self.model = self.model.half()
-
- def pre_process(self, img):
- """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
- """
- img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
- self.img = img.unsqueeze(0).to(self.device)
- if self.half:
- self.img = self.img.half()
-
- # pre_pad
- if self.pre_pad != 0:
- self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
- # mod pad for divisible borders
- if self.scale == 2:
- self.mod_scale = 2
- elif self.scale == 1:
- self.mod_scale = 4
- if self.mod_scale is not None:
- self.mod_pad_h, self.mod_pad_w = 0, 0
- _, _, h, w = self.img.size()
- if (h % self.mod_scale != 0):
- self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
- if (w % self.mod_scale != 0):
- self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
- self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), 'reflect')
-
- def process(self):
- # model inference
- self.output = self.model(self.img)
-
- def tile_process(self):
- """It will first crop input images to tiles, and then process each tile.
- Finally, all the processed tiles are merged into one images.
-
- Modified from: https://github.com/ata4/esrgan-launcher
- """
- batch, channel, height, width = self.img.shape
- output_height = height * self.scale
- output_width = width * self.scale
- output_shape = (batch, channel, output_height, output_width)
-
- # start with black image
- self.output = self.img.new_zeros(output_shape)
- tiles_x = math.ceil(width / self.tile_size)
- tiles_y = math.ceil(height / self.tile_size)
-
- # loop over all tiles
- for y in range(tiles_y):
- for x in range(tiles_x):
- # extract tile from input image
- ofs_x = x * self.tile_size
- ofs_y = y * self.tile_size
- # input tile area on total image
- input_start_x = ofs_x
- input_end_x = min(ofs_x + self.tile_size, width)
- input_start_y = ofs_y
- input_end_y = min(ofs_y + self.tile_size, height)
-
- # input tile area on total image with padding
- input_start_x_pad = max(input_start_x - self.tile_pad, 0)
- input_end_x_pad = min(input_end_x + self.tile_pad, width)
- input_start_y_pad = max(input_start_y - self.tile_pad, 0)
- input_end_y_pad = min(input_end_y + self.tile_pad, height)
-
- # input tile dimensions
- input_tile_width = input_end_x - input_start_x
- input_tile_height = input_end_y - input_start_y
- tile_idx = y * tiles_x + x + 1
- input_tile = self.img[:, :, input_start_y_pad:input_end_y_pad, input_start_x_pad:input_end_x_pad]
-
- # upscale tile
- try:
- with torch.no_grad():
- output_tile = self.model(input_tile)
- except RuntimeError as error:
- print('Error', error)
- # print(f'\tTile {tile_idx}/{tiles_x * tiles_y}')
-
- # output tile area on total image
- output_start_x = input_start_x * self.scale
- output_end_x = input_end_x * self.scale
- output_start_y = input_start_y * self.scale
- output_end_y = input_end_y * self.scale
-
- # output tile area without padding
- output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale
- output_end_x_tile = output_start_x_tile + input_tile_width * self.scale
- output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale
- output_end_y_tile = output_start_y_tile + input_tile_height * self.scale
-
- # put tile into output image
- self.output[:, :, output_start_y:output_end_y,
- output_start_x:output_end_x] = output_tile[:, :, output_start_y_tile:output_end_y_tile,
- output_start_x_tile:output_end_x_tile]
-
- def post_process(self):
- # remove extra pad
- if self.mod_scale is not None:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale, 0:w - self.mod_pad_w * self.scale]
- # remove prepad
- if self.pre_pad != 0:
- _, _, h, w = self.output.size()
- self.output = self.output[:, :, 0:h - self.pre_pad * self.scale, 0:w - self.pre_pad * self.scale]
- return self.output
-
- @torch.no_grad()
- def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
- h_input, w_input = img.shape[0:2]
- # img: numpy
- img = img.astype(np.float32)
- if np.max(img) > 256: # 16-bit image
- max_range = 65535
- print('\tInput is a 16-bit image')
- else:
- max_range = 255
- img = img / max_range
- if len(img.shape) == 2: # gray image
- img_mode = 'L'
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
- elif img.shape[2] == 4: # RGBA image with alpha channel
- img_mode = 'RGBA'
- alpha = img[:, :, 3]
- img = img[:, :, 0:3]
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- if alpha_upsampler == 'realesrgan':
- alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB)
- else:
- img_mode = 'RGB'
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
-
- # ------------------- process image (without the alpha channel) ------------------- #
- self.pre_process(img)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_img = self.post_process()
- output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
- if img_mode == 'L':
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY)
-
- # ------------------- process the alpha channel if necessary ------------------- #
- if img_mode == 'RGBA':
- if alpha_upsampler == 'realesrgan':
- self.pre_process(alpha)
- if self.tile_size > 0:
- self.tile_process()
- else:
- self.process()
- output_alpha = self.post_process()
- output_alpha = output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy()
- output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0))
- output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY)
- else: # use the cv2 resize for alpha channel
- h, w = alpha.shape[0:2]
- output_alpha = cv2.resize(alpha, (w * self.scale, h * self.scale), interpolation=cv2.INTER_LINEAR)
-
- # merge the alpha channel
- output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA)
- output_img[:, :, 3] = output_alpha
-
- # ------------------------------ return ------------------------------ #
- if max_range == 65535: # 16-bit image
- output = (output_img * 65535.0).round().astype(np.uint16)
- else:
- output = (output_img * 255.0).round().astype(np.uint8)
-
- if outscale is not None and outscale != float(self.scale):
- output = cv2.resize(
- output, (
- int(w_input * outscale),
- int(h_input * outscale),
- ), interpolation=cv2.INTER_LANCZOS4)
-
- return output, img_mode
-
-
-class PrefetchReader(threading.Thread):
- """Prefetch images.
-
- Args:
-        img_list (list[str]): A list of image paths to be read.
- num_prefetch_queue (int): Number of prefetch queue.
- """
-
- def __init__(self, img_list, num_prefetch_queue):
- super().__init__()
- self.que = queue.Queue(num_prefetch_queue)
- self.img_list = img_list
-
- def run(self):
- for img_path in self.img_list:
- img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
- self.que.put(img)
-
- self.que.put(None)
-
- def __next__(self):
- next_item = self.que.get()
- if next_item is None:
- raise StopIteration
- return next_item
-
- def __iter__(self):
- return self
-
-
-class IOConsumer(threading.Thread):
-
- def __init__(self, opt, que, qid):
- super().__init__()
- self._queue = que
- self.qid = qid
- self.opt = opt
-
- def run(self):
- while True:
- msg = self._queue.get()
- if isinstance(msg, str) and msg == 'quit':
- break
-
- output = msg['output']
- save_path = msg['save_path']
- cv2.imwrite(save_path, output)
- print(f'IO worker {self.qid} is done.')
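A hedged usage sketch for the `RealESRGANer` helper above: the `RRDBNet` import, its hyperparameters, and the checkpoint path are assumptions (the usual x4 RealESRGAN setup), not values taken from this repository.

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet  # assumed to be available alongside basicsr.utils

# Hypothetical x4 checkpoint and the standard RRDBNet configuration for it.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(
    scale=4,
    model_path="weights/RealESRGAN_x4plus.pth",  # assumption: a local weight file
    model=model,
    tile=400,      # enable tiling so large inputs do not exhaust GPU memory
    tile_pad=10,
    pre_pad=10,
    half=False,
)

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)  # BGR/BGRA, as enhance() expects
output, img_mode = upsampler.enhance(img, outscale=4)
cv2.imwrite("output.png", output)
```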
diff --git a/spaces/Iceclear/StableSR/StableSR/clip/model.py b/spaces/Iceclear/StableSR/StableSR/clip/model.py
deleted file mode 100644
index 232b7792eb97440642547bd462cf128df9243933..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/clip/model.py
+++ /dev/null
@@ -1,436 +0,0 @@
-from collections import OrderedDict
-from typing import Tuple, Union
-
-import numpy as np
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self, inplanes, planes, stride=1):
- super().__init__()
-
- # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
- self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
- self.bn2 = nn.BatchNorm2d(planes)
- self.relu2 = nn.ReLU(inplace=True)
-
- self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
-
- self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.relu3 = nn.ReLU(inplace=True)
-
- self.downsample = None
- self.stride = stride
-
- if stride > 1 or inplanes != planes * Bottleneck.expansion:
- # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
- self.downsample = nn.Sequential(OrderedDict([
- ("-1", nn.AvgPool2d(stride)),
- ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
- ("1", nn.BatchNorm2d(planes * self.expansion))
- ]))
-
- def forward(self, x: torch.Tensor):
- identity = x
-
- out = self.relu1(self.bn1(self.conv1(x)))
- out = self.relu2(self.bn2(self.conv2(out)))
- out = self.avgpool(out)
- out = self.bn3(self.conv3(out))
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
- out = self.relu3(out)
- return out
-
-
-class AttentionPool2d(nn.Module):
- def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
- super().__init__()
- self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
- self.k_proj = nn.Linear(embed_dim, embed_dim)
- self.q_proj = nn.Linear(embed_dim, embed_dim)
- self.v_proj = nn.Linear(embed_dim, embed_dim)
- self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
- self.num_heads = num_heads
-
- def forward(self, x):
- x = x.flatten(start_dim=2).permute(2, 0, 1) # NCHW -> (HW)NC
- x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
- x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
- x, _ = F.multi_head_attention_forward(
- query=x[:1], key=x, value=x,
- embed_dim_to_check=x.shape[-1],
- num_heads=self.num_heads,
- q_proj_weight=self.q_proj.weight,
- k_proj_weight=self.k_proj.weight,
- v_proj_weight=self.v_proj.weight,
- in_proj_weight=None,
- in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
- bias_k=None,
- bias_v=None,
- add_zero_attn=False,
- dropout_p=0,
- out_proj_weight=self.c_proj.weight,
- out_proj_bias=self.c_proj.bias,
- use_separate_proj_weight=True,
- training=self.training,
- need_weights=False
- )
- return x.squeeze(0)
-
-
-class ModifiedResNet(nn.Module):
- """
- A ResNet class that is similar to torchvision's but contains the following changes:
- - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
- - The final pooling layer is a QKV attention instead of an average pool
- """
-
- def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
- super().__init__()
- self.output_dim = output_dim
- self.input_resolution = input_resolution
-
- # the 3-layer stem
- self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
- self.bn1 = nn.BatchNorm2d(width // 2)
- self.relu1 = nn.ReLU(inplace=True)
- self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
- self.bn2 = nn.BatchNorm2d(width // 2)
- self.relu2 = nn.ReLU(inplace=True)
- self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
- self.bn3 = nn.BatchNorm2d(width)
- self.relu3 = nn.ReLU(inplace=True)
- self.avgpool = nn.AvgPool2d(2)
-
- # residual layers
- self._inplanes = width # this is a *mutable* variable used during construction
- self.layer1 = self._make_layer(width, layers[0])
- self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
- self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
- self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
-
- embed_dim = width * 32 # the ResNet feature dimension
- self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
-
- def _make_layer(self, planes, blocks, stride=1):
- layers = [Bottleneck(self._inplanes, planes, stride)]
-
- self._inplanes = planes * Bottleneck.expansion
- for _ in range(1, blocks):
- layers.append(Bottleneck(self._inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- def stem(x):
- x = self.relu1(self.bn1(self.conv1(x)))
- x = self.relu2(self.bn2(self.conv2(x)))
- x = self.relu3(self.bn3(self.conv3(x)))
- x = self.avgpool(x)
- return x
-
- x = x.type(self.conv1.weight.dtype)
- x = stem(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.attnpool(x)
-
- return x
-
-
-class LayerNorm(nn.LayerNorm):
- """Subclass torch's LayerNorm to handle fp16."""
-
- def forward(self, x: torch.Tensor):
- orig_type = x.dtype
- ret = super().forward(x.type(torch.float32))
- return ret.type(orig_type)
-
-
-class QuickGELU(nn.Module):
- def forward(self, x: torch.Tensor):
- return x * torch.sigmoid(1.702 * x)
-
-
-class ResidualAttentionBlock(nn.Module):
- def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
- super().__init__()
-
- self.attn = nn.MultiheadAttention(d_model, n_head)
- self.ln_1 = LayerNorm(d_model)
- self.mlp = nn.Sequential(OrderedDict([
- ("c_fc", nn.Linear(d_model, d_model * 4)),
- ("gelu", QuickGELU()),
- ("c_proj", nn.Linear(d_model * 4, d_model))
- ]))
- self.ln_2 = LayerNorm(d_model)
- self.attn_mask = attn_mask
-
- def attention(self, x: torch.Tensor):
- self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
- return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
-
- def forward(self, x: torch.Tensor):
- x = x + self.attention(self.ln_1(x))
- x = x + self.mlp(self.ln_2(x))
- return x
-
-
-class Transformer(nn.Module):
- def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
- super().__init__()
- self.width = width
- self.layers = layers
- self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
-
- def forward(self, x: torch.Tensor):
- return self.resblocks(x)
-
-
-class VisionTransformer(nn.Module):
- def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
- super().__init__()
- self.input_resolution = input_resolution
- self.output_dim = output_dim
- self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
-
- scale = width ** -0.5
- self.class_embedding = nn.Parameter(scale * torch.randn(width))
- self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
- self.ln_pre = LayerNorm(width)
-
- self.transformer = Transformer(width, layers, heads)
-
- self.ln_post = LayerNorm(width)
- self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
-
- def forward(self, x: torch.Tensor):
- x = self.conv1(x) # shape = [*, width, grid, grid]
- x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
- x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
- x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
- x = x + self.positional_embedding.to(x.dtype)
- x = self.ln_pre(x)
-
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer(x)
- x = x.permute(1, 0, 2) # LND -> NLD
-
- x = self.ln_post(x[:, 0, :])
-
- if self.proj is not None:
- x = x @ self.proj
-
- return x
-
-
-class CLIP(nn.Module):
- def __init__(self,
- embed_dim: int,
- # vision
- image_resolution: int,
- vision_layers: Union[Tuple[int, int, int, int], int],
- vision_width: int,
- vision_patch_size: int,
- # text
- context_length: int,
- vocab_size: int,
- transformer_width: int,
- transformer_heads: int,
- transformer_layers: int
- ):
- super().__init__()
-
- self.context_length = context_length
-
- if isinstance(vision_layers, (tuple, list)):
- vision_heads = vision_width * 32 // 64
- self.visual = ModifiedResNet(
- layers=vision_layers,
- output_dim=embed_dim,
- heads=vision_heads,
- input_resolution=image_resolution,
- width=vision_width
- )
- else:
- vision_heads = vision_width // 64
- self.visual = VisionTransformer(
- input_resolution=image_resolution,
- patch_size=vision_patch_size,
- width=vision_width,
- layers=vision_layers,
- heads=vision_heads,
- output_dim=embed_dim
- )
-
- self.transformer = Transformer(
- width=transformer_width,
- layers=transformer_layers,
- heads=transformer_heads,
- attn_mask=self.build_attention_mask()
- )
-
- self.vocab_size = vocab_size
- self.token_embedding = nn.Embedding(vocab_size, transformer_width)
- self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
- self.ln_final = LayerNorm(transformer_width)
-
- self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
- self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
-
- self.initialize_parameters()
-
- def initialize_parameters(self):
- nn.init.normal_(self.token_embedding.weight, std=0.02)
- nn.init.normal_(self.positional_embedding, std=0.01)
-
- if isinstance(self.visual, ModifiedResNet):
- if self.visual.attnpool is not None:
- std = self.visual.attnpool.c_proj.in_features ** -0.5
- nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
- nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
-
- for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
- for name, param in resnet_block.named_parameters():
- if name.endswith("bn3.weight"):
- nn.init.zeros_(param)
-
- proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
- attn_std = self.transformer.width ** -0.5
- fc_std = (2 * self.transformer.width) ** -0.5
- for block in self.transformer.resblocks:
- nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
- nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
- nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
- nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
-
- if self.text_projection is not None:
- nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
-
- def build_attention_mask(self):
- # lazily create causal attention mask, with full attention between the vision tokens
- # pytorch uses additive attention mask; fill with -inf
- mask = torch.empty(self.context_length, self.context_length)
- mask.fill_(float("-inf"))
- mask.triu_(1) # zero out the lower diagonal
- return mask
-
- @property
- def dtype(self):
- return self.visual.conv1.weight.dtype
-
- def encode_image(self, image):
- return self.visual(image.type(self.dtype))
-
- def encode_text(self, text):
- x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
-
- x = x + self.positional_embedding.type(self.dtype)
- x = x.permute(1, 0, 2) # NLD -> LND
- x = self.transformer(x)
- x = x.permute(1, 0, 2) # LND -> NLD
- x = self.ln_final(x).type(self.dtype)
-
- # x.shape = [batch_size, n_ctx, transformer.width]
- # take features from the eot embedding (eot_token is the highest number in each sequence)
- x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
-
- return x
-
- def forward(self, image, text):
- image_features = self.encode_image(image)
- text_features = self.encode_text(text)
-
- # normalized features
- image_features = image_features / image_features.norm(dim=1, keepdim=True)
- text_features = text_features / text_features.norm(dim=1, keepdim=True)
-
- # cosine similarity as logits
- logit_scale = self.logit_scale.exp()
- logits_per_image = logit_scale * image_features @ text_features.t()
- logits_per_text = logits_per_image.t()
-
- # shape = [global_batch_size, global_batch_size]
- return logits_per_image, logits_per_text
-
-
-def convert_weights(model: nn.Module):
- """Convert applicable model parameters to fp16"""
-
- def _convert_weights_to_fp16(l):
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
- if isinstance(l, nn.MultiheadAttention):
- for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
- tensor = getattr(l, attr)
- if tensor is not None:
- tensor.data = tensor.data.half()
-
- for name in ["text_projection", "proj"]:
- if hasattr(l, name):
- attr = getattr(l, name)
- if attr is not None:
- attr.data = attr.data.half()
-
- model.apply(_convert_weights_to_fp16)
-
-
-def build_model(state_dict: dict):
- vit = "visual.proj" in state_dict
-
- if vit:
- vision_width = state_dict["visual.conv1.weight"].shape[0]
- vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
- vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
- grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
- image_resolution = vision_patch_size * grid_size
- else:
- counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
- vision_layers = tuple(counts)
- vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
- output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
- vision_patch_size = None
- assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
- image_resolution = output_width * 32
-
- embed_dim = state_dict["text_projection"].shape[1]
- context_length = state_dict["positional_embedding"].shape[0]
- vocab_size = state_dict["token_embedding.weight"].shape[0]
- transformer_width = state_dict["ln_final.weight"].shape[0]
- transformer_heads = transformer_width // 64
- transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith("transformer.resblocks")))
-
- model = CLIP(
- embed_dim,
- image_resolution, vision_layers, vision_width, vision_patch_size,
- context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
- )
-
- for key in ["input_resolution", "context_length", "vocab_size"]:
- if key in state_dict:
- del state_dict[key]
-
- convert_weights(model)
- model.load_state_dict(state_dict)
- return model.eval()
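A hedged sketch of how `build_model` above is typically driven: load a plain state dict from a hypothetical checkpoint path, rebuild the CLIP module, and compare image and text features the same way `CLIP.forward` does. Since `build_model` calls `convert_weights`, the sketch casts back to fp32 so it also runs on CPU.

```python
import torch

# Hypothetical checkpoint path containing a plain CLIP state_dict.
state_dict = torch.load("clip_state_dict.pt", map_location="cpu")

model = build_model(state_dict).float()  # build_model half-precisions the weights; cast back for CPU

image = torch.randn(1, 3, model.visual.input_resolution, model.visual.input_resolution)
text = torch.randint(0, model.vocab_size, (1, model.context_length))

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity as logits, mirroring CLIP.forward above.
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    logits_per_image = model.logit_scale.exp() * image_features @ text_features.t()

print(logits_per_image.shape)  # [1, 1]
```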
diff --git a/spaces/Illumotion/Koboldcpp/LICENSE.md b/spaces/Illumotion/Koboldcpp/LICENSE.md
deleted file mode 100644
index 0ad25db4bd1d86c452db3f9602ccdbe172438f52..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/LICENSE.md
+++ /dev/null
@@ -1,661 +0,0 @@
- GNU AFFERO GENERAL PUBLIC LICENSE
- Version 3, 19 November 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
- Preamble
-
- The GNU Affero General Public License is a free, copyleft license for
-software and other kinds of works, specifically designed to ensure
-cooperation with the community in the case of network server software.
-
- The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-our General Public Licenses are intended to guarantee your freedom to
-share and change all versions of a program--to make sure it remains free
-software for all its users.
-
- When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
- Developers that use our General Public Licenses protect your rights
-with two steps: (1) assert copyright on the software, and (2) offer
-you this License which gives you legal permission to copy, distribute
-and/or modify the software.
-
- A secondary benefit of defending all users' freedom is that
-improvements made in alternate versions of the program, if they
-receive widespread use, become available for other developers to
-incorporate. Many developers of free software are heartened and
-encouraged by the resulting cooperation. However, in the case of
-software used on network servers, this result may fail to come about.
-The GNU General Public License permits making a modified version and
-letting the public access it on a server without ever releasing its
-source code to the public.
-
- The GNU Affero General Public License is designed specifically to
-ensure that, in such cases, the modified source code becomes available
-to the community. It requires the operator of a network server to
-provide the source code of the modified version running there to the
-users of that server. Therefore, public use of a modified version, on
-a publicly accessible server, gives the public access to the source
-code of the modified version.
-
- An older license, called the Affero General Public License and
-published by Affero, was designed to accomplish similar goals. This is
-a different license, not a version of the Affero GPL, but Affero has
-released a new version of the Affero GPL which permits relicensing under
-this license.
-
- The precise terms and conditions for copying, distribution and
-modification follow.
-
- TERMS AND CONDITIONS
-
- 0. Definitions.
-
- "This License" refers to version 3 of the GNU Affero General Public License.
-
- "Copyright" also means copyright-like laws that apply to other kinds of
-works, such as semiconductor masks.
-
- "The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
- To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of an
-exact copy. The resulting work is called a "modified version" of the
-earlier work or a work "based on" the earlier work.
-
- A "covered work" means either the unmodified Program or a work based
-on the Program.
-
- To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
- To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user through
-a computer network, with no transfer of a copy, is not conveying.
-
- An interactive user interface displays "Appropriate Legal Notices"
-to the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
-
- 1. Source Code.
-
- The "source code" for a work means the preferred form of the work
-for making modifications to it. "Object code" means any non-source
-form of a work.
-
- A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
- The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
- The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
- The Corresponding Source need not include anything that users
-can regenerate automatically from other parts of the Corresponding
-Source.
-
- The Corresponding Source for a work in source code form is that
-same work.
-
- 2. Basic Permissions.
-
- All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
- You may make, run and propagate covered works that you do not
-convey, without conditions so long as your license otherwise remains
-in force. You may convey covered works to others for the sole purpose
-of having them make modifications exclusively for you, or provide you
-with facilities for running those works, provided that you comply with
-the terms of this License in conveying all material for which you do
-not control copyright. Those thus making or running the covered works
-for you must do so exclusively on your behalf, under your direction
-and control, on terms that prohibit them from making any copies of
-your copyrighted material outside their relationship with you.
-
- Conveying under any other circumstances is permitted solely under
-the conditions stated below. Sublicensing is not allowed; section 10
-makes it unnecessary.
-
- 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-
- No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
-
- When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such circumvention
-is effected by exercising rights under this License with respect to
-the covered work, and you disclaim any intention to limit operation or
-modification of the work as a means of enforcing, against the work's
-users, your or third parties' legal rights to forbid circumvention of
-technological measures.
-
- 4. Conveying Verbatim Copies.
-
- You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
-
- You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
-
- 5. Conveying Modified Source Versions.
-
- You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these conditions:
-
- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-
- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under section
- 7. This requirement modifies the requirement in section 4 to
- "keep intact all notices".
-
- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-
- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
- A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
-
- 6. Conveying Non-Source Forms.
-
- You may convey a covered work in object code form under the terms
-of sections 4 and 5, provided that you also convey the
-machine-readable Corresponding Source under the terms of this License,
-in one of these ways:
-
- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-
- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the
- Corresponding Source from a network server at no charge.
-
- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-
- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-
- e) Convey the object code using peer-to-peer transmission, provided
- you inform other peers where the object code and Corresponding
- Source of the work are being offered to the general public at no
- charge under subsection 6d.
-
- A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
- A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal, family,
-or household purposes, or (2) anything designed or sold for incorporation
-into a dwelling. In determining whether a product is a consumer product,
-doubtful cases shall be resolved in favor of coverage. For a particular
-product received by a particular user, "normally used" refers to a
-typical or common use of that class of product, regardless of the status
-of the particular user or of the way in which the particular user
-actually uses, or expects or is expected to use, the product. A product
-is a consumer product regardless of whether the product has substantial
-commercial, industrial or non-consumer uses, unless such uses represent
-the only significant mode of use of the product.
-
- "Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to install
-and execute modified versions of a covered work in that User Product from
-a modified version of its Corresponding Source. The information must
-suffice to ensure that the continued functioning of the modified object
-code is in no case prevented or interfered with solely because
-modification has been made.
-
- If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
- The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or updates
-for a work that has been modified or installed by the recipient, or for
-the User Product in which it has been modified or installed. Access to a
-network may be denied when the modification itself materially and
-adversely affects the operation of the network or violates the rules and
-protocols for communication across the network.
-
- Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
-
- 7. Additional Terms.
-
- "Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
- When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
- Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders of
-that material) supplement the terms of this License with terms:
-
- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-
- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-
- c) Prohibiting misrepresentation of the origin of that material, or
- requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-
- d) Limiting the use for publicity purposes of names of licensors or
- authors of the material; or
-
- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-
- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions of
- it) with contractual assumptions of liability to the recipient, for
- any liability that these contractual assumptions directly impose on
- those licensors and authors.
-
- All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
- If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
- Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions;
-the above requirements apply either way.
-
- 8. Termination.
-
- You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
- However, if you cease all violation of this License, then your
-license from a particular copyright holder is reinstated (a)
-provisionally, unless and until the copyright holder explicitly and
-finally terminates your license, and (b) permanently, if the copyright
-holder fails to notify you of the violation by some reasonable means
-prior to 60 days after the cessation.
-
- Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
- Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
-
- 9. Acceptance Not Required for Having Copies.
-
- You are not required to accept this License in order to receive or
-run a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
-
- 10. Automatic Licensing of Downstream Recipients.
-
- Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
- An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
- You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
-
- 11. Patents.
-
- A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
- A contributor's "essential patent claims" are all patent claims
-owned or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
- Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
- In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
- If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
- If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
- A patent license is "discriminatory" if it does not include within
-the scope of its coverage, prohibits the exercise of, or is
-conditioned on the non-exercise of one or more of the rights that are
-specifically granted under this License. You may not convey a covered
-work if you are a party to an arrangement with a third party that is
-in the business of distributing software, under which you make payment
-to the third party based on the extent of your activity of conveying
-the work, and under which the third party grants, to any of the
-parties who would receive the covered work from you, a discriminatory
-patent license (a) in connection with copies of the covered work
-conveyed by you (or copies made from those copies), or (b) primarily
-for and in connection with specific products or compilations that
-contain the covered work, unless you entered into that arrangement,
-or that patent license was granted, prior to 28 March 2007.
-
- Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
-
- 12. No Surrender of Others' Freedom.
-
- If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you may
-not convey it at all. For example, if you agree to terms that obligate you
-to collect a royalty for further conveying from those to whom you convey
-the Program, the only way you could satisfy both those terms and this
-License would be to refrain entirely from conveying the Program.
-
- 13. Remote Network Interaction; Use with the GNU General Public License.
-
- Notwithstanding any other provision of this License, if you modify the
-Program, your modified version must prominently offer all users
-interacting with it remotely through a computer network (if your version
-supports such interaction) an opportunity to receive the Corresponding
-Source of your version by providing access to the Corresponding Source
-from a network server at no charge, through some standard or customary
-means of facilitating copying of software. This Corresponding Source
-shall include the Corresponding Source for any work covered by version 3
-of the GNU General Public License that is incorporated pursuant to the
-following paragraph.
-
- Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the work with which it is combined will remain governed by version
-3 of the GNU General Public License.
-
- 14. Revised Versions of this License.
-
- The Free Software Foundation may publish revised and/or new versions of
-the GNU Affero General Public License from time to time. Such new versions
-will be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
- Each version is given a distinguishing version number. If the
-Program specifies that a certain numbered version of the GNU Affero General
-Public License "or any later version" applies to it, you have the
-option of following the terms and conditions either of that numbered
-version or of any later version published by the Free Software
-Foundation. If the Program does not specify a version number of the
-GNU Affero General Public License, you may choose any version ever published
-by the Free Software Foundation.
-
- If the Program specifies that a proxy can decide which future
-versions of the GNU Affero General Public License can be used, that proxy's
-public statement of acceptance of a version permanently authorizes you
-to choose that version for the Program.
-
- Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
-
- 15. Disclaimer of Warranty.
-
- THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
-OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
-THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
-IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
-ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
- 16. Limitation of Liability.
-
- IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
-THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
-GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
-USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
-PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
-EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
-SUCH DAMAGES.
-
- 17. Interpretation of Sections 15 and 16.
-
- If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
-
- If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
- To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
- This program is free software: you can redistribute it and/or modify
- it under the terms of the GNU Affero General Public License as published
- by the Free Software Foundation, either version 3 of the License, or
- (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU Affero General Public License for more details.
-
- You should have received a copy of the GNU Affero General Public License
-    along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
- If your software can interact with users remotely through a computer
-network, you should also make sure that it provides a way for users to
-get its source. For example, if your program is a web application, its
-interface could display a "Source" link that leads users to an archive
-of the code. There are many ways you could offer source, and different
-solutions will be better for different programs; see section 13 for the
-specific requirements.
-
- You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU AGPL, see
-<https://www.gnu.org/licenses/>.
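The "How to Apply These Terms" text above asks AGPL-licensed web applications to offer remote users a way to reach the Corresponding Source (section 13). As a rough, hypothetical sketch only (the repository URL and the `echo` function are placeholders, not anything taken from this repository), a Gradio space could surface such a link like this:

```python
# Hypothetical sketch: expose a "Source" link so remote users can reach the
# Corresponding Source, as section 13 of the AGPL asks. SOURCE_URL is a placeholder.
import gradio as gr

SOURCE_URL = "https://example.com/your-source-archive"  # placeholder; point at your real source archive

def echo(text: str) -> str:
    # Stand-in for the real application logic.
    return text

with gr.Blocks() as demo:
    gr.Markdown(f"Released under the GNU AGPL v3. [Source code]({SOURCE_URL})")
    box_in = gr.Textbox(label="Input")
    box_out = gr.Textbox(label="Output")
    box_in.submit(echo, inputs=box_in, outputs=box_out)

if __name__ == "__main__":
    demo.launch()
```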
diff --git a/spaces/Illumotion/Koboldcpp/Remote-Link.cmd b/spaces/Illumotion/Koboldcpp/Remote-Link.cmd
deleted file mode 100644
index a7c1f11096c1afb0f48c3ee38407902398e5f99e..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/Remote-Link.cmd
+++ /dev/null
@@ -1,18 +0,0 @@
-: # This script will help setup a cloudflared tunnel for accessing KoboldCpp over the internet
-: # It should work out of the box on both linux and windows
-: # ======
-: # WINDOWS PORTION
-        special_chars = ['<eod>', '</eod>', '#', '▃', '▁', '▂', ' ']
- for char in special_chars:
- msg = msg.replace(char, '')
- return msg
-
- def submit_API(self, prompt, trun=[]):
- """Submit prompt to yuan API interface and obtain an pure text reply.
- :prompt: Question or any content a user may input.
- :return: pure text response."""
- query = self.craft_query(prompt)
- res = self.response(query, engine=self.engine,
- max_tokens=self.max_tokens,
- temperature=self.temperature,
- topP=self.topP,
- topK=self.topK,
- frequencyPenalty=self.frequencyPenalty,
- responsePenalty=self.responsePenalty,
- noRepeatNgramSize=self.noRepeatNgramSize)
- if 'resData' in res and res['resData'] != None:
- txt = res['resData']
- else:
-            txt = '模型返回为空,请尝试修改输入'  # "The model returned an empty result; please try modifying the input"
-        # Post-processing applied only to the translation model
- if self.engine == 'translate':
- txt = txt.replace(' ##', '').replace(' "', '"').replace(": ", ":").replace(" ,", ",") \
- .replace('英文:', '').replace('文:', '').replace("( ", "(").replace(" )", ")")
- else:
- txt = txt.replace(' ', '')
- txt = self.del_special_chars(txt)
-
-        # trun: truncate the model output at any of the given stop strings
- if isinstance(trun, str):
- trun = [trun]
- try:
- if trun != None and isinstance(trun, list) and trun != []:
- for tr in trun:
- if tr in txt and tr != "":
- txt = txt[:txt.index(tr)]
- else:
- continue
- except:
- return txt
- return txt
-
-
-class YuanAPI:
- ACCOUNT = ''
- PHONE = ''
-
- SUBMIT_URL = "http://api.airyuan.cn:32102/v1/interface/api/infer/getRequestId?"
- REPLY_URL = "http://api.airyuan.cn:32102/v1/interface/api/result?"
-
- def __init__(self, user, phone):
- self.ACCOUNT = user
- self.PHONE = phone
-
- @staticmethod
- def code_md5(str):
- code = str.encode("utf-8")
- m = hashlib.md5()
- m.update(code)
- result = m.hexdigest()
- return result
-
- @staticmethod
- def rest_get(url, header, timeout, show_error=False):
- '''Call rest get method'''
- try:
- response = requests.get(url, headers=header, timeout=timeout, verify=False)
- return response
- except Exception as exception:
- if show_error:
- print(exception)
- return None
-
- def header_generation(self):
- """Generate header for API request."""
- t = datetime.now(pytz.timezone("Asia/Shanghai")).strftime("%Y-%m-%d")
- token = self.code_md5(self.ACCOUNT + self.PHONE + t)
- headers = {'token': token}
- return headers
-
- def submit_request(self, query, temperature, topP, topK, max_tokens, engine, frequencyPenalty, responsePenalty,
- noRepeatNgramSize):
- """Submit query to the backend server and get requestID."""
- headers = self.header_generation()
- # url=SUBMIT_URL + "account={0}&data={1}&temperature={2}&topP={3}&topK={4}&tokensToGenerate={5}&type={6}".format(ACCOUNT,query,temperature,topP,topK,max_tokens,"api")
- # url=SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \
- # "&type={7}".format(engine,ACCOUNT,query,temperature,topP,topK, max_tokens,"api")
- url = self.SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}&tokensToGenerate={6}" \
- "&type={7}&frequencyPenalty={8}&responsePenalty={9}&noRepeatNgramSize={10}". \
- format(engine, self.ACCOUNT, query, temperature, topP, topK, max_tokens, "api", frequencyPenalty,
- responsePenalty, noRepeatNgramSize)
- response = self.rest_get(url, headers, 30)
- response_text = json.loads(response.text)
- if response_text["flag"]:
- requestId = response_text["resData"]
- return requestId
- else:
- raise RuntimeWarning(response_text)
-
- def reply_request(self, requestId, cycle_count=5):
- """Check reply API to get the inference response."""
- url = self.REPLY_URL + "account={0}&requestId={1}".format(self.ACCOUNT, requestId)
- headers = self.header_generation()
- response_text = {"flag": True, "resData": None}
- for i in range(cycle_count):
- response = self.rest_get(url, headers, 30, show_error=True)
- response_text = json.loads(response.text)
- if response_text["resData"] is not None:
- return response_text
- if response_text["flag"] is False and i == cycle_count - 1:
- raise RuntimeWarning(response_text)
- time.sleep(3)
- return response_text
-
-
-class Yuan_Client(BaseLLMModel):
-
- def __init__(self, model_name, api_key, user_name="", system_prompt=None):
- super().__init__(model_name=model_name, user=user_name)
- self.history = []
- self.api_key = api_key
- self.system_prompt = system_prompt
-
- self.input_prefix = ""
- self.output_prefix = ""
-
- def set_text_prefix(self, option, value):
- if option == 'input_prefix':
- self.input_prefix = value
- elif option == 'output_prefix':
- self.output_prefix = value
-
- def get_answer_at_once(self):
- # yuan temperature is (0,1] and base model temperature is [0,2], and yuan 0.9 == base 1 so need to convert
- temperature = self.temperature if self.temperature <= 1 else 0.9 + (self.temperature - 1) / 10
- topP = self.top_p
- topK = self.n_choices
- # max_tokens should be in [1,200]
- max_tokens = self.max_generation_token if self.max_generation_token is not None else 50
- if max_tokens > 200:
- max_tokens = 200
- stop = self.stop_sequence if self.stop_sequence is not None else []
- examples = []
- system_prompt = self.system_prompt
- if system_prompt is not None:
- lines = system_prompt.splitlines()
- # TODO: support prefixes in system prompt or settings
- """
- if lines[0].startswith('-'):
- prefixes = lines.pop()[1:].split('|')
- self.input_prefix = prefixes[0]
- if len(prefixes) > 1:
- self.output_prefix = prefixes[1]
- if len(prefixes) > 2:
- stop = prefixes[2].split(',')
- """
- for i in range(0, len(lines), 2):
- in_line = lines[i]
- out_line = lines[i + 1] if i + 1 < len(lines) else ""
- examples.append((in_line, out_line))
- yuan = Yuan(engine=self.model_name.replace('yuanai-1.0-', ''),
- temperature=temperature,
- max_tokens=max_tokens,
- topK=topK,
- topP=topP,
- input_prefix=self.input_prefix,
- input_suffix="",
- output_prefix=self.output_prefix,
- output_suffix="".join(stop),
- )
- if not self.api_key:
- return NO_APIKEY_MSG, 0
- yuan.set_account(self.api_key)
-
- for in_line, out_line in examples:
- yuan.add_example(Example(inp=in_line, out=out_line))
-
- prompt = self.history[-1]["content"]
- answer = yuan.submit_API(prompt, trun=stop)
- return answer, len(answer)
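For reference, the temperature handling in `Yuan_Client.get_answer_at_once` above maps base-model temperatures in [0, 2] onto the (0, 1] range the Yuan API expects. A minimal standalone sketch of that conversion (the helper name is mine, not part of the module):

```python
# Minimal sketch of the temperature conversion used in get_answer_at_once:
# values at or below 1.0 pass through unchanged, and the (1, 2] range is
# compressed into (0.9, 1.0].
def to_yuan_temperature(base_temperature: float) -> float:
    if base_temperature <= 1:
        return base_temperature
    return 0.9 + (base_temperature - 1) / 10

assert to_yuan_temperature(0.7) == 0.7
assert to_yuan_temperature(1.0) == 1.0
assert abs(to_yuan_temperature(1.6) - 0.96) < 1e-9
assert abs(to_yuan_temperature(2.0) - 1.0) < 1e-9
```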
diff --git a/spaces/JosefJilek/loliDiffusionSpace/README.md b/spaces/JosefJilek/loliDiffusionSpace/README.md
deleted file mode 100644
index 32750d594431e7086068ce8f8bd1cad0191d68f4..0000000000000000000000000000000000000000
--- a/spaces/JosefJilek/loliDiffusionSpace/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Webui
-emoji: 🚧
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-duplicated_from: ai-moroz/webui-cpu
----
diff --git a/spaces/KPatrick/PaddleSpeechASR/app.py b/spaces/KPatrick/PaddleSpeechASR/app.py
deleted file mode 100644
index 7f1436bfc1e18ae99e50b5dbcdb8f1be01d69f1a..0000000000000000000000000000000000000000
--- a/spaces/KPatrick/PaddleSpeechASR/app.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import gradio as gr
-import librosa
-import numpy as np
-import paddlehub as hub
-from paddlenlp import Taskflow
-from paddlespeech.cli import ASRExecutor
-import soundfile as sf
-
-# asr_model = hub.Module(name='u2_conformer_aishell')
-asr_executor = ASRExecutor()
-text_correct_model = Taskflow("text_correction")
-punc_model = hub.Module(name='auto_punc')
-
-
-def speech_recognize(file):
- data, sr = librosa.load(file)
- if sr != 16000:
- data = librosa.resample(data, sr, 16000)
- sf.write(file, data, samplerate=16000)
-
- print(f'[Audio Input] shape: {data.shape}, dtype: {data.dtype}, file: {file}')
- # text = asr_model.speech_recognize(file, device='cpu')
- text = asr_executor(file)
- text_correction = text_correct_model(text)[0]
- cor_text, errors = text_correction['target'], text_correction['errors']
- print(f'[Text Correction] errors: {errors}')
- punc_text = punc_model.add_puncs(cor_text, device='cpu')[0]
-
- ret = ''
- ret += f'[ASR] {text}\n'
- ret += f'[COR] {cor_text}\n'
- ret += f'[PUN] {punc_text}'
- return ret
-
-
-iface = gr.Interface(
- fn=speech_recognize,
- inputs=gr.inputs.Audio(source="microphone", type='filepath'),
- outputs="text",
-)
-iface.launch()
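A hedged sketch of the resampling step `speech_recognize` above performs before handing audio to the recognizer: the ASR pipeline expects 16 kHz input, so anything else is resampled and written back over the input file. The helper name and file path are placeholders, and unlike the app this sketch loads at the native sample rate (`sr=None`) instead of librosa's default.

```python
# Sketch: normalize an audio file to 16 kHz before ASR. Helper name and path are
# placeholders; keyword arguments follow current librosa/soundfile signatures.
import librosa
import soundfile as sf

def ensure_16k(path: str) -> str:
    data, sr = librosa.load(path, sr=None)  # keep the file's native sample rate
    if sr != 16000:
        data = librosa.resample(data, orig_sr=sr, target_sr=16000)
        sf.write(path, data, samplerate=16000)  # overwrite with 16 kHz audio
    return path

# ensure_16k("recording.wav")  # then pass the path to the ASR executor
```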
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/slicer2.py b/spaces/Kangarroar/ApplioRVC-Inference/slicer2.py
deleted file mode 100644
index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/slicer2.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import numpy as np
-
-
-# This function is obtained from librosa.
-def get_rms(
- y,
- frame_length=2048,
- hop_length=512,
- pad_mode="constant",
-):
- padding = (int(frame_length // 2), int(frame_length // 2))
- y = np.pad(y, padding, mode=pad_mode)
-
- axis = -1
- # put our new within-frame axis at the end for now
- out_strides = y.strides + tuple([y.strides[axis]])
- # Reduce the shape on the framing axis
- x_shape_trimmed = list(y.shape)
- x_shape_trimmed[axis] -= frame_length - 1
- out_shape = tuple(x_shape_trimmed) + tuple([frame_length])
- xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides)
- if axis < 0:
- target_axis = axis - 1
- else:
- target_axis = axis + 1
- xw = np.moveaxis(xw, -1, target_axis)
- # Downsample along the target axis
- slices = [slice(None)] * xw.ndim
- slices[axis] = slice(0, None, hop_length)
- x = xw[tuple(slices)]
-
- # Calculate power
- power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True)
-
- return np.sqrt(power)
-
-
-class Slicer:
- def __init__(
- self,
- sr: int,
- threshold: float = -40.0,
- min_length: int = 5000,
- min_interval: int = 300,
- hop_size: int = 20,
- max_sil_kept: int = 5000,
- ):
- if not min_length >= min_interval >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: min_length >= min_interval >= hop_size"
- )
- if not max_sil_kept >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: max_sil_kept >= hop_size"
- )
- min_interval = sr * min_interval / 1000
- self.threshold = 10 ** (threshold / 20.0)
- self.hop_size = round(sr * hop_size / 1000)
- self.win_size = min(round(min_interval), 4 * self.hop_size)
- self.min_length = round(sr * min_length / 1000 / self.hop_size)
- self.min_interval = round(min_interval / self.hop_size)
- self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
-
- def _apply_slice(self, waveform, begin, end):
- if len(waveform.shape) > 1:
- return waveform[
- :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size)
- ]
- else:
- return waveform[
- begin * self.hop_size : min(waveform.shape[0], end * self.hop_size)
- ]
-
- # @timeit
- def slice(self, waveform):
- if len(waveform.shape) > 1:
- samples = waveform.mean(axis=0)
- else:
- samples = waveform
- if samples.shape[0] <= self.min_length:
- return [waveform]
- rms_list = get_rms(
- y=samples, frame_length=self.win_size, hop_length=self.hop_size
- ).squeeze(0)
- sil_tags = []
- silence_start = None
- clip_start = 0
- for i, rms in enumerate(rms_list):
- # Keep looping while frame is silent.
- if rms < self.threshold:
- # Record start of silent frames.
- if silence_start is None:
- silence_start = i
- continue
- # Keep looping while frame is not silent and silence start has not been recorded.
- if silence_start is None:
- continue
- # Clear recorded silence start if interval is not enough or clip is too short
- is_leading_silence = silence_start == 0 and i > self.max_sil_kept
- need_slice_middle = (
- i - silence_start >= self.min_interval
- and i - clip_start >= self.min_length
- )
- if not is_leading_silence and not need_slice_middle:
- silence_start = None
- continue
- # Need slicing. Record the range of silent frames to be removed.
- if i - silence_start <= self.max_sil_kept:
- pos = rms_list[silence_start : i + 1].argmin() + silence_start
- if silence_start == 0:
- sil_tags.append((0, pos))
- else:
- sil_tags.append((pos, pos))
- clip_start = pos
- elif i - silence_start <= self.max_sil_kept * 2:
- pos = rms_list[
- i - self.max_sil_kept : silence_start + self.max_sil_kept + 1
- ].argmin()
- pos += i - self.max_sil_kept
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- clip_start = pos_r
- else:
- sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
- clip_start = max(pos_r, pos)
- else:
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- else:
- sil_tags.append((pos_l, pos_r))
- clip_start = pos_r
- silence_start = None
- # Deal with trailing silence.
- total_frames = rms_list.shape[0]
- if (
- silence_start is not None
- and total_frames - silence_start >= self.min_interval
- ):
- silence_end = min(total_frames, silence_start + self.max_sil_kept)
- pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start
- sil_tags.append((pos, total_frames + 1))
- # Apply and return slices.
- if len(sil_tags) == 0:
- return [waveform]
- else:
- chunks = []
- if sil_tags[0][0] > 0:
- chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0]))
- for i in range(len(sil_tags) - 1):
- chunks.append(
- self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0])
- )
- if sil_tags[-1][1] < total_frames:
- chunks.append(
- self._apply_slice(waveform, sil_tags[-1][1], total_frames)
- )
- return chunks
-
-
-def main():
- import os.path
- from argparse import ArgumentParser
-
- import librosa
- import soundfile
-
- parser = ArgumentParser()
- parser.add_argument("audio", type=str, help="The audio to be sliced")
- parser.add_argument(
- "--out", type=str, help="Output directory of the sliced audio clips"
- )
- parser.add_argument(
- "--db_thresh",
- type=float,
- required=False,
- default=-40,
- help="The dB threshold for silence detection",
- )
- parser.add_argument(
- "--min_length",
- type=int,
- required=False,
- default=5000,
- help="The minimum milliseconds required for each sliced audio clip",
- )
- parser.add_argument(
- "--min_interval",
- type=int,
- required=False,
- default=300,
- help="The minimum milliseconds for a silence part to be sliced",
- )
- parser.add_argument(
- "--hop_size",
- type=int,
- required=False,
- default=10,
- help="Frame length in milliseconds",
- )
- parser.add_argument(
- "--max_sil_kept",
- type=int,
- required=False,
- default=500,
- help="The maximum silence length kept around the sliced clip, presented in milliseconds",
- )
- args = parser.parse_args()
- out = args.out
- if out is None:
- out = os.path.dirname(os.path.abspath(args.audio))
- audio, sr = librosa.load(args.audio, sr=None, mono=False)
- slicer = Slicer(
- sr=sr,
- threshold=args.db_thresh,
- min_length=args.min_length,
- min_interval=args.min_interval,
- hop_size=args.hop_size,
- max_sil_kept=args.max_sil_kept,
- )
- chunks = slicer.slice(audio)
- if not os.path.exists(out):
- os.makedirs(out)
- for i, chunk in enumerate(chunks):
- if len(chunk.shape) > 1:
- chunk = chunk.T
- soundfile.write(
- os.path.join(
- out,
-                "%s_%d.wav"
- % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i),
- ),
- chunk,
- sr,
- )
-
-
-if __name__ == "__main__":
- main()
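One detail of `Slicer.__init__` above that is easy to miss: the `--db_thresh` value is a level in dB, converted to a linear amplitude with `10 ** (threshold / 20)` so it can be compared directly against the RMS values returned by `get_rms`. A small worked sketch (the helper name is mine):

```python
# Worked sketch of the dB-to-amplitude conversion used for the silence threshold.
def db_to_amplitude(db: float) -> float:
    return 10 ** (db / 20.0)

print(db_to_amplitude(-40.0))  # 0.01 -> frames with RMS under 1% of full scale count as silence
print(db_to_amplitude(-20.0))  # 0.1
print(db_to_amplitude(0.0))    # 1.0
```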
diff --git a/spaces/Karwasze/Whisper-ASR-youtube-subtitles/app.py b/spaces/Karwasze/Whisper-ASR-youtube-subtitles/app.py
deleted file mode 100644
index e2eadb809dd64660e4f8daad46cc31c6b550b3f3..0000000000000000000000000000000000000000
--- a/spaces/Karwasze/Whisper-ASR-youtube-subtitles/app.py
+++ /dev/null
@@ -1,271 +0,0 @@
-import gradio as gr
-import os
-from pathlib import Path
-import time
-
-import pandas as pd
-import re
-import time
-import os
-
-import whisper
-from pytube import YouTube
-
-import psutil
-num_cores = psutil.cpu_count()
-os.environ["OMP_NUM_THREADS"] = f"{num_cores}"
-
-
-import torch
-
-
-# is cuda available?
-
-from easynmt import EasyNMT
-translation_model = EasyNMT('m2m_100_418M', max_new_tokens=60, max_length=60)
-
-asr_model = whisper.load_model("base")
-transcribe_options = dict(beam_size=3, best_of=3, without_timestamps=False, language="Spanish")
-
-translation_models = {
-"Finnish": "fi",
-"Swedish": "sv",
-"Danish": "da",
-"English": "en",
-"German": "de"
-}
-
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-print("DEVICE IS: ")
-print(device)
-
-videos_out_path = Path("./videos_out")
-videos_out_path.mkdir(parents=True, exist_ok=True)
-
-def get_youtube(video_url):
- yt = YouTube(video_url)
- abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download()
-    print("Downloaded to path:")
- print(abs_video_path)
-
- return abs_video_path
-
-async def speech_to_text(video_file_path, selected_translation_lang):
- """
-    # Youtube with translated subtitles using OpenAI Whisper and EasyNMT (M2M-100) models.
- # Currently supports only English audio
- This space allows you to:
- 1. Download youtube video with a given url
- 2. Watch it in the first video component
- 3. Run automatic speech recognition on the video using Whisper
- 4. Translate the recognized transcriptions to Finnish, Swedish, Danish, English, German (More languages coming later)
- 5. Burn the translations to the original video and watch the video in the 2nd video component
-
- Speech Recognition is based on OpenAI Whisper https://github.com/openai/whisper
- """
-
- if(video_file_path == None):
- raise ValueError("Error no video input")
- print(video_file_path)
- try:
- audio = whisper.load_audio(video_file_path)
- except Exception as e:
- raise RuntimeError("Error converting video to audio")
-
- last_time = time.time()
-
- try:
- print(f'Transcribing via local model')
- transcribe_options = dict(beam_size=5, best_of=5, without_timestamps=False)
-
- transcription = asr_model.transcribe(audio, **transcribe_options)
-
-
- #translation_options = dict(language=selected_translation_lang, beam_size=5, best_of=5, without_timestamps=False)
- #translations = asr_model.transcribe(audio, **translation_options)
-
- df = pd.DataFrame(columns=['start','end','text'])
-
-
-
- for i,segment in enumerate(transcription['segments']):
- new_row = {'start': segment['start'],
- 'end': segment['end'],
- 'text': segment['text']
- }
- df = df.append(new_row, ignore_index=True)
-
- if selected_translation_lang is None:
- selected_translation_lang = 'Finnish'
-
- sentences = df['text']
- df['translation'] = translation_model.translate(sentences, target_lang=translation_models.get(selected_translation_lang))
-
-
- print('After translation to target language \n')
-
- return (df)
- except Exception as e:
- raise RuntimeError("Error Running inference with local model", e)
-
-
-def create_srt_and_burn(df, video_in):
-
- print("Starting creation of video wit srt")
-
-
- with open('testi.srt','w', encoding="utf-8") as file:
- for i in range(len(df)):
- file.write(str(i+1))
- file.write('\n')
- start = df.iloc[i]['start']
-
-
- milliseconds = round(start * 1000.0)
-
- hours = milliseconds // 3_600_000
- milliseconds -= hours * 3_600_000
-
- minutes = milliseconds // 60_000
- milliseconds -= minutes * 60_000
-
- seconds = milliseconds // 1_000
- milliseconds -= seconds * 1_000
-
-            file.write(f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}")
-
- stop = df.iloc[i]['end']
-
-
- milliseconds = round(stop * 1000.0)
-
- hours = milliseconds // 3_600_000
- milliseconds -= hours * 3_600_000
-
- minutes = milliseconds // 60_000
- milliseconds -= minutes * 60_000
-
- seconds = milliseconds // 1_000
- milliseconds -= seconds * 1_000
-
-
- file.write(' --> ')
-            file.write(f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}")
- file.write('\n')
- file.writelines(df.iloc[i]['translation'])
- if int(i) != len(df)-1:
- file.write('\n\n')
-
- print("SRT DONE")
- try:
- file1 = open('./testi.srt', 'r', encoding="utf-8")
- Lines = file1.readlines()
-
- count = 0
- # Strips the newline character
- for line in Lines:
- count += 1
- print("{}".format(line))
-
- print(type(video_in))
- print(video_in)
-
- video_out = video_in.replace('.mp4', '_out.mp4')
- print(video_out)
- command = 'ffmpeg -i "{}" -y -vf subtitles=./testi.srt "{}"'.format(video_in, video_out)
- print(command)
- os.system(command)
- return video_out
- except Exception as e:
- print(e)
- return video_out
-
-
-# ---- Gradio Layout -----
-video_in = gr.Video(label="Video file", mirror_webcam=False)
-youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True)
-video_out = gr.Video(label="Video Out", mirror_webcam=False)
-
-
-df_init = pd.DataFrame(columns=['start','end','text','translation'])
-selected_translation_lang = gr.Dropdown(choices=["English", "German","Finnish","Swedish", "Danish"], type="value", value="English", label="Language to translate transcriptions to", interactive=True)
-
-transcription_df = gr.DataFrame(value=df_init,label="Transcription dataframe", row_count=(0, "dynamic"), max_rows = 10)
-
-
-demo = gr.Blocks(css='''
-#cut_btn, #reset_btn { align-self:stretch; }
-#\\31 3 { max-width: 540px; }
-.output-markdown {max-width: 65ch !important;}
-''')
-demo.encrypt = False
-with demo:
- transcription_var = gr.Variable()
-
- with gr.Row():
- with gr.Column():
- gr.Markdown('''
- ### This space allows you to:
- ##### 1. Download youtube video with a given URL
- ##### 2. Watch it in the first video component
- ##### 3. Run automatic speech recognition on the video using Whisper (Please remember to select translation language)
- ##### 4. Translate the recognized transcriptions to English, Finnish, Swedish, Danish and German
- ##### 5. Burn the translations to the original video and watch the video in the 2nd video component
- ''')
-
- with gr.Column():
- gr.Markdown('''
- ### 1. Insert Youtube URL below (Some examples below which I suggest to use for first tests)
- ##### 1. https://www.youtube.com/watch?v=nlMuHtV82q8&ab_channel=NothingforSale24
- ##### 2. https://www.youtube.com/watch?v=JzPfMbG1vrE&ab_channel=ExplainerVideosByLauren
- ##### 3. https://www.youtube.com/watch?v=S68vvV0kod8&ab_channel=Pearl-CohnTelevision
- ''')
-
- with gr.Row():
- with gr.Column():
- youtube_url_in.render()
- download_youtube_btn = gr.Button("Step 1. Download Youtube video")
- download_youtube_btn.click(get_youtube, [youtube_url_in], [
- video_in])
- print(video_in)
-
-
- with gr.Row():
- with gr.Column():
- video_in.render()
- with gr.Column():
- gr.Markdown('''
- ##### Here you can start the transcription and translation process.
- ##### Be aware that processing will last for a while (35 second video took around 20 seconds in my testing)
- ''')
- transcribe_btn = gr.Button("Step 2. Transcribe and translate audio")
-
- transcribe_btn.click(speech_to_text, [video_in, selected_translation_lang], transcription_df)
-
- with gr.Row():
- with gr.Column():
- selected_translation_lang.render()
-
- with gr.Row():
- gr.Markdown('''
- ##### Here you will get transcription and translation output
- ##### If you see error please remember to select translation language
- ##### ''')
-
- with gr.Row():
- with gr.Column():
- transcription_df.render()
-
- with gr.Row():
- with gr.Column():
- translate_and_make_srt_btn = gr.Button("Step 3. Create and burn srt to video")
- print(video_in)
- translate_and_make_srt_btn.click(create_srt_and_burn, [transcription_df,video_in], [
- video_out])
- video_out.render()
-
-
-if __name__ == "__main__":
- demo.launch(debug=True)
-
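The SRT writing loop in `create_srt_and_burn` above repeats the same millisecond arithmetic for the start and end of every segment. A compact sketch of that conversion as a standalone helper (the function name is mine; SRT timestamps use a comma before the milliseconds):

```python
# Sketch: float seconds -> "HH:MM:SS,mmm", the timestamp format SRT files use.
def to_srt_timestamp(seconds: float) -> str:
    milliseconds = round(seconds * 1000.0)
    hours, milliseconds = divmod(milliseconds, 3_600_000)
    minutes, milliseconds = divmod(milliseconds, 60_000)
    secs, milliseconds = divmod(milliseconds, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{milliseconds:03d}"

assert to_srt_timestamp(0.0) == "00:00:00,000"
assert to_srt_timestamp(61.5) == "00:01:01,500"
assert to_srt_timestamp(3723.042) == "01:02:03,042"
```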
diff --git a/spaces/Kevin676/AutoGPT/autogpt/commands/file_operations.py b/spaces/Kevin676/AutoGPT/autogpt/commands/file_operations.py
deleted file mode 100644
index ad145ec956dd9dafd39e09c2244d001cf5febd2f..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/commands/file_operations.py
+++ /dev/null
@@ -1,267 +0,0 @@
-"""File operations for AutoGPT"""
-from __future__ import annotations
-
-import os
-import os.path
-from typing import Generator
-
-import requests
-from colorama import Back, Fore
-from requests.adapters import HTTPAdapter, Retry
-
-from autogpt.spinner import Spinner
-from autogpt.utils import readable_file_size
-from autogpt.workspace import WORKSPACE_PATH, path_in_workspace
-
-LOG_FILE = "file_logger.txt"
-LOG_FILE_PATH = WORKSPACE_PATH / LOG_FILE
-
-
-def check_duplicate_operation(operation: str, filename: str) -> bool:
- """Check if the operation has already been performed on the given file
-
- Args:
- operation (str): The operation to check for
- filename (str): The name of the file to check for
-
- Returns:
- bool: True if the operation has already been performed on the file
- """
- log_content = read_file(LOG_FILE)
- log_entry = f"{operation}: {filename}\n"
- return log_entry in log_content
-
-
-def log_operation(operation: str, filename: str) -> None:
- """Log the file operation to the file_logger.txt
-
- Args:
- operation (str): The operation to log
- filename (str): The name of the file the operation was performed on
- """
- log_entry = f"{operation}: {filename}\n"
-
- # Create the log file if it doesn't exist
- if not os.path.exists(LOG_FILE_PATH):
- with open(LOG_FILE_PATH, "w", encoding="utf-8") as f:
- f.write("File Operation Logger ")
-
- append_to_file(LOG_FILE, log_entry, shouldLog=False)
-
-
-def split_file(
- content: str, max_length: int = 4000, overlap: int = 0
-) -> Generator[str, None, None]:
- """
- Split text into chunks of a specified maximum length with a specified overlap
- between chunks.
-
- :param content: The input text to be split into chunks
- :param max_length: The maximum length of each chunk,
- default is 4000 (about 1k token)
- :param overlap: The number of overlapping characters between chunks,
- default is no overlap
- :return: A generator yielding chunks of text
- """
- start = 0
- content_length = len(content)
-
- while start < content_length:
- end = start + max_length
- if end + overlap < content_length:
- chunk = content[start : end + overlap - 1]
- else:
- chunk = content[start:content_length]
-
- # Account for the case where the last chunk is shorter than the overlap, so it has already been consumed
- if len(chunk) <= overlap:
- break
-
- yield chunk
- start += max_length - overlap
-
-
-def read_file(filename: str) -> str:
- """Read a file and return the contents
-
- Args:
- filename (str): The name of the file to read
-
- Returns:
- str: The contents of the file
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "r", encoding="utf-8") as f:
- content = f.read()
- return content
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def ingest_file(
- filename: str, memory, max_length: int = 4000, overlap: int = 200
-) -> None:
- """
- Ingest a file by reading its content, splitting it into chunks with a specified
- maximum length and overlap, and adding the chunks to the memory storage.
-
- :param filename: The name of the file to ingest
- :param memory: An object with an add() method to store the chunks in memory
- :param max_length: The maximum length of each chunk, default is 4000
- :param overlap: The number of overlapping characters between chunks, default is 200
- """
- try:
- print(f"Working with file {filename}")
- content = read_file(filename)
- content_length = len(content)
- print(f"File length: {content_length} characters")
-
- chunks = list(split_file(content, max_length=max_length, overlap=overlap))
-
- num_chunks = len(chunks)
- for i, chunk in enumerate(chunks):
- print(f"Ingesting chunk {i + 1} / {num_chunks} into memory")
- memory_to_add = (
- f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}"
- )
-
- memory.add(memory_to_add)
-
- print(f"Done ingesting {num_chunks} chunks from {filename}.")
- except Exception as e:
- print(f"Error while ingesting file '{filename}': {str(e)}")
-
-
-def write_to_file(filename: str, text: str) -> str:
- """Write text to a file
-
- Args:
- filename (str): The name of the file to write to
- text (str): The text to write to the file
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("write", filename):
- return "Error: File has already been updated."
- try:
- filepath = path_in_workspace(filename)
- directory = os.path.dirname(filepath)
- if not os.path.exists(directory):
- os.makedirs(directory)
- with open(filepath, "w", encoding="utf-8") as f:
- f.write(text)
- log_operation("write", filename)
- return "File written to successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def append_to_file(filename: str, text: str, shouldLog: bool = True) -> str:
- """Append text to a file
-
- Args:
- filename (str): The name of the file to append to
- text (str): The text to append to the file
-
- Returns:
- str: A message indicating success or failure
- """
- try:
- filepath = path_in_workspace(filename)
- with open(filepath, "a") as f:
- f.write(text)
-
- if shouldLog:
- log_operation("append", filename)
-
- return "Text appended successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def delete_file(filename: str) -> str:
- """Delete a file
-
- Args:
- filename (str): The name of the file to delete
-
- Returns:
- str: A message indicating success or failure
- """
- if check_duplicate_operation("delete", filename):
- return "Error: File has already been deleted."
- try:
- filepath = path_in_workspace(filename)
- os.remove(filepath)
- log_operation("delete", filename)
- return "File deleted successfully."
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def search_files(directory: str) -> list[str]:
- """Search for files in a directory
-
- Args:
- directory (str): The directory to search in
-
- Returns:
- list[str]: A list of files found in the directory
- """
- found_files = []
-
- if directory in {"", "/"}:
- search_directory = WORKSPACE_PATH
- else:
- search_directory = path_in_workspace(directory)
-
- for root, _, files in os.walk(search_directory):
- for file in files:
- if file.startswith("."):
- continue
- relative_path = os.path.relpath(os.path.join(root, file), WORKSPACE_PATH)
- found_files.append(relative_path)
-
- return found_files
-
-
-def download_file(url, filename):
- """Downloads a file
- Args:
- url (str): URL of the file to download
- filename (str): Filename to save the file as
- """
- safe_filename = path_in_workspace(filename)
- try:
- message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}"
- with Spinner(message) as spinner:
- session = requests.Session()
- retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
- adapter = HTTPAdapter(max_retries=retry)
- session.mount("http://", adapter)
- session.mount("https://", adapter)
-
- total_size = 0
- downloaded_size = 0
-
- with session.get(url, allow_redirects=True, stream=True) as r:
- r.raise_for_status()
- total_size = int(r.headers.get("Content-Length", 0))
- downloaded_size = 0
-
- with open(safe_filename, "wb") as f:
- for chunk in r.iter_content(chunk_size=8192):
- f.write(chunk)
- downloaded_size += len(chunk)
-
- # Update the progress message
- progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}"
- spinner.update_message(f"{message} {progress}")
-
- return f'Successfully downloaded and locally stored file: "{filename}"! (Size: {readable_file_size(total_size)})'
- except requests.HTTPError as e:
- return f"Got an HTTP Error whilst trying to download file: {e}"
- except Exception as e:
- return "Error: " + str(e)
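As a quick illustration of the sliding-window behaviour of `split_file` above (assuming the function is importable from the module shown; the sample string is arbitrary), successive chunks advance by `max_length - overlap` characters and share a few characters with their neighbours:

```python
# Usage sketch for split_file with a short string, max_length=10, overlap=3.
from autogpt.commands.file_operations import split_file  # the module above

text = "abcdefghijklmnopqrstuvwxyz"

for chunk in split_file(text, max_length=10, overlap=3):
    print(repr(chunk))
# 'abcdefghijkl'
# 'hijklmnopqrs'
# 'opqrstuvwxyz'
# 'vwxyz'
```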
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-2.0/README.md b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-2.0/README.md
deleted file mode 100644
index 614a9fa7f53e6372e9dffdb061dccf0e674650ae..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-2.0/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: ⚡
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.11
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: BilalSardar/Voice-Cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/annualreturn.py b/spaces/Lianjd/stock_dashboard/backtrader/analyzers/annualreturn.py
deleted file mode 100644
index 07a9c835efe9f768c98e27c5bfa59d720647f61a..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/annualreturn.py
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-from collections import OrderedDict
-
-from backtrader.utils.py3 import range
-from backtrader import Analyzer
-
-
-class AnnualReturn(Analyzer):
- '''
- This analyzer calculates the AnnualReturns by looking at the beginning
- and end of the year
-
- Params:
-
- - (None)
-
- Member Attributes:
-
- - ``rets``: list of calculated annual returns
-
- - ``ret``: dictionary (key: year) of annual returns
-
- **get_analysis**:
-
- - Returns a dictionary of annual returns (key: year)
- '''
-
- def stop(self):
- # Must have stats.broker
- cur_year = -1
-
- value_start = 0.0
- value_cur = 0.0
- value_end = 0.0
-
- self.rets = list()
- self.ret = OrderedDict()
-
- for i in range(len(self.data) - 1, -1, -1):
- dt = self.data.datetime.date(-i)
- value_cur = self.strategy.stats.broker.value[-i]
-
- if dt.year > cur_year:
- if cur_year >= 0:
- annualret = (value_end / value_start) - 1.0
- self.rets.append(annualret)
- self.ret[cur_year] = annualret
-
- # changing between real years, use last value as new start
- value_start = value_end
- else:
- # No value set whatsoever, use the currently loaded value
- value_start = value_cur
-
- cur_year = dt.year
-
- # No matter what, the last value is always the last loaded value
- value_end = value_cur
-
- if cur_year not in self.ret:
- # finish calculating pending data
- annualret = (value_end / value_start) - 1.0
- self.rets.append(annualret)
- self.ret[cur_year] = annualret
-
- def get_analysis(self):
- return self.ret
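The core of `AnnualReturn.stop` above is the ratio of year-end to year-start portfolio value. A worked sketch with made-up broker values (the dictionary and numbers are invented for illustration):

```python
# Worked sketch of the per-year return formula: (value_end / value_start) - 1.
year_end_values = {2018: 100_000.0, 2019: 112_000.0, 2020: 98_560.0}  # invented values

previous = None
for year, value in sorted(year_end_values.items()):
    if previous is not None:
        print(year, round(value / previous - 1.0, 4))
    previous = value
# 2019 0.12    (portfolio grew 12%)
# 2020 -0.12   (then gave 12% back)
```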
diff --git a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/utils.py b/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/utils.py
deleted file mode 100644
index 6d8ad030f6ad0be98176226fce712e53b1b36fee..0000000000000000000000000000000000000000
--- a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/utils.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import PIL
-from PIL import Image, ImageDraw ,ImageFont
-from matplotlib import pyplot as plt
-import torchvision.transforms as T
-import os
-import torch
-import yaml
-
-# This file was copied from the DDPM inversion Repo - https://github.com/inbarhub/DDPM_inversion #
-
-def show_torch_img(img):
- img = to_np_image(img)
- plt.imshow(img)
- plt.axis("off")
-
-def to_np_image(all_images):
- all_images = (all_images.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).cpu().numpy()[0]
- return all_images
-
-def tensor_to_pil(tensor_imgs):
- if type(tensor_imgs) == list:
- tensor_imgs = torch.cat(tensor_imgs)
- tensor_imgs = (tensor_imgs / 2 + 0.5).clamp(0, 1)
- to_pil = T.ToPILImage()
- pil_imgs = [to_pil(img) for img in tensor_imgs]
- return pil_imgs
-
-def pil_to_tensor(pil_imgs):
- to_torch = T.ToTensor()
- if type(pil_imgs) == PIL.Image.Image:
- tensor_imgs = to_torch(pil_imgs).unsqueeze(0)*2-1
- elif type(pil_imgs) == list:
-        tensor_imgs = torch.cat([to_torch(img).unsqueeze(0) * 2 - 1 for img in pil_imgs])  # each PIL image, scaled to [-1, 1]
- else:
- raise Exception("Input need to be PIL.Image or list of PIL.Image")
- return tensor_imgs
-
-
-## TODO implement this
-# n = 10
-# num_rows = 4
-# num_col = n // num_rows
-# num_col = num_col + 1 if n % num_rows else num_col
-# num_col
-def add_margin(pil_img, top = 0, right = 0, bottom = 0,
- left = 0, color = (255,255,255)):
- width, height = pil_img.size
- new_width = width + right + left
- new_height = height + top + bottom
- result = Image.new(pil_img.mode, (new_width, new_height), color)
-
- result.paste(pil_img, (left, top))
- return result
-
-def image_grid(imgs, rows = 1, cols = None,
- size = None,
- titles = None, text_pos = (0, 0)):
- if type(imgs) == list and type(imgs[0]) == torch.Tensor:
- imgs = torch.cat(imgs)
- if type(imgs) == torch.Tensor:
- imgs = tensor_to_pil(imgs)
-
- if not size is None:
- imgs = [img.resize((size,size)) for img in imgs]
- if cols is None:
- cols = len(imgs)
- assert len(imgs) >= rows*cols
-
- top=20
- w, h = imgs[0].size
- delta = 0
- if len(imgs)> 1 and not imgs[1].size[1] == h:
- delta = top
- h = imgs[1].size[1]
- if not titles is None:
- font = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeMono.ttf",
- size = 20, encoding="unic")
- h = top + h
- grid = Image.new('RGB', size=(cols*w, rows*h+delta))
- for i, img in enumerate(imgs):
-
- if not titles is None:
- img = add_margin(img, top = top, bottom = 0,left=0)
- draw = ImageDraw.Draw(img)
- draw.text(text_pos, titles[i],(0,0,0),
- font = font)
- if not delta == 0 and i > 0:
- grid.paste(img, box=(i%cols*w, i//cols*h+delta))
- else:
- grid.paste(img, box=(i%cols*w, i//cols*h))
-
- return grid
-
-
-"""
-input_folder - dataset folder
-"""
-def load_dataset(input_folder):
- # full_file_names = glob.glob(input_folder)
- # class_names = [x[0] for x in os.walk(input_folder)]
- class_names = next(os.walk(input_folder))[1]
- class_names[:] = [d for d in class_names if not d[0] == '.']
- file_names=[]
- for class_name in class_names:
- cur_path = os.path.join(input_folder, class_name)
- filenames = next(os.walk(cur_path), (None, None, []))[2]
- filenames = [f for f in filenames if not f[0] == '.']
- file_names.append(filenames)
- return class_names, file_names
-
-
-def dataset_from_yaml(yaml_location):
- with open(yaml_location, 'r') as stream:
- data_loaded = yaml.safe_load(stream)
-
- return data_loaded
\ No newline at end of file
diff --git a/spaces/MZhaovo/Llama_Difu/README.md b/spaces/MZhaovo/Llama_Difu/README.md
deleted file mode 100644
index cd6e67c46c328e345118346e45d6df60198c295f..0000000000000000000000000000000000000000
--- a/spaces/MZhaovo/Llama_Difu/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Llama Difu
-emoji: 📚
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.20.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/__init__.py
deleted file mode 100644
index 5277f46157403e47fd830fc519144b97ef69d4ae..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/MarkuzML/swap_face/generate.py b/spaces/MarkuzML/swap_face/generate.py
deleted file mode 100644
index 70530dbfac1fed71f8cad74cedf4bcb0f8733612..0000000000000000000000000000000000000000
--- a/spaces/MarkuzML/swap_face/generate.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import os
-import face_recognition
-import pickle
-
-
-PATH_BACKGROUND = 'images_background'
-PATH_MODEL = 'bin'
-DATA_IMAGE_PICKLE = 'data_images.pkl'
-
-print('Loading data from background images ...')
-filename = os.path.join(os.getcwd(), PATH_MODEL, DATA_IMAGE_PICKLE)
-images_background_encoding = []
-images_background_names = []
-images_background_contents = []
-for filename_image in os.listdir(PATH_BACKGROUND):
- if filename_image.endswith('.gitkeep'):
- continue
- image_path = os.path.join(PATH_BACKGROUND, filename_image)
- image_loaded = face_recognition.load_image_file(image_path)
- face_encoding = face_recognition.face_encodings(image_loaded)[0]
- images_background_encoding.append(face_encoding)
- images_background_names.append(filename_image)
- images_background_contents.append(image_loaded)
-
-data_images = {"names": images_background_names, "encodings": images_background_encoding, 'content':images_background_contents}
-with open(filename, 'wb') as file:
- pickle.dump(data_images, file)
-
-print(f'Generated data from background images on {filename}')
\ No newline at end of file
diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/cluster/__init__.py b/spaces/MashiroSA/sovits-emu-voice-transform/cluster/__init__.py
deleted file mode 100644
index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000
--- a/spaces/MashiroSA/sovits-emu-voice-transform/cluster/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import numpy as np
-import torch
-from sklearn.cluster import KMeans
-
-def get_cluster_model(ckpt_path):
- checkpoint = torch.load(ckpt_path)
- kmeans_dict = {}
- for spk, ckpt in checkpoint.items():
- km = KMeans(ckpt["n_features_in_"])
- km.__dict__["n_features_in_"] = ckpt["n_features_in_"]
- km.__dict__["_n_threads"] = ckpt["_n_threads"]
- km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"]
- kmeans_dict[spk] = km
- return kmeans_dict
-
-def get_cluster_result(model, x, speaker):
- """
- x: np.array [t, 256]
- return cluster class result
- """
- return model[speaker].predict(x)
-
-def get_cluster_center_result(model, x,speaker):
- """x: np.array [t, 256]"""
- predict = model[speaker].predict(x)
- return model[speaker].cluster_centers_[predict]
-
-def get_center(model, x,speaker):
- return model[speaker].cluster_centers_[x]
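A minimal usage sketch of the helpers above — the checkpoint path and speaker key are placeholders, and the checkpoint is assumed to follow the per-speaker layout `get_cluster_model` expects:

```python
# Hypothetical usage of the cluster helpers above; the path and speaker name are placeholders.
import numpy as np

kmeans_dict = get_cluster_model("logs/44k/kmeans_10000.pt")              # assumed checkpoint path
features = np.random.randn(100, 256).astype(np.float32)                  # [t, 256] content features
labels = get_cluster_result(kmeans_dict, features, "speaker0")           # cluster id per frame
centers = get_cluster_center_result(kmeans_dict, features, "speaker0")   # [t, 256] nearest centers
```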
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/drive.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/drive.py
deleted file mode 100644
index 06e8ff606e0d2a4514ec8b7d2c6c436a32efcbf4..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/drive.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'DRIVEDataset'
-data_root = 'data/DRIVE'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (584, 565)
-crop_size = (64, 64)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale,
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type='RepeatDataset',
- times=40000,
- dataset=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/training',
- ann_dir='annotations/training',
- pipeline=train_pipeline)),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='images/validation',
- ann_dir='annotations/validation',
- pipeline=test_pipeline))
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py
deleted file mode 100644
index f9a72592be47b534ce22573775fd5a7e8e86d72d..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/logger/mlflow.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class MlflowLoggerHook(LoggerHook):
-
- def __init__(self,
- exp_name=None,
- tags=None,
- log_model=True,
- interval=10,
- ignore_last=True,
- reset_flag=False,
- by_epoch=True):
- """Class to log metrics and (optionally) a trained model to MLflow.
-
- It requires `MLflow`_ to be installed.
-
- Args:
- exp_name (str, optional): Name of the experiment to be used.
- Default None.
- If not None, set the active experiment.
- If experiment does not exist, an experiment with provided name
- will be created.
- tags (dict of str: str, optional): Tags for the current run.
- Default None.
- If not None, set tags for the current run.
- log_model (bool, optional): Whether to log an MLflow artifact.
- Default True.
- If True, log runner.model as an MLflow artifact
- for the current run.
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging
- by_epoch (bool): Whether EpochBasedRunner is used.
-
- .. _MLflow:
- https://www.mlflow.org/docs/latest/index.html
- """
- super(MlflowLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.import_mlflow()
- self.exp_name = exp_name
- self.tags = tags
- self.log_model = log_model
-
- def import_mlflow(self):
- try:
- import mlflow
- import mlflow.pytorch as mlflow_pytorch
- except ImportError:
- raise ImportError(
- 'Please run "pip install mlflow" to install mlflow')
- self.mlflow = mlflow
- self.mlflow_pytorch = mlflow_pytorch
-
- @master_only
- def before_run(self, runner):
- super(MlflowLoggerHook, self).before_run(runner)
- if self.exp_name is not None:
- self.mlflow.set_experiment(self.exp_name)
- if self.tags is not None:
- self.mlflow.set_tags(self.tags)
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- self.mlflow.log_metrics(tags, step=self.get_iter(runner))
-
- @master_only
- def after_run(self, runner):
- if self.log_model:
- self.mlflow_pytorch.log_model(runner.model, 'models')
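The docstring above documents the hook's options; in an mmcv-style config the hook is normally enabled through `log_config`. A sketch, with a placeholder experiment name and tags:

```python
# Sketch of enabling MlflowLoggerHook in an mmcv-style config; experiment name and tags are placeholders.
log_config = dict(
    interval=10,                      # log every 10 iterations
    hooks=[
        dict(type='TextLoggerHook'),  # keep the default console logger
        dict(
            type='MlflowLoggerHook',
            exp_name='my_experiment',   # assumed experiment name
            tags=dict(run='baseline'),  # assumed tags
            log_model=True,             # upload runner.model as an artifact after the run
        ),
    ])
```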
diff --git a/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/__init__.py b/spaces/MiloSobral/PortiloopDemo/portiloop/src/hardware/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/MohamedAlgebali/VideoQuERI/utils.py b/spaces/MohamedAlgebali/VideoQuERI/utils.py
deleted file mode 100644
index 090e13dfdd9537812383fcfd17c1df0c98b21b69..0000000000000000000000000000000000000000
--- a/spaces/MohamedAlgebali/VideoQuERI/utils.py
+++ /dev/null
@@ -1,235 +0,0 @@
-from youtube_transcript_api import YouTubeTranscriptApi
-import streamlit as st
-from langchain.docstore.document import Document
-from langchain.text_splitter import TokenTextSplitter
-import re
-import time
-import base64
-from whisper_result import *
-
-def postprocess_time_if_transcript_was_already_generated(time):
-    # format a timestamp given in seconds as h:m:s (or m:s / 0:s for short times)
-    if time < 60:
-        return f'0:{int(time)}'
-
-    hour = int(time) // 3600
-    minute = (int(time) % 3600) // 60
-    sec = int(time) % 60
-    if hour == 0:
-        return f'{minute}:{sec}'
-    return f'{hour}:{minute}:{sec}'
-
-def ret_trans(vid):
- # retrieve the available transcripts
- transcript_list = YouTubeTranscriptApi.list_transcripts(vid)
-
-    # iterate over all available transcripts, preferring English or an English translation
-    fallback = None
-    for transcript in transcript_list:
-        if fallback is None:
-            fallback = transcript
-
-        if 'en' in transcript.language_code:
-            return transcript.fetch()
-
-        if transcript.is_translatable and 'en' in [t['language_code'] for t in transcript.translation_languages]:
-            return transcript.translate('en').fetch()
-
-    # no English transcript was found, fall back to the first available one
-    return fallback.fetch() if fallback is not None else None
-
-def get_generated_transcript(video_url):
- video_id = video_url.split('=')[1]
- res = ret_trans(video_id)
-
- transcript = ', '.join([f"{postprocess_time_if_transcript_was_already_generated(t['start'])} {t['text']}" for t in res])
- transcript = [Document(page_content=transcript)]
-
- return transcript
-
-def extract_start_end_time(passage):
- time_pattern = r'\d{1,2}:\d{1,2}(?::\d{1,2})?'
-
- times = re.findall(time_pattern, passage)
- # print(times)
- if len(times) >= 2:
- start_time = times[1]
- end_time = times[-2]
- # print(times)
- return start_time, end_time
- else:
- return None, None
-
-def decode_unicode(text):
- return bytes(text, "utf-8").decode("unicode-escape")
-
-def get_transcript(video_url):
-    try:  # if the transcript was already generated
- transcript = get_generated_transcript(video_url)
- return transcript, 'return_from_generated_transcript'
- except:
- st.info("Looks like the provided video does not have transcription. Plese be patient until transcription is generated.")
- s = time.time()
- transcript = get_whisper_result(video_url)
- if transcript:
- st.info(f"Generating Caption took {round(time.time() - s, 2)} seconds")
- return [Document(page_content=transcript)], 'return_from_whisper'
-
- else:
- return False, ''
-
-# Define your FAQ questions and answers
-def FAQs():
- faq = {
- "What is VideoQuERI?":"It is a versatile and interactive website that utilizes AI capabilities to process videos, answer questions, generate code, solve puzzles, and perform mathematical operations.\
- It depends that the video is described in someone's voice not visually. If the video's description is solely visual, the algorithm will not function effectively.",
-
- "What are the Capabilities of VideoQuERI?
" :
- "
**Video Processing**: Users can input video URLs to your website. The AI can then process these videos to extract information, such as speech recognition for transcriptions.
"
- "
**Question Answering**:Users can ask questions related to the video's content. The website's AI can analyze the video's transcriptions and content to provide relevant answers to users' questions.
"
- "
**Code Generation**: If the video contains step-by-step instructions for coding, AI can extract these instructions and generate code snippets.
"
- "
**Generating Chapters**: You can ask the bot to help you splitting your video to chapters.
"
- "
**Puzzle Solving**: Videos with puzzle verbally instructions can be processed by the AI to understand the rules and mechanics. Users can input puzzle-specific queries, and it can provide solutions or hints.
"
- "
**Memory**: Chatbot has memory to retain and recall information from previous parts of the conversation. But,honestly, it is not that strong.
"
- "
**Information Retrieval** : If you forget when a piece of information was said, you can provide the video and your question.
"
- "
**Educational Content**: Your website can serve as an educational platform by offering explanations, demonstrations, and tutorials on various subjects based on the video content.
"
- "
**Natural Language Understanding**: The AI can understand and analyze the natural language used in both the video's transcriptions and user queries. This allows for more contextually accurate responses.
"
- "
**Interactive UI**: Your website's user interface can incorporate elements like text input fields, and result displays to make the interactions intuitive and engaging.
"
- "
**Scalability**: The AI-driven capabilities can be applied to various types of videos, making your website versatile and adaptable to different content.
"
- ,
-
- "What if the user has already generated transcription (e.g. from platforms like Coursera or Udemy)?":
- "You can copy it and ask ChatGPT or Poe",
-
- "what if Caption generation took a long time?":"There are two propable reasons. First, the video url is not supported. Second, the transcription generation API has too many requuests\
- If the first case, then the video may be streamed to wesite in .ts format , and .ts is not supported .However,if your case is the second case, you can visit the us after a period of time.",
-
- "What if the video is in your local machine?":"You can Upload it to your google drive and then share the link with us.",
-
- "What are supported formats?" :
- "However, most video formats are supported, streaming videos in the .ts format (Transport Stream) are currently not compatible with our system.\
- Transport Stream is a container format often used for streaming live broadcasts and might require specialized processing.\
- If you have a .ts format video, you might consider converting it to a supported format using 'ffmpeg' and upload it to your drive and share the link with us.\
- We appreciate your understanding and are here to assist you with any questions you may have! ",
-
- "How can I get the video link?":
- """You should install this chrome extension, \
- firefox extension.\
- If you are in the webpage that has the desired video click on the extension logo , a menu will be listed , click copy url, finally paste in the video url input field.
- """ ,
-
- "What languages are supported?" :
- "Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French,\
- Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi,\
- Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.",
-
- "Is there a tip to get the most out of VideoQuERI":"Yes, you should ask your question in English and ask the bot to answer in your favourite language(e.g. What is this video about? answer in 'arabic').",
-
- "What is the purpose of the video URL field?":
- "The video URL field allows you to input the URL of the video you want to query.Our system will analyze the video content to provide relevant answers.",
-
- "How do I input a video URL, especially for platforms like Facebook or embedded videos?":
- "To input a video URL, simply copy the URL of the video you want to query and paste it into the video URL field.",
-
- "What is the chunk size slider for?":
- "The chunk size slider lets you adjust the size of video segments that the system analyzes. This can help you get more focused and precise answers based on specific parts of the video.",
-
- "How does the system generate answers from the video?":
- "Our system uses advanced AI technology to analyze the video's audio content. It then generates answers based on the context and content of the video.",
-
- "Is there a limit to the video length I can query?":
- "While there's generally no strict limit on video length, very long videos might take longer to process. It's recommended to choose appropriate chunk sizes for efficient processing and accurate answers.",
-
- "Can I change the chunk size while the video is being processed?":
- "No, you can adjust the chunk size slider after generating the caption then click `Generate the Caption` button again . This allows you to explore different parts of the video and get answers for specific segments.",
-
- "Can I ask questions while the caption is being generated?":
- "No, you can ask questions after the caption generation is completed.",
-
- "How accurate are the answers generated from the video?":
- "The accuracy of answers depends on various factors such as the clarity of the audio, and the specificity of your questions. Generally, the system strives to provide relevant and coherent answers.",
-
- "Can I save or bookmark specific answers from the video?":
- "At the moment, the system doesn't offer direct saving or bookmarking of answers. However, you can take screenshots or notes to keep track of important information.",
-
- "Are there certain types of videos that work better with this feature?":
- "The system is designed to work with a wide range of videos, but videos with clear audio tend to yield better results. Educational, instructional, and well-structured videos are usually more suitable."
-
-
- }
- # with st.expander("FAQs"):
- for i, faq_key in enumerate(faq.keys()):
- # with st.sidebar.expander(faq_key):
- st.write(f"**Q{i+1}. {faq_key}**\n \n**Answer** : {faq[faq_key]}", unsafe_allow_html=True)
- st.write('-'*50)
-
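As mentioned in the supported-formats FAQ above, .ts streams can be converted with ffmpeg before sharing a link. A minimal sketch of that conversion — the filenames are placeholders:

```python
# Hypothetical .ts -> .mp4 remux with ffmpeg, as suggested in the supported-formats FAQ; filenames are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "input.ts", "-c", "copy", "output.mp4"],  # copy the streams without re-encoding
    check=True,
)
```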
-def contact():
- mail = """